Sample records for empirically derived values

  1. Asymptotic Properties of the Sequential Empirical ROC, PPV and NPV Curves Under Case-Control Sampling.

    PubMed

    Koopmeiners, Joseph S; Feng, Ziding

    2011-01-01

    The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
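
    A minimal sketch (Python, assuming NumPy) of the plain empirical estimators that underlie these curves; it is not the authors' sequential formulation or its case-control asymptotics. The function name and the externally supplied prevalence are illustrative assumptions, since prevalence is not estimable from case-control data alone.

    ```python
    import numpy as np

    def empirical_curves(cases, controls, prevalence, fpr_grid=None):
        """Empirical ROC, PPV and NPV curves from case-control biomarker samples."""
        cases, controls = np.asarray(cases, float), np.asarray(controls, float)
        if fpr_grid is None:
            fpr_grid = np.linspace(0.01, 0.99, 99)
        # The threshold achieving false-positive rate t is the (1 - t)-quantile of controls.
        thresholds = np.quantile(controls, 1.0 - fpr_grid)
        # Empirical ROC: true-positive rate among cases at each threshold.
        tpr = np.array([(cases > c).mean() for c in thresholds])
        p = prevalence
        ppv = p * tpr / (p * tpr + (1 - p) * fpr_grid)
        npv = (1 - p) * (1 - fpr_grid) / ((1 - p) * (1 - fpr_grid) + p * (1 - tpr))
        return fpr_grid, tpr, ppv, npv
    ```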

  2. Asymptotic Properties of the Sequential Empirical ROC, PPV and NPV Curves Under Case-Control Sampling

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2013-01-01

    The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves. PMID:24039313

  3. An Empirical Method for deriving RBE values associated with Electrons, Photons and Radionuclides

    DOE PAGES

    Bellamy, Michael B; Puskin, J.; Eckerman, Keith F.; ...

    2015-01-01

    There is substantial evidence to justify using relative biological effectiveness (RBE) values greater than one for low-energy electrons and photons. But, in the field of radiation protection, radiation associated with low linear energy transfer (LET) has been assigned a radiation weighting factor wR of one. This value may be suitable for radiation protection but, for risk considerations, it is important to evaluate the potential elevated biological effectiveness of radiation to improve the quality of risk estimates. RBE values between 2 and 3 for tritium are implied by several experimental measurements. Additionally, elevated RBE values have been found for other similar low-energy radiation sources. In this work, RBE values are derived for electrons based upon the fractional deposition of absorbed dose of energies less than a few keV. Using this empirical method, RBE values were also derived for monoenergetic photons and 1070 radionuclides from ICRP Publication 107 for which photons and electrons are the primary emissions.

  4. Empirically Derived Combinations of Tools and Clinical Cutoffs: An Illustrative Case with a Sample of Culturally/Linguistically Diverse Children

    ERIC Educational Resources Information Center

    Oetting, Janna B.; Cleveland, Lesli H.; Cope, Robert F., III

    2008-01-01

    Purpose: Using a sample of culturally/linguistically diverse children, we present data to illustrate the value of empirically derived combinations of tools and cutoffs for determining eligibility in child language impairment. Method: Data were from 95 4- and 6-year-olds (40 African American, 55 White; 18 with language impairment, 77 without) who…

  5. An empirical method for deriving RBE values associated with electrons, photons and radionuclides.

    PubMed

    Bellamy, M; Puskin, J; Hertel, N; Eckerman, K

    2015-12-01

    There is substantial evidence to justify using relative biological effectiveness (RBE) values of >1 for low-energy electrons and photons. But, in the field of radiation protection, radiation associated with low linear energy transfer has been assigned a radiation weighting factor wR of 1. This value may be suitable for radiation protection but, for risk considerations, it is important to evaluate the potential elevated biological effectiveness of radiation to improve the quality of risk estimates. RBE values between 2 and 3 for tritium are implied by several experimental measurements. Additionally, elevated RBE values have been found for other similar low-energy radiation sources. In this work, RBE values are derived for electrons based upon the fractional deposition of absorbed dose of energies less than a few kiloelectron volts. Using this empirical method, RBE values were also derived for monoenergetic photons and 1070 radionuclides from ICRP Publication 107 for which photons and electrons are the primary emissions. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  6. The methane absorption spectrum near 1.73 μm (5695-5850 cm-1): Empirical line lists at 80 K and 296 K and rovibrational assignments

    NASA Astrophysics Data System (ADS)

    Ghysels, M.; Mondelain, D.; Kassi, S.; Nikitin, A. V.; Rey, M.; Campargue, A.

    2018-07-01

    The methane absorption spectrum is studied at 297 K and 80 K in the center of the Tetradecad between 5695 and 5850 cm-1. The spectra are recorded by differential absorption spectroscopy (DAS) with a noise equivalent absorption of about αmin ≈ 1.5 × 10-7 cm-1. Two empirical line lists are constructed including about 4000 and 2300 lines at 297 K and 80 K, respectively. Lines due to 13CH4 present in natural abundance were identified by comparison with a spectrum of pure 13CH4 recorded under the same temperature conditions. About 1700 empirical values of the lower state energy level, Eemp, were derived from the ratios of the line intensities at 80 K and 296 K. They provide an accurate temperature dependence for most of the absorption in the region (93% and 82% at 80 K and 296 K, respectively). The quality of the derived empirical values is illustrated by the clear propensity of the corresponding lower state rotational quantum number, Jemp, to be close to integer values. Using an effective Hamiltonian model derived from a previously published ab initio potential energy surface, about 2060 lines are rovibrationally assigned, adding about 1660 new assignments to those provided in the HITRAN database for 12CH4 in the region.
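
    A hedged sketch of the two-temperature method described above: under the common approximation that stimulated emission and the line-position factor can be neglected, a line's intensity scales as exp(-C2*E''/T)/Q(T), so E'' follows from the 80 K / 296 K intensity ratio and the partition-function values. Function and argument names are illustrative.

    ```python
    import math

    C2 = 1.4387769  # second radiation constant hc/k_B, in cm*K

    def lower_state_energy(s_cold, s_warm, q_cold, q_warm,
                           t_cold=80.0, t_warm=296.0):
        """Empirical lower-state energy E'' (cm^-1) of one line from its
        measured intensity ratio at two temperatures, assuming
            S(T) ~ exp(-C2 * E'' / T) / Q(T)
        (stimulated emission and the line-position factor neglected)."""
        lhs = math.log((s_cold / s_warm) * (q_cold / q_warm))
        return -lhs / (C2 * (1.0 / t_cold - 1.0 / t_warm))
    ```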

  7. An empirical, graphical, and analytical study of the relationship between vegetation indices. [derived from LANDSAT data

    NASA Technical Reports Server (NTRS)

    Lautenschlager, L.; Perry, C. R., Jr. (Principal Investigator)

    1981-01-01

    The development of formulae for the reduction of multispectral scanner measurements to a single value (vegetation index) for predicting and assessing vegetative characteristics is addressed. The origin, motivation, and derivation of some four dozen vegetation indices are summarized. Empirical, graphical, and analytical techniques are used to investigate the relationships among the various indices. It is concluded that many vegetation indices are very similar, some being simple algebraic transforms of others.

  8. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a Geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values, by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.

  9. Dairy farmers' use and non-use values in animal welfare: Determining the empirical content and structure with anchored best-worst scaling.

    PubMed

    Hansson, H; Lagerkvist, C J

    2016-01-01

    In this study, we sought to identify empirically the types of use and non-use values that motivate dairy farmers in their work relating to animal welfare of dairy cows. We also sought to identify how they prioritize between these use and non-use values. Use values are derived from productivity considerations; non-use values are derived from the wellbeing of the animals, independent of the present or future use the farmer may make of the animal. In particular, we examined the empirical content and structure of the economic value dairy farmers associate with animal welfare of dairy cows. Based on a best-worst scaling approach and data from 123 Swedish dairy farmers, we suggest that the economic value those farmers associate with animal welfare of dairy cows covers aspects of both use and non-use type, with non-use values appearing more important. Using principal component factor analysis, we were able to check unidimensionality of the economic value construct. These findings are useful for understanding why dairy farmers may be interested in considering dairy cow welfare. Such understanding is essential for improving agricultural policy and advice aimed at encouraging dairy farmers to improve animal welfare; communicating to consumers the values under which dairy products are produced; and providing a basis for more realistic assumptions when developing economic models about dairy farmers' behavior. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Modeling of Pickup Ion Distributions in the Halley Cometo-Sheath: Empirical Rates of Ionization, Diffusion, Loss and Creation of Fast Neutral Atoms

    NASA Technical Reports Server (NTRS)

    Huddleston, D.; Neugebauer, M.; Goldstein, B.

    1994-01-01

    The shape of the velocity distribution of water-group ions observed by the Giotto ion mass spectrometer on its approach to comet Halley is modeled to derive empirical values for the rates of ionization, energy diffusion, and loss in the mid-cometosheath.

  11. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    NASA Astrophysics Data System (ADS)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-01

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  12. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE PAGES

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-21

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  13. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  14. An Attempt to Derive the epsilon Equation from a Two-Point Closure

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Cheng, Y.; Howard, A. M.

    2010-01-01

    The goal of this paper is to derive the equation for the turbulence dissipation rate ε for a shear-driven flow. In 1961, Davydov used a one-point closure model to derive the ε equation from first principles, but the final result contained undetermined terms and thus lacked predictive power. In 1987 and again in 2001, attempts were made to derive the ε equation from first principles using a two-point closure, but these methods relied on a phenomenological assumption. The standard practice has thus been to employ a heuristic form of the equation that contains three empirical ingredients: two constants, c1ε and c2ε, and a diffusion term Dε. In this work, a two-point closure is employed, yielding the following results: 1) the empirical constants are replaced by c1, c2, which are now functions of K and ε; 2) c1 and c2 are not independent, because a general relation between the two, valid for any K and ε, is derived; 3) c1 and c2 become constant with values close to the empirical values c1ε, c2ε (i.e., homogeneous flows); and 4) the empirical form of the diffusion term Dε is no longer needed, because it is replaced by the K-ε dependence of c1, c2, which plays the role of the diffusion, together with the diffusion of the turbulent kinetic energy DK, which now enters the new equation (i.e., inhomogeneous flows). Thus, the three empirical ingredients c1ε, c2ε, Dε are replaced by a single function c1(K, ε) or c2(K, ε), plus a DK term. Three tests of the new equation for ε are presented: one concerning channel flow and two concerning the shear-driven planetary boundary layer (PBL).
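
    For reference, the heuristic dissipation-rate equation referred to above has the conventional K-epsilon form (a standard textbook sketch, with P denoting shear production; this is not the paper's derived equation):

    ```latex
    \frac{\partial \varepsilon}{\partial t}
      = c_{1\varepsilon}\,\frac{\varepsilon}{K}\,P
      \;-\; c_{2\varepsilon}\,\frac{\varepsilon^{2}}{K}
      \;+\; D_{\varepsilon}
    ```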

  15. Against the empirical viability of the Deutsch-Wallace-Everett approach to quantum mechanics

    NASA Astrophysics Data System (ADS)

    Dawid, Richard; Thébault, Karim P. Y.

    2014-08-01

    The subjective Everettian approach to quantum mechanics presented by Deutsch and Wallace fails to constitute an empirically viable theory of quantum phenomena. The decision theoretic implementation of the Born rule realized in this approach provides no basis for rejecting Everettian quantum mechanics in the face of empirical data that contradicts the Born rule. The approach of Greaves and Myrvold, which provides a subjective implementation of the Born rule as well but derives it from empirical data rather than decision theoretic arguments, avoids the problem faced by Deutsch and Wallace and is empirically viable. However, there is good reason to cast doubt on its scientific value.

  16. Studying the Value of Library and Information Services: A Taxonomy of Users Assessments.

    ERIC Educational Resources Information Center

    Kantor, Paul B.; Saracevic, Tefko

    1995-01-01

    Describes the development of a taxonomy of the value of library services based on users' assessments from five large research libraries. Highlights include empirical and derived taxonomy, replicability of the study, reasons for using the library, how library services are related to time and money, and a theory of value. (LRW)

  17. Study of galaxies in the Lynx-Cancer void - VII. New oxygen abundances

    NASA Astrophysics Data System (ADS)

    Pustilnik, S. A.; Perepelitsyna, Y. A.; Kniazev, A. Y.

    2016-11-01

    We present new or improved oxygen abundances (O/H) for the updated galaxy sample of the nearby Lynx-Cancer void. They are obtained via SAO 6-m telescope spectroscopy (25 objects) or derived from Sloan Digital Sky Survey spectra (14 galaxies, for seven of which O/H values were previously unknown). For eight galaxies with the detected [O III] λ4363 line, O/H values are derived via the direct (Te) method. For the remaining objects, O/H was estimated via semi-empirical and empirical methods. For all the accumulated O/H data for 81 galaxies of this void (40 of them derived via the Te method), their relation `O/H versus MB' is compared with that for similar late-type galaxies from denser environments (the Local Volume `reference sample'). We confirm our previous conclusion derived for a subsample of 48 objects: void galaxies show systematically reduced O/H for the same luminosity with respect to the reference sample, on average by 0.2 dex, or by a factor of ˜1.6. Moreover, we confirm a fraction of ˜20 per cent of strong outliers, with O/H two to four times lower than the typical values for the `reference' sample. The new data are consistent with the conclusion of slower evolution of the main void galaxy population. We obtained the Hα velocity for the faint optical counterpart of the most gas-rich (M(H I)/LB = 25) void object J0723+3624, confirming its connection with the respective H I blob. For the similar extremely gas-rich dwarf J0706+3020, we give a tentative O/H ˜(O/H)⊙/45. In Appendix A, we present the results of calibrating the semi-empirical method of Izotov & Thuan and the empirical calibrators of Pilyugin & Thuan and Yin et al. on a sample of ˜150 galaxies from the literature with O/H measured by the Te method.

  18. Statistical mechanics of neocortical interactions. Derivation of short-term-memory capacity

    NASA Astrophysics Data System (ADS)

    Ingber, Lester

    1984-06-01

    A theory developed by the author to describe macroscopic neocortical interactions demonstrates that empirical values of chemical and electrical parameters of synaptic interactions establish several minima of the path-integral Lagrangian as a function of excitatory and inhibitory columnar firings. The number of possible minima, their time scales of hysteresis and probable reverberations, and their nearest-neighbor columnar interactions are all consistent with well-established empirical rules of human short-term memory. Thus, aspects of conscious experience are derived from neuronal firing patterns, using modern methods of nonlinear nonequilibrium statistical mechanics to develop realistic explicit synaptic interactions.

  19. Identities and Social Justice Values of Prospective Teachers of Color

    ERIC Educational Resources Information Center

    Agosto, Vonzell

    2009-01-01

    This empirical study of social justice values among three prospective teachers who identify as being "of color" emphasizes the constellations of social justice sensibilities (perceptions of injustice, concern for the situations of others, socio-political and cultural consciousness, sensitivity regarding the conditions of others) they derived from…

  20. Evaluating the generalizability of GEP models for estimating reference evapotranspiration in distant humid and arid locations

    NASA Astrophysics Data System (ADS)

    Kiafar, Hamed; Babazadeh, Hosssien; Marti, Pau; Kisi, Ozgur; Landeras, Gorka; Karimi, Sepideh; Shiri, Jalal

    2017-10-01

    Evapotranspiration estimation is of crucial importance in arid and hyper-arid regions, which suffer from water shortage, increasing dryness and heat. A modeling study is reported here on cross-station assessment between hyper-arid and humid conditions. The derived equations estimate ET0 values based on temperature-, radiation-, and mass transfer-based configurations. Using data from two meteorological stations in a hyper-arid region of Iran and two meteorological stations in a humid region of Spain, different local and cross-station approaches are applied for developing and validating the derived equations. The comparison of the gene expression programming (GEP)-based derived equations with the corresponding empirical and semi-empirical ET0 estimation equations reveals the superiority of the new formulas over the corresponding empirical equations. Therefore, the derived models can be successfully applied in these hyper-arid and humid regions as well as in similar climatic contexts, especially in data-scarce situations. The results also show that, when relying on proper input configurations, cross-station application might be a promising alternative to locally trained models for stations with data scarcity.
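
    As one concrete instance of the temperature-based empirical class mentioned above (the abstract does not name specific equations, so this choice is illustrative), the Hargreaves-Samani equation estimates ET0 from air temperatures and extraterrestrial radiation:

    ```python
    def et0_hargreaves(t_mean, t_max, t_min, ra):
        """Hargreaves-Samani reference evapotranspiration (mm/day).

        Temperatures in degrees Celsius; ra is extraterrestrial radiation
        expressed in mm/day of equivalent evaporation."""
        return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5
    ```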

  1. Semi-empirical estimation of organic compound fugacity ratios at environmentally relevant system temperatures.

    PubMed

    van Noort, Paul C M

    2009-06-01

    Fugacity ratios of organic compounds are used to calculate (subcooled) liquid properties, such as solubility or vapour pressure, from solid properties and vice versa. They can be calculated from the entropy of fusion, the melting temperature, and heat capacity data for the solid and the liquid. For many organic compounds, values for the fusion entropy are lacking. Heat capacity data are even scarcer. In the present study, semi-empirical compound class specific equations were derived to estimate fugacity ratios from molecular weight and melting temperature for polycyclic aromatic hydrocarbons and polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans. These equations estimate fugacity ratios with an average standard error of about 0.05 log units. In addition, for compounds with known fusion entropy values, a general semi-empirical correction equation based on molecular weight and melting temperature was derived for estimation of the contribution of heat capacity differences to the fugacity ratio. This equation estimates the heat capacity contribution correction factor with an average standard error of 0.02 log units for polycyclic aromatic hydrocarbons, polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans.
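
    A minimal sketch of the underlying thermodynamics: neglecting the heat-capacity terms, the fugacity ratio follows from the fusion entropy and melting temperature alone. The paper's contribution is to estimate these inputs (and the heat-capacity correction) from molecular weight and melting temperature; the function below shows only the base relation, with Walden's rule as an assumed default in the usage example.

    ```python
    import math

    R = 8.314  # gas constant, J mol^-1 K^-1

    def fugacity_ratio(delta_s_fus, t_melt, t=298.15):
        """Solid/liquid fugacity ratio F, heat-capacity terms neglected:
        ln F = -(dS_fus / R) * (Tm / T - 1)."""
        return math.exp(-(delta_s_fus / R) * (t_melt / t - 1.0))

    # With Walden's rule dS_fus ~ 56.5 J/(mol K) and Tm = 400 K:
    # fugacity_ratio(56.5, 400.0) ~ 0.10
    ```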

  2. The use of artificial intelligence technology to predict lymph node spread in men with clinically localized prostate carcinoma.

    PubMed

    Crawford, E D; Batuello, J T; Snow, P; Gamito, E J; McLeod, D G; Partin, A W; Stone, N; Montie, J; Stock, R; Lynch, J; Brandt, J

    2000-05-01

    The current study assesses artificial intelligence methods to identify prostate carcinoma patients at low risk for lymph node spread. If patients can be assigned accurately to a low-risk group, unnecessary lymph node dissections can be avoided, thereby reducing morbidity and costs. A rule-derivation technology for simple decision-tree analysis was trained and validated using patient data from a large database (4,133 patients) to derive low-risk cutoff values for Gleason sum and prostate specific antigen (PSA) level. An empiric analysis was used to derive a low-risk cutoff value for clinical TNM stage. These cutoff values then were applied to 2 additional, smaller databases (227 and 330 patients, respectively) from separate institutions. The decision-tree protocol derived cutoff values of ≤6 for Gleason sum and ≤10.6 ng/mL for PSA. The empiric analysis yielded a clinical TNM stage low-risk cutoff value of ≤T2a. When these cutoff values were applied to the larger database, 44% of patients were classified as being at low risk for lymph node metastases (0.8% false-negative rate). When the same cutoff values were applied to the smaller databases, between 11% and 43% of patients were classified as low risk, with a false-negative rate of between 0.0% and 0.7%. The results of the current study indicate that a population of prostate carcinoma patients at low risk for lymph node metastases can be identified accurately using a simple decision algorithm that considers preoperative PSA, Gleason sum, and clinical TNM stage. The risk of lymph node metastases in these patients is ≤1%; therefore, pelvic lymph node dissection may be avoided safely. The implications of these findings in surgical and nonsurgical treatment are significant.
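
    The reported cutoffs translate directly into a screening rule. A sketch in Python; the stage ordering below is a simplifying assumption for illustration:

    ```python
    def low_risk_lymph_node(psa_ng_ml, gleason_sum, clinical_stage):
        """Low-risk rule using the cutoffs reported above:
        PSA <= 10.6 ng/mL, Gleason sum <= 6, clinical stage <= T2a."""
        stage_order = ["T1a", "T1b", "T1c", "T2a", "T2b", "T2c", "T3a", "T3b", "T4"]
        return (psa_ng_ml <= 10.6
                and gleason_sum <= 6
                and stage_order.index(clinical_stage) <= stage_order.index("T2a"))

    # low_risk_lymph_node(8.4, 6, "T2a") -> True
    ```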

  3. Deriving Criteria-supporting Benchmark Values from Empirical Response Relationships: Comparison of Statistical Techniques and Effect of Log-transforming the Nutrient Variable

    EPA Science Inventory

    In analyses supporting the development of numeric nutrient criteria, multiple statistical techniques can be used to extract critical values from stressor response relationships. However there is little guidance for choosing among techniques, and the extent to which log-transfor...

  4. Empirical mass-loss rates for 25 O and early B stars, derived from Copernicus observations

    NASA Technical Reports Server (NTRS)

    Gathier, R.; Lamers, H. J. G. L. M.; Snow, T. P.

    1981-01-01

    Ultraviolet line profiles are fitted with theoretical line profiles in the cases of 25 stars covering a spectral type range from O4 to B1, including all luminosity classes. Ion column densities are compared for the determination of wind ionization, and it is found that the O VI/N V ratio is dependent on the mean density of the wind and not on the effective temperature, while the Si IV/N V ratio is temperature-dependent. The column densities are used to derive a mass-loss rate parameter that is empirically correlated against the mass-loss rate by means of standard stars with well-determined rates from IR or radio data. The empirical mass-loss rates obtained are compared with those derived by others and found to vary by as much as a factor of 10, which is shown to be due to uncertainties or errors in the ionization fractions of models used for wind ionization balance prediction.

  5. Maximum Entropy for the International Division of Labor.

    PubMed

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution over different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of their export value on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares across different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
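
    A sketch of the constrained maximum-entropy calculation described above: shares take the Boltzmann form s_i ∝ exp(-beta * c_i), and the single parameter beta is tuned until the expected product complexity matches a target. Names and the bisection details are illustrative, not the authors' code.

    ```python
    import numpy as np

    def maxent_export_shares(complexity, mean_complexity, lo=-100.0, hi=100.0):
        """Maximum-entropy shares under a fixed expected product complexity:
        s_i proportional to exp(-beta * c_i). The expected complexity is
        monotone decreasing in beta, so beta is found by bisection."""
        c = np.asarray(complexity, dtype=float)

        def shares(beta):
            w = np.exp(-beta * (c - c.mean()))  # centered for numerical stability
            return w / w.sum()

        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if shares(mid) @ c > mean_complexity:
                lo = mid  # expected complexity still too high: raise beta
            else:
                hi = mid
        return shares(0.5 * (lo + hi))
    ```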

  6. Maximum Entropy for the International Division of Labor

    PubMed Central

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution over different products substantiated by international trade flows can be regarded as one country’s strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of their export value on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares across different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product’s complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country’s strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter. PMID:26172052

  7. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.

    PubMed

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
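
    A small sketch of class 2 from the taxonomy above (a finite mixture of independent multivariate Poisson distributions), assuming NumPy; names are illustrative:

    ```python
    import numpy as np

    def sample_poisson_mixture(n, weights, rates, rng=None):
        """Draw n vectors from a finite mixture of independent multivariate
        Poisson distributions: pick a component, then draw each coordinate
        independently from Poisson(rate). Marginalizing over components
        induces dependence between coordinates."""
        rng = np.random.default_rng(rng)
        weights = np.asarray(weights, dtype=float)
        rates = np.asarray(rates, dtype=float)    # shape (n_components, dim)
        comps = rng.choice(len(weights), size=n, p=weights / weights.sum())
        return rng.poisson(rates[comps])          # shape (n, dim)

    # counts = sample_poisson_mixture(1000, [0.5, 0.5], [[1, 1], [10, 10]])
    # np.corrcoef(counts.T) shows the positive dependence the mixture induces.
    ```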

  8. Increasing Chemical Space Coverage by Combining Empirical and Computational Fragment Screens

    PubMed Central

    2015-01-01

    Most libraries for fragment-based drug discovery are restricted to 1,000–10,000 compounds, but over 500,000 fragments are commercially available and potentially accessible by virtual screening. Whether this larger set would increase chemotype coverage, and whether a computational screen can pragmatically prioritize them, is debated. To investigate this question, a 1281-fragment library was screened by nuclear magnetic resonance (NMR) against AmpC β-lactamase, and hits were confirmed by surface plasmon resonance (SPR). Nine hits with novel chemotypes were confirmed biochemically with KI values from 0.2 to low mM. We also computationally docked 290,000 purchasable fragments with chemotypes unrepresented in the empirical library, finding 10 that had KI values from 0.03 to low mM. Though less novel than those discovered by NMR, the docking-derived fragments filled chemotype holes from the empirical library. Crystal structures of nine of the fragments in complex with AmpC β-lactamase revealed new binding sites and explained the relatively high affinity of the docking-derived fragments. The existence of chemotype holes is likely a general feature of fragment libraries, as calculation suggests that to represent the fragment substructures of even known biogenic molecules would demand a library of minimally over 32,000 fragments. Combining computational and empirical fragment screens enables the discovery of unexpected chemotypes, here by the NMR screen, while capturing chemotypes missing from the empirical library and tailored to the target, with little extra cost in resources. PMID:24807704

  9. The First Empirical Determination of the Fe10+ and Fe13+ Freeze-in Distances in the Solar Corona

    NASA Astrophysics Data System (ADS)

    Boe, Benjamin; Habbal, Shadia; Druckmüller, Miloslav; Landi, Enrico; Kourkchi, Ehsan; Ding, Adalbert; Starha, Pavel; Hutton, Joseph

    2018-06-01

    Heavy ions are markers of the physical processes responsible for the density and temperature distribution throughout the fine-scale magnetic structures that define the shape of the solar corona. One of their properties, whose empirical determination has remained elusive, is the “freeze-in” distance (Rf) where they reach fixed ionization states that are adhered to during their expansion with the solar wind. We present the first empirical inference of Rf for Fe10+ and Fe13+ derived from multi-wavelength imaging observations of the corresponding Fe XI (Fe10+) 789.2 nm and Fe XIV (Fe13+) 530.3 nm emission acquired during the 2015 March 20 total solar eclipse. We find that the two ions freeze in at different heliocentric distances. In polar coronal holes (CHs) Rf is around 1.45 R⊙ for Fe10+ and below 1.25 R⊙ for Fe13+. Along open field lines in streamer regions, Rf ranges from 1.4 to 2 R⊙ for Fe10+ and from 1.5 to 2.2 R⊙ for Fe13+. These first empirical Rf values: (1) reflect the differing plasma parameters between CHs and streamers and structures within them, including prominences and coronal mass ejections; (2) are well below the currently quoted values derived from empirical model studies; and (3) place doubt on the reliability of plasma diagnostics based on the assumption of ionization equilibrium beyond 1.2 R⊙.

  10. Irrigation water demand: A meta-analysis of price elasticities

    NASA Astrophysics Data System (ADS)

    Scheierling, Susanne M.; Loomis, John B.; Young, Robert A.

    2006-01-01

    Metaregression models are estimated to investigate sources of variation in empirical estimates of the price elasticity of irrigation water demand. Elasticity estimates are drawn from 24 studies reported in the United States since 1963, including mathematical programming, field experiments, and econometric studies. The mean price elasticity is 0.48. Long-run elasticities, those that are most useful for policy purposes, are likely larger than the mean estimate. Empirical results suggest that estimates may be more elastic if they are derived from mathematical programming or econometric studies and calculated at a higher irrigation water price. Less elastic estimates are found to be derived from models based on field experiments and in the presence of high-valued crops.

  11. PREDICTING ESTUARINE SEDIMENT METAL CONCENTRATIONS AND INFERRED ECOLOGICAL CONDITIONS: AN INFORMATION THEORETIC APPROACH

    EPA Science Inventory

    Empirically derived values associating sediment metal concentrations with degraded ecological conditions provide important information to assess estuarine condition. However, resources limit the number, magnitude, and frequency of monitoring programs to gather these data. As su...

  12. Holding-based network of nations based on listed energy companies: An empirical study on two-mode affiliation network of two sets of actors

    NASA Astrophysics Data System (ADS)

    Li, Huajiao; Fang, Wei; An, Haizhong; Gao, Xiangyun; Yan, Lili

    2016-05-01

    Economic networks in the real world are not homogeneous; therefore, it is important to study economic networks with heterogeneous nodes and edges to simulate a real network more precisely. In this paper, we present an empirical study of the one-mode derivative holding-based network constructed from the two-mode affiliation network of two sets of actors, using data on worldwide listed energy companies and their shareholders. First, we identify the primitive relationship in the two-mode affiliation network of the two sets of actors. Then, we present the method used to construct the derivative network based on the shareholding relationship between the two sets of actors and the affiliation relationship between actors and events. After constructing the derivative network, we analyze different topological features at the node, edge and whole-network levels and interpret the values of these topological features in light of the empirical data. This study is helpful for extending the use of complex networks to heterogeneous economic networks. For empirical research on the worldwide listed energy stock market, this study is useful for discovering the inner relationships between nations and regions from a new perspective.
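
    A minimal sketch of the one-mode projection step described above, simplified to company-level nodes (the paper aggregates further, to nations); two nodes are linked when they share at least one holder, with the edge weight counting common holders:

    ```python
    from collections import defaultdict
    from itertools import combinations

    def one_mode_projection(affiliations):
        """Project a two-mode (company, shareholder) affiliation network onto
        the one-mode company network: edge weight = number of common holders."""
        companies_of = defaultdict(set)
        for company, holder in affiliations:
            companies_of[holder].add(company)
        weights = defaultdict(int)
        for held in companies_of.values():
            for a, b in combinations(sorted(held), 2):
                weights[(a, b)] += 1
        return dict(weights)

    # one_mode_projection([("A", "h1"), ("B", "h1"), ("B", "h2"), ("C", "h2")])
    # -> {("A", "B"): 1, ("B", "C"): 1}
    ```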

  13. ESTIMATION OF CHEMICAL TOXICITY TO WILDLIFE SPECIES USING INTERSPECIES CORRELATION MODELS

    EPA Science Inventory

    Ecological risks to wildlife are typically assessed using toxicity data for relataively few species and with limited understanding of differences in species sensitivity to contaminants. Empirical interspecies correlation models were derived from LD50 values for 49 wildlife speci...

  14. Refractive Index of Alkali Halides and Its Wavelength and Temperature Derivatives.

    DTIC Science & Technology

    1975-05-01

    [Front-matter residue: figure/table-list entries on dispersion equations proposed for CsBr and recommended refractive-index values.] … discovery of empirical relationships which enable us to calculate dn/dT data at 293 K for some materials on which no data are available. In the data … or in handbooks. In the present work, however, this problem was solved by our empirical discoveries, by which the unknown parameters of Eq. (19) for …

  15. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

    PubMed Central

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398

  16. Evaluation of backscatter dose from internal lead shielding in clinical electron beams using EGSnrc Monte Carlo simulations.

    PubMed

    De Vries, Rowen J; Marsh, Steven

    2015-11-08

    Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2-14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower energies region. A new equation was derived which enables estimation of electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC simulated electron backscatter at the lead interface and upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These results show a mean value of 0.997 ± 0.022 (1σ) relative to the predicted values of electron backscatter. The new empirical equation presented can accurately estimate electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs.

  17. Evaluation of backscatter dose from internal lead shielding in clinical electron beams using EGSnrc Monte Carlo simulations

    PubMed Central

    Marsh, Steven

    2015-01-01

    Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2–14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower energies region. A new equation was derived which enables estimation of electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC simulated electron backscatter at the lead interface and upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These results show a mean value of 0.997 ± 0.022 (1σ) relative to the predicted values of electron backscatter. The new empirical equation presented can accurately estimate electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs. PACS numbers: 87.53.Bn, 87.55.K-, 87.56.bd PMID:26699566

  18. Permeability Estimation Directly From Logging-While-Drilling Induced Polarization Data

    NASA Astrophysics Data System (ADS)

    Fiandaca, G.; Maurya, P. K.; Balbarini, N.; Hördt, A.; Christiansen, A. V.; Foged, N.; Bjerg, P. L.; Auken, E.

    2018-04-01

    In this study, we present the prediction of permeability from time domain spectral induced polarization (IP) data, measured in boreholes on undisturbed formations using the El-log logging-while-drilling technique. We collected El-log data and hydraulic properties on unconsolidated Quaternary and Miocene deposits in boreholes at three locations at a field site in Denmark, characterized by different electrical water conductivity and chemistry. The high vertical resolution of the El-log technique matches the lithological variability at the site, minimizing ambiguity in the interpretation originating from resolution issues. The permeability values were computed from IP data using a laboratory-derived empirical relationship presented in a recent study for saturated unconsolidated sediments, without any further calibration. A very good correlation, within 1 order of magnitude, was found between the IP-derived permeability estimates and those derived using grain size analyses and slug tests, with similar depth trends and permeability contrasts. Furthermore, the effect of water conductivity on the IP-derived permeability estimations was found negligible in comparison to the permeability uncertainties estimated from the inversion and the laboratory-derived empirical relationship.

  19. Comparison of modelled and empirical atmospheric propagation data

    NASA Technical Reports Server (NTRS)

    Schott, J. R.; Biegel, J. D.

    1983-01-01

    The radiometric integrity of TM thermal infrared channel data was evaluated and monitored to develop improved radiometric preprocessing calibration techniques for removal of atmospheric effects. Modelled atmospheric transmittance and path radiance were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates, as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code, which was modified to output atmospheric path radiance in addition to transmittance. The aircraft data were calibrated and used to generate analogous measurements. These data indicate that there is a tendency for the LOWTRAN model to underestimate atmospheric path radiance and transmittance as compared to empirical data. A plot of transmittance versus altitude for both LOWTRAN and empirical data is presented.

  20. Development and system identification of a light unmanned aircraft for flying qualities research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, M.E.; Andrisani, D. II

    This paper describes the design, construction, flight testing and system identification of a lightweight remotely piloted aircraft and its use in studying flying qualities in the longitudinal axis. The short period approximation to the longitudinal dynamics of the aircraft was used. Parameters in this model were determined a priori using various empirical estimators. These parameters were then estimated from flight data using a maximum likelihood parameter identification method. A comparison of the parameter values revealed that the stability derivatives obtained from the empirical estimators were reasonably close to the flight test results. However, the control derivatives determined by the empirical estimators were too large by a factor of two. The aircraft was also flown to determine how the longitudinal flying qualities of lightweight remotely piloted aircraft compare to those of full-size manned aircraft. It was shown that lightweight remotely piloted aircraft require much faster short period dynamics to achieve level I flying qualities in an up-and-away flight task.

  1. Strategic Renewal and Development Implications of Organisational Effectiveness Research in Higher Education in Australia.

    ERIC Educational Resources Information Center

    Lysons, Art

    1999-01-01

    Suggests that organizational effectiveness research has made considerable progress in empirically deriving a systematic framework of theoretical and practical utility in Australian higher education. Offers a taxonomy based on the competing values framework and discusses use of inter-organizational comparisons and profiles for diagnosis in…

  2. Strength of single-pole utility structures

    Treesearch

    Ronald W. Wolfe

    2006-01-01

    This section presents three basic methods for deriving and documenting Rn as an LTL value along with the coefficient of variation (COVR) for single-pole structures. These include the following: 1. An empirical analysis based primarily on tests of full-sized poles. 2. A theoretical analysis of mechanics-based models used in...

  3. Clinical decision support alert malfunctions: analysis and empirically derived taxonomy.

    PubMed

    Wright, Adam; Ai, Angela; Ash, Joan; Wiesen, Jane F; Hickman, Thu-Trang T; Aaron, Skye; McEvoy, Dustin; Borkowsky, Shane; Dissanayake, Pavithra I; Embi, Peter; Galanter, William; Harper, Jeremy; Kassakian, Steve Z; Ramoni, Rachel; Schreiber, Richard; Sirajuddin, Anwar; Bates, David W; Sittig, Dean F

    2018-05-01

    To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions. We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions. We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common. Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS. CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent CDS malfunctions before they occur or detect and resolve them expediently.

  4. Quantifying tolerance indicator values for common stream fish species of the United States

    USGS Publications Warehouse

    Meador, M.R.; Carlisle, D.M.

    2007-01-01

    The classification of fish species tolerance to environmental disturbance is often used as a means to assess ecosystem conditions. Its use, however, may be problematic because the approach to tolerance classification is based on subjective judgment. We analyzed fish and physicochemical data from 773 stream sites collected as part of the U.S. Geological Survey's National Water-Quality Assessment Program to calculate tolerance indicator values for 10 physicochemical variables using weighted averaging. Tolerance indicator values (TIVs) for ammonia, chloride, dissolved oxygen, nitrite plus nitrate, pH, phosphorus, specific conductance, sulfate, suspended sediment, and water temperature were calculated for 105 common fish species of the United States. Tolerance indicator values for specific conductance and sulfate were correlated (rho = 0.87), and thus, fish species may be co-tolerant to these water-quality variables. We integrated TIVs for each species into an overall tolerance classification for comparisons with judgment-based tolerance classifications. Principal components analysis indicated that the distinction between tolerant and intolerant classifications was determined largely by tolerance to suspended sediment, specific conductance, chloride, and total phosphorus. Factors such as water temperature, dissolved oxygen, and pH may not be as important in distinguishing between tolerant and intolerant classifications, but may help to segregate species classified as moderate. Empirically derived tolerance classifications were 58.8% in agreement with judgment-derived tolerance classifications. Canonical discriminant analysis revealed that few TIVs, primarily chloride, could discriminate among judgment-derived tolerance classifications of tolerant, moderate, and intolerant. To our knowledge, this is the first empirically based understanding of fish species tolerance for stream fishes in the United States.
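
    Weighted averaging as used here reduces to a one-liner: a species' tolerance indicator value for a variable is the abundance-weighted mean of that variable over the sites where the species occurs. A sketch, assuming NumPy; the numbers in the usage comment are invented:

    ```python
    import numpy as np

    def tolerance_indicator_value(abundance, variable):
        """Abundance-weighted mean of a physicochemical variable across the
        sites where a species was collected (one TIV per species/variable)."""
        a = np.asarray(abundance, dtype=float)
        x = np.asarray(variable, dtype=float)
        return float((a * x).sum() / a.sum())

    # e.g. specific conductance (uS/cm) at 4 sites, with species counts:
    # tolerance_indicator_value([12, 3, 0, 7], [450, 210, 900, 610]) -> ~468
    ```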

  5. An approach to derive some simple empirical equations to calibrate nuclear and acoustic well logging tools.

    PubMed

    Mohammad Al Alfy, Ibrahim

    2018-01-01

    A set of three pads was constructed from primary materials (sand, gravel and cement) to calibrate the gamma-gamma density tool, and a simple equation was devised to convert its qualitative cps readings to quantitative g/cc values. The neutron-neutron porosity tool likewise measures qualitative cps porosity values, and a direct equation was derived to calculate the porosity percentage from them. The cement-bond log shows the quantity of cement surrounding the well pipes; its interpretation is complicated by several parameters, such as the drilled well diameter and the internal diameter, thickness and type of the well pipes. An equation was devised to calculate the cement percentage at standard conditions and can be modified for other conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
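
    The paper's actual equation is not reproduced in the abstract, so the sketch below assumes a log-linear calibration form commonly used for gamma-gamma density tools; the pad densities and count rates are invented.

      import numpy as np

      # Hypothetical three-pad calibration data: count rates (cps) measured
      # on pads of known density (g/cc).
      cps = np.array([5200.0, 3100.0, 1800.0])
      density = np.array([1.60, 2.20, 2.80])

      # Gamma-gamma count rates fall roughly exponentially with density, so
      # a log-linear fit rho = a + b*ln(cps) is a common choice (an
      # assumption here, not the paper's derived equation).
      b, a = np.polyfit(np.log(cps), density, 1)
      print(f"rho(cps) = {a:.3f} + {b:.3f} * ln(cps)")
      print(f"check at 3100 cps: {a + b * np.log(3100.0):.2f} g/cc")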

  6. Uncertainty in Measurement: A Review of Monte Carlo Simulation Using Microsoft Excel for the Calculation of Uncertainties Through Functional Relationships, Including Uncertainties in Empirically Derived Constants

    PubMed Central

    Farrance, Ian; Frenkel, Robert

    2014-01-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835

  7. Uncertainty in measurement: a review of Monte Carlo simulation using Microsoft Excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants.

    PubMed

    Farrance, Ian; Frenkel, Robert

    2014-02-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand.
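
    A minimal Monte Carlo propagation sketch along the lines both abstracts describe, using NumPy rather than Excel. The measurand, its functional relationship and all standard uncertainties (including that of the empirically derived 'constant' a) are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(42)
      N = 100_000  # Monte Carlo trials

      # Hypothetical measurand y = a*x/c with normally distributed inputs,
      # as would be justified by IQC-derived uncertainty estimates.
      x = rng.normal(5.00, 0.10, N)   # measured input quantity
      a = rng.normal(1.25, 0.02, N)   # empirically derived 'constant' with
                                      # its own standard uncertainty
      c = rng.normal(0.80, 0.05, N)   # second input quantity

      y = a * x / c
      print(f"y = {y.mean():.3f}, u(y) = {y.std(ddof=1):.3f}")
      print("95% interval:", np.percentile(y, [2.5, 97.5]).round(3))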

  8. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    PubMed

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.

  9. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    PubMed Central

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351

  10. Principles of parametric estimation in modeling language competition

    PubMed Central

    Zhang, Menghan; Gong, Tao

    2013-01-01

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678

  11. Principles of parametric estimation in modeling language competition.

    PubMed

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
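
    A sketch of the underlying Lotka-Volterra competition dynamics for two languages. The growth rates, carrying capacities and competitive impacts below are illustrative placeholders; in the papers these are the quantities estimated from census and survey data. SciPy is assumed for the integrator.

      import numpy as np
      from scipy.integrate import solve_ivp

      r1, r2 = 0.03, 0.02    # growth rates of the speaker populations
      K1, K2 = 1.0, 1.0      # carrying capacities (normalized)
      c12, c21 = 0.9, 1.1    # competitive impact of each language

      def rhs(t, n):
          n1, n2 = n
          return [r1 * n1 * (1 - (n1 + c12 * n2) / K1),
                  r2 * n2 * (1 - (n2 + c21 * n1) / K2)]

      sol = solve_ivp(rhs, (0.0, 500.0), [0.4, 0.3])
      print("final speaker fractions:", sol.y[:, -1].round(3))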

  12. Systematic approach to developing empirical interatomic potentials for III-N semiconductors

    NASA Astrophysics Data System (ADS)

    Ito, Tomonori; Akiyama, Toru; Nakamura, Kohji

    2016-05-01

    A systematic approach to the derivation of empirical interatomic potentials is developed for III-N semiconductors with the aid of ab initio calculations. The parameter values of empirical potential based on bond order potential are determined by reproducing the cohesive energy differences among 3-fold coordinated hexagonal, 4-fold coordinated zinc blende, wurtzite, and 6-fold coordinated rocksalt structures in BN, AlN, GaN, and InN. The bond order p is successfully introduced as a function of the coordination number Z in the form of p = a exp(-bZ^n) if Z ≤ 4 and p = (4/Z)^α if Z ≥ 4 in the empirical interatomic potential. Moreover, the energy difference between wurtzite and zinc blende structures can be successfully evaluated by considering interaction beyond the second-nearest neighbors as a function of ionicity. This approach is feasible for developing empirical interatomic potentials applicable to a system consisting of poorly coordinated atoms at surfaces and interfaces including nanostructures.
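
    The piecewise bond-order form quoted in the abstract is easy to state in code; the parameter values below are illustrative, not the fitted III-N ones, and continuity at Z = 4 is imposed by construction.

      import numpy as np

      def bond_order(Z, a, b, n, alpha):
          """p = a*exp(-b*Z**n) for Z <= 4 and p = (4/Z)**alpha for Z >= 4."""
          Z = np.asarray(Z, dtype=float)
          return np.where(Z <= 4.0, a * np.exp(-b * Z**n), (4.0 / Z)**alpha)

      b_, n_ = 0.01, 2.0
      a_ = np.exp(b_ * 4.0**n_)   # enforce p(4) = 1 from the left branch
      print(bond_order([3, 4, 6], a_, b_, n_, alpha=0.8).round(3))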

  13. Empirical Relationships from Regional Infrasound Signals

    NASA Astrophysics Data System (ADS)

    Negraru, P. T.; Golden, P.

    2011-12-01

    Two years of infrasound observations were collected at two arrays located within the so-called "Zone of Silence" or "Shadow Zone" of well-controlled explosive sources to investigate the long-term atmospheric effects on signal propagation. The first array (FNIAR) is located north of Fallon, NV, 154 km from the munitions disposal facility outside of Hawthorne, NV, while the second array (DNIAR) is located near Mercury, NV, approximately 293 km southeast of the detonation site. Based on celerity values, approximately 80% of the observed arrivals at FNIAR are considered stratospheric (celerities below 300 m/s), while 20% of them propagated in tropospheric waveguides with celerities of 330-345 m/s. Although there is considerable scatter in the celerity values, two seasonal effects were observed in both years: 1) a gradual decrease in celerity from summer to winter (July/January period) and 2) an increase in celerity values starting in April. In the winter months celerity values can be extremely variable, and we have observed signals with celerities as low as 240 m/s. In contrast, at DNIAR we observe much stronger seasonal variations. In the winter months we have observed tropospheric, stratospheric and thermospheric arrivals, while in the summer mostly tropospheric and slower thermospheric arrivals dominate. This interpretation is consistent with the known seasonal variation of the stratospheric winds and was confirmed by ray tracing with G2S models. In addition, we discuss how the observed infrasound arrivals can be used to improve ground truth estimation methods (location, origin times and yield). For instance, an empirical wind parameter derived from G2S models suggests that the differences in celerity values observed at the two arrays can be explained by changes in the wind conditions. We have also started working on improving location algorithms that take into account empirical celerity models derived from celerity/wind plots.
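
    Celerity (epicentral range divided by travel time) is the classifying quantity here. The thresholds in this sketch paraphrase the values quoted in the abstract and should not be read as a general standard; the travel time is hypothetical.

      def classify_arrival(range_km, travel_time_s):
          celerity = 1000.0 * range_km / travel_time_s  # m/s
          if celerity >= 330.0:
              label = "tropospheric (~330-345 m/s)"
          elif celerity >= 240.0:
              label = "stratospheric (mostly below 300 m/s)"
          else:
              label = "thermospheric / very slow"
          return celerity, label

      # FNIAR-like range of 154 km with an assumed 520 s travel time
      print(classify_arrival(154.0, 520.0))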

  14. Predictive vs. Empiric Assessment of Schistosomiasis: Implications for Treatment Projections in Ghana

    PubMed Central

    Kabore, Achille; Biritwum, Nana-Kwadwo; Downs, Philip W.; Soares Magalhaes, Ricardo J.; Zhang, Yaobi; Ottesen, Eric A.

    2013-01-01

    Background Mapping the distribution of schistosomiasis is essential to determine where control programs should operate, but because it is impractical to assess infection prevalence in every potentially endemic community, model-based geostatistics (MBG) is increasingly being used to predict prevalence and determine intervention strategies. Methodology/Principal Findings To assess the accuracy of MBG predictions for Schistosoma haematobium infection in Ghana, school surveys were evaluated at 79 sites to yield empiric prevalence values that could be compared with values derived from recently published MBG predictions. Based on these findings schools were categorized according to WHO guidelines so that practical implications of any differences could be determined. Using the mean predicted values alone, 21 of the 25 empirically determined ‘high-risk’ schools requiring yearly praziquantel would have been undertreated and almost 20% of the remaining schools would have been treated despite empirically-determined absence of infection – translating into 28% of the children in the 79 schools being undertreated and 12% receiving treatment in the absence of any demonstrated need. Conclusions/Significance Using the current predictive map for Ghana as a spatial decision support tool by aggregating prevalence estimates to the district level was clearly not adequate for guiding the national program, but the alternative of assessing each school in potentially endemic areas of Ghana or elsewhere is not at all feasible; modelling must be a tool complementary to empiric assessments. Thus for practical usefulness, predictive risk mapping should not be thought of as a one-time exercise but must, as in the current study, be an iterative process that incorporates empiric testing and model refining to create updated versions that meet the needs of disease control operational managers. PMID:23505584

  15. Empirical Corrections to Nutation Amplitudes and Precession Computed from a Global VLBI Solution

    NASA Astrophysics Data System (ADS)

    Schuh, H.; Ferrandiz, J. M.; Belda-Palazón, S.; Heinkelmann, R.; Karbon, M.; Nilsson, T.

    2017-12-01

    The IAU2000A nutation and IAU2006 precession models were adopted to provide accurate estimations and predictions of the Celestial Intermediate Pole (CIP). However, they are not fully accurate and VLBI (Very Long Baseline Interferometry) observations show that the CIP deviates from the position resulting from the application of the IAU2006/2000A model. Currently, those deviations or offsets of the CIP (Celestial Pole Offsets - CPO), can only be obtained by the VLBI technique. The accuracy of the order of 0.1 milliseconds of arc (mas) allows to compare the observed nutation with theoretical prediction model for a rigid Earth and constrain geophysical parameters describing the Earth's interior. In this study, we empirically evaluate the consistency, systematics and deviations of the IAU 2006/2000A precession-nutation model using several CPO time series derived from the global analysis of VLBI sessions. The final objective is the reassessment of the precession offset and rate, and the amplitudes of the principal terms of nutation, trying to empirically improve the conventional values derived from the precession/nutation theories. The statistical analysis of the residuals after re-fitting the main nutation terms demonstrates that our empirical corrections attain an error reduction by almost 15 micro arc seconds.

  16. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Computer modelled atmospheric transmittance and path radiance values were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code. The aircraft data were calibrated and utilized to generate analogous measurements. The results of the analysis indicate that there is a tendency for the LOWTRAN model to underestimate atmospheric path radiance and overestimate atmospheric transmittance.

  17. Universality of market superstatistics

    NASA Astrophysics Data System (ADS)

    Denys, Mateusz; Gubiec, Tomasz; Kutner, Ryszard; Jagielski, Maciej; Stanley, H. Eugene

    2016-10-01

    We use a key concept of the continuous-time random walk formalism, i.e., continuous and fluctuating interevent times in which mutual dependence is taken into account, to model market fluctuation data when traders experience excessive (or superthreshold) losses or excessive (or superthreshold) profits. We analytically derive a class of "superstatistics" that accurately model empirical market activity data supplied by Bogachev, Ludescher, Tsallis, and Bunde that exhibit transition thresholds. We measure the interevent times between excessive losses and excessive profits and use the mean interevent discrete (or step) time as a control variable to derive a universal description of empirical data collapse. Our dominant superstatistic is a power law corrected by the lower incomplete gamma function, which asymptotically tends toward robust power-law behavior but is initially exponential. We find that the scaling shape exponent that drives our superstatistics subordinates itself and a "superscaling" configuration emerges. Thanks to the Weibull copula function, our approach reproduces the empirically proven dependence between successive interevent times. We also use the approach to calculate a dynamic risk function and hence the dynamic VaR, which is significant in financial risk analysis. Our results indicate that there is a functional (but not literal) balance between excessive profits and excessive losses that can be described using the same body of superstatistics but different calibration values and driving parameters. We also extend our original approach to cover empirical seismic activity data (e.g., given by Corral), the interevent times of which range from minutes to years. Superpositioned superstatistics is another class of superstatistics that preserves power-law behavior at both short and long times; it describes well the collapse of the seismic activity data and captures the so-called volatility clustering phenomena.

  18. Suppression cost forecasts in advance of wildfire seasons

    Treesearch

    Jeffrey P. Prestemon; Karen Abt; Krista Gebert

    2008-01-01

    Approaches for forecasting wildfire suppression costs in advance of a wildfire season are demonstrated for two lead times: fall and spring of the current fiscal year (Oct. 1–Sept. 30). Model functional forms are derived from aggregate expressions of a least cost plus net value change model. Empirical estimates of these models are used to generate advance-of-season...

  19. "We Do Not Know What Is the Real Story Anymore": Curricular Contextualization Principles That Support Indigenous Students in Understanding Natural Selection

    ERIC Educational Resources Information Center

    Sánchez Tapia, Ingrid; Krajcik, Joseph; Reiser, Brian

    2018-01-01

    We propose a process of contextualization based on seven empirically derived contextualization principles, aiming to provide opportunities for Indigenous Mexican adolescents to learn science in a way that supports them in fulfilling their right to an education aligned with their own culture and values. The contextualization principles we…

  20. Large wood influence on stream metabolism at a reach-scale in the Assabet River, Massachusetts

    NASA Astrophysics Data System (ADS)

    David, G. C. L.; Snyder, N. P.; Rosario, G. M.

    2016-12-01

    Total stream metabolism (TSM) represents the transfer of carbon through a channel by both primary production and respiration, and thus represents the movement of energy through a watershed. Large wood (LW) creates geomorphically complex channels by diverting flows, altering shear stresses on the channel bed and banks, and promoting pool development. The increase in habitat complexity around LW is expected to increase TSM, but this change has not been directly measured. In this study, we measured changes in TSM around a LW jam in a Massachusetts river. Dissolved oxygen (DO) time series data are used to quantify gross primary production (GPP) and ecosystem respiration (ER), which sum to TSM. The two primary objectives of this study are to (1) assess changes in TSM around LW and (2) compare empirical methods of deriving TSM to the BASE model of Grace et al. (2015). We hypothesized that LW would increase TSM by providing larger pools, increasing coverage for fish and macroinvertebrates, increasing organic matter accumulation, and providing a place for primary producers to anchor and grow. The Assabet River drains a 78 km2 basin in central Massachusetts that provides public water supply to 7 towns. The change in TSM at the reach scale was assessed using two YSI 6-Series Multiparameter Water Quality sondes over a 140 m long pool-riffle open meadow section. The reach included 6 pools and one LW jam. Every two weeks from July to November 2015, the sondes were moved to different pools. The sondes collected DO, temperature, depth, pH, salinity, light intensity, and turbidity at 15-minute intervals. Velocity (V) and discharge (Q) were measured weekly around the sondes and at established cross sections. Instantaneous V and Q were calculated for each sonde by modeling flows in HEC-RAS. Overall, TSM was heavily influenced by pool size and, indirectly, by the LW jam, which was associated with the largest pool. The largest error in the TSM calculations is related to the empirically calculated reaeration flux (k), which represents oxygen inputs from the atmosphere. We used two well-established empirical equations to compare k values to the BASE model. The model agreed with empirically derived values during intermediate and high Q. Modeled GPP and ER diverged, sometimes by an order of magnitude, from the empirically derived results during the lowest flows.
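
    A simplified single-station sketch of how GPP, ER and TSM can be pulled from a DO time series, with reaeration handled by a fixed coefficient k (the term the abstract identifies as the largest error source). The synthetic 24 h record and k value are illustrative only, and this is not the BASE model itself.

      import numpy as np

      def metabolism(do, do_sat, daytime, k, dt_h):
          """do, do_sat in mg/L; daytime is a boolean mask; k in 1/h;
          dt_h is the sampling interval in hours (0.25 for 15-min data)."""
          ddo_dt = np.gradient(do, dt_h)           # observed rate of change
          nep = ddo_dt - k * (do_sat - do)         # remove atmospheric exchange
          er_rate = nep[~daytime].mean()           # night-time respiration rate
          er = er_rate * 24.0                      # daily ER (negative)
          gpp = (nep - er_rate)[daytime].sum() * dt_h  # daily GPP
          return gpp, er, gpp + abs(er)            # TSM = GPP + |ER|

      t = np.arange(0.0, 24.0, 0.25)               # synthetic day of 15-min data
      daytime = (t >= 6.0) & (t <= 18.0)
      do_sat = np.full_like(t, 9.0)
      do = 8.0 + 0.8 * np.sin(np.pi * (t - 6.0) / 12.0) * daytime
      gpp, er, tsm = metabolism(do, do_sat, daytime, k=0.1, dt_h=0.25)
      print(f"GPP={gpp:.2f}  ER={er:.2f}  TSM={tsm:.2f}  (mg O2/L/day)")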

  1. The use of interest rate swaps by nonprofit organizations: evidence from nonprofit health care providers.

    PubMed

    Stewart, Louis J; Trussel, John

    2006-01-01

    Although the use of derivatives, particularly interest rate swaps, has grown explosively over the past decade, derivative financial instrument use by nonprofits has received only limited attention in the research literature. Because little is known about the risk management activities of nonprofits, the impact of these instruments on the ability of nonprofits to raise capital may have significant public policy implications. The primary motivation of this study is to determine the types of derivatives used by nonprofits and estimate the frequency of their use among these organizations. Our study also extends contemporary finance theory by an empirical examination of the motivation for interest rate swap usage among nonprofits. Our empirical data came from 193 large nonprofit health care providers that issued debt to the public between 2000 and 2003. We used a univariate analysis and a multivariate analysis relying on logistic regression models to test alternative explanations of interest rate swaps usage by nonprofits, finding that more than 45 percent of our sample, 88 organizations, used interest rate swaps with an aggregate notional value in excess of $8.3 billion. Our empirical tests indicate the primary motive for nonprofits to use interest rate derivatives is to hedge their exposure to interest rate risk. Although these derivatives are a useful risk management tool, under conditions of falling bond market interest rates these derivatives may also expose a nonprofit swap user to the risk of a material unscheduled termination payment. Finally, we found considerable diversity in the informativeness of footnote disclosure among sample organizations that used interest rate swaps. Many nonprofits did not disclose these risks in their financial statements. In conclusion, we find financial managers in large nonprofits commonly use derivative financial instruments as risk management tools, but the use of interest rate swaps by nonprofits may expose them to other risks that are not adequately disclosed in their financial statements.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorin Zaharia; C.Z. Cheng

    In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇·(J × B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J × B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-square fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models.
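
    A toy finite-difference version of the Poisson-type solve described above, using Jacobi iteration on a unit square. The source term standing in for ∇·(J × B), the grid and the zero boundary values are all illustrative.

      import numpy as np

      n = 64
      h = 1.0 / (n - 1)
      S = np.ones((n, n))          # placeholder for div(J x B)
      P = np.zeros((n, n))         # Dirichlet boundary: P = 0 on the edges

      for _ in range(5000):        # Jacobi iteration for del^2 P = S
          P[1:-1, 1:-1] = 0.25 * (P[2:, 1:-1] + P[:-2, 1:-1]
                                  + P[1:-1, 2:] + P[1:-1, :-2]
                                  - h * h * S[1:-1, 1:-1])

      print(f"max |P| = {np.abs(P).max():.4e}")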

  3. It's time to move on from the bell curve.

    PubMed

    Robinson, Lawrence R

    2017-11-01

    The bell curve was first described in the 18th century by de Moivre and Gauss to depict the distribution of binomial events, such as coin tossing, or repeated measures of physical objects. In the 19th and 20th centuries, the bell curve was appropriated, or perhaps misappropriated, to apply to biologic and social measures across people. For many years we used it to derive reference values for our electrophysiologic studies. There is, however, no reason to believe that electrophysiologic measures should approximate a bell-curve distribution, and empiric evidence suggests they do not. The concept of using mean ± 2 standard deviations should be abandoned. Reference values are best derived by using non-parametric analyses, such as percentile values. This proposal aligns with the recommendation of the recent normative data task force of the American Association of Neuromuscular & Electrodiagnostic Medicine and follows sound statistical principles. Muscle Nerve 56: 859-860, 2017. © 2017 Wiley Periodicals, Inc.
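
    A small illustration of the recommendation: percentile-based reference limits versus mean ± 2 SD on deliberately skewed synthetic data (the latency distribution below is invented).

      import numpy as np

      rng = np.random.default_rng(0)
      latencies = rng.lognormal(mean=1.2, sigma=0.25, size=500)  # synthetic

      lo, hi = np.percentile(latencies, [2.5, 97.5])
      print(f"non-parametric limits: {lo:.2f} - {hi:.2f}")

      m, s = latencies.mean(), latencies.std(ddof=1)
      print(f"mean +/- 2 SD        : {m - 2*s:.2f} - {m + 2*s:.2f}")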

  4. Not All Stars Are the Sun: Empirical Calibration of the Mixing Length for Metal-poor Stars Using One-dimensional Stellar Evolution Models

    NASA Astrophysics Data System (ADS)

    Joyce, M.; Chaboyer, B.

    2018-03-01

    Theoretical stellar evolution models are constructed and tailored to the best known, observationally derived characteristics of metal-poor ([Fe/H] ∼ ‑2.3) stars representing a range of evolutionary phases: subgiant HD 140283, globular cluster M92, and four single, main sequence stars with well-determined parallaxes: HIP 46120, HIP 54639, HIP 106924, and WOLF 1137. It is found that the use of a solar-calibrated value of the mixing length parameter α_MLT in models of these objects is ineffective at reproducing their observed properties. Empirically calibrated values of α_MLT are presented for each object, accounting for uncertainties in the input physics employed in the models. It is advocated that the implementation of an adaptive mixing length is necessary in order for stellar evolution models to maintain fidelity in the era of high-precision observations.

  5. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follows the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation proves that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
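
    A compact Monte Carlo sketch in the spirit of the abstract: true scale values generate Thurstone Case V choice probabilities, binomial sampling injects the proportion-of-choice errors, and the scale is re-derived from each noisy matrix. The true values, sample size and replication count are arbitrary; SciPy supplies the normal CDF and its inverse.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      true = np.array([0.0, 0.5, 1.0, 1.5])   # hypothetical true scale values
      n_obs = 30                               # judgments per stimulus pair

      def scale_once():
          k = len(true)
          p = np.zeros((k, k))
          for i in range(k):
              for j in range(k):
                  if i != j:
                      prob = norm.cdf(true[i] - true[j])      # Case V model
                      p[i, j] = rng.binomial(n_obs, prob) / n_obs
          p = np.clip(p, 0.01, 0.99)           # avoid infinite z-scores
          z = norm.ppf(p)
          np.fill_diagonal(z, 0.0)
          return z.mean(axis=1)                # row means give the scale

      reps = np.array([scale_once() for _ in range(2000)])
      print("std of scaled values:", reps.std(axis=0).round(3))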

  6. Semi-empirical proton binding constants for natural organic matter

    NASA Astrophysics Data System (ADS)

    Matynia, Anthony; Lenoir, Thomas; Causse, Benjamin; Spadini, Lorenzo; Jacquet, Thierry; Manceau, Alain

    2010-03-01

    Average proton binding constants (K_H,i) for structure models of humic (HA) and fulvic (FA) acids were estimated semi-empirically by breaking down the macromolecules into reactive structural units (RSUs), and calculating K_H,i values of the RSUs using linear free energy relationships (LFER) of Hammett. Predicted log K_H,COOH and log K_H,Ph-OH are 3.73 ± 0.13 and 9.83 ± 0.23 for HA, and 3.80 ± 0.20 and 9.87 ± 0.31 for FA. The predicted constants for phenolic-type sites (Ph-OH) are generally higher than those derived from potentiometric titrations, but the difference may not be significant in view of the considerable uncertainty of the acidity constants determined from acid-base measurements at high pH. The predicted constants for carboxylic-type sites agree well with titration data analyzed with Model VI (4.10 ± 0.16 for HA, 3.20 ± 0.13 for FA; Tipping, 1998), the Impermeable Sphere model (3.50-4.50 for HA; Avena et al., 1999), and the Stockholm Humic Model (4.10 ± 0.20 for HA, 3.50 ± 0.40 for FA; Gustafsson, 2001), but differ by about one log unit from those obtained by Milne et al. (2001) with the NICA-Donnan model (3.09 ± 0.51 for HA, 2.65 ± 0.43 for FA), and used to derive recommended generic values. To clarify this ambiguity, 10 high-quality titration data sets from Milne et al. (2001) were re-analyzed with the new predicted equilibrium constants. The data are described equally well with the previous and new sets of values (R² ⩾ 0.98), not necessarily because the NICA-Donnan model is overparametrized, but because titration lacks the sensitivity needed to quantify the full binding properties of humic substances. Correlations between NICA-Donnan parameters are discussed, but general progress is impeded by the unknown number of independent parameters that can be varied during regression of a model fit to titration data. The high consistency between predicted and experimental K_H,COOH values, excluding those of Milne et al. (2001), lends confidence to the proposed semi-empirical structural approach and its usefulness for assessing the plausibility of proton stability constants derived from simulations of titration data.
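
    The Hammett LFER step is easy to illustrate. The sigma constants below are standard para values, while the parent pKa and rho chosen for a generic carboxylic RSU are assumptions for illustration, not the paper's fitted quantities.

      # Hammett LFER: log K_H = pKa0 - rho * sum(sigma) over substituents
      SIGMA_PARA = {"H": 0.00, "OH": -0.37, "OCH3": -0.27, "COOH": 0.45}

      def log_KH(pka0, rho, substituents):
          return pka0 - rho * sum(SIGMA_PARA[s] for s in substituents)

      # e.g. a benzoic-acid-like unit carrying one OH and one COOH group
      print(f"predicted log K_H = {log_KH(4.20, 1.0, ['OH', 'COOH']):.2f}")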

  7. EFFECTIVE USE OF SEDIMENT QUALITY GUIDELINES: WHICH GUIDELINE IS RIGHT FOR ME?

    EPA Science Inventory

    A bewildering array of sediment quality guidelines has been developed, but fortunately they mostly fall into two families: empirically derived and theoretically derived. The empirically derived guidelines use large databases of concurrent sediment chemistry and biological effects...

  8. A comparison of daily water use estimates derived from constant-heat sap-flow probe values and gravimetric measurements in pot-grown saplings.

    Treesearch

    K.A. McCulloh; K. Winter; F.C. Meinzer; M. Garcia; J. Aranda; Lachenbruch B.

    2007-01-01

    The use of Granier-style heat dissipation sensors to measure sap flow is common in plant physiology, ecology, and hydrology. There has been concern that any change to the original Granier design invalidates the empirical relationship between sap flux density and the temperature difference between the probes. We compared daily water use estimates from gravimetric...

  9. Cluster subgroups based on overall pressure pain sensitivity and psychosocial factors in chronic musculoskeletal pain: Differences in clinical outcomes.

    PubMed

    Almeida, Suzana C; George, Steven Z; Leite, Raquel D V; Oliveira, Anamaria S; Chaves, Thais C

    2018-05-17

    We aimed to empirically derive psychosocial and pain sensitivity subgroups using cluster analysis within a sample of individuals with chronic musculoskeletal pain (CMP) and to investigate derived subgroups for differences in pain and disability outcomes. Eighty female participants with CMP answered psychosocial and disability scales and were assessed for pressure pain sensitivity. A cluster analysis was used to derive subgroups, and analysis of variance (ANOVA) was used to investigate differences between subgroups. Psychosocial factors (kinesiophobia, pain catastrophizing, anxiety, and depression) and overall pressure pain threshold (PPT) were entered into the cluster analysis. Three subgroups were empirically derived: cluster 1 (high pain sensitivity and high psychosocial distress; n = 12) characterized by low overall PPT and high psychosocial scores; cluster 2 (high pain sensitivity and intermediate psychosocial distress; n = 39) characterized by low overall PPT and intermediate psychosocial scores; and cluster 3 (low pain sensitivity and low psychosocial distress; n = 29) characterized by high overall PPT and low psychosocial scores compared to the other subgroups. Cluster 1 showed higher values for mean pain intensity (F(2,77) = 10.58, p < 0.001) compared with cluster 3, and cluster 1 showed higher values for disability (F(2,77) = 3.81, p = 0.03) compared with both clusters 2 and 3. Only cluster 1 was distinct from cluster 3 according to both pain and disability outcomes. Pain catastrophizing, depression, and anxiety were the psychosocial variables that best differentiated the subgroups. Overall, these results call attention to the importance of considering pain sensitivity and psychosocial variables to obtain a more comprehensive characterization of CMP patients' subtypes.
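
    A sketch of the subgrouping approach: standardize the five cluster variables and partition the sample. KMeans with k = 3 stands in here for the paper's clustering procedure, and the data are random placeholders, not the study's measurements; scikit-learn is assumed.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(7)
      # columns: kinesiophobia, catastrophizing, anxiety, depression, overall PPT
      X = rng.normal(size=(80, 5))

      Xz = StandardScaler().fit_transform(X)
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xz)
      print("subgroup sizes:", np.bincount(labels))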

  10. Empirical determination of low J values of 13CH4 transitions from jet cooled and 80 K cell spectra in the icosad region (7170-7367 cm-1)

    NASA Astrophysics Data System (ADS)

    Votava, O.; Mašát, M.; Pracna, P.; Mondelain, D.; Kassi, S.; Liu, A. W.; Hu, S. M.; Campargue, A.

    2014-12-01

    The absorption spectrum of 13CH4 was recorded at two low temperatures in the icosad region near 1.38 μm, using direct absorption tunable diode lasers. Spectra were obtained using a cryogenic cell cooled at liquid nitrogen temperature (80 K) and a supersonic jet providing a 32 K rotational temperature in the 7173-7367 cm-1 and 7200-7354 cm-1 spectral intervals, respectively. Two lists of 4498 and 339 lines, including absolute line intensities, were constructed from the 80 K and jet spectra, respectively. All the transitions observed in jet conditions were observed at 80 K. From the temperature variation of their line intensities, the corresponding lower state energy values were determined. The 339 derived empirical values of the J rotational quantum number are found close to integer values and are all smaller than 4, as a consequence of the efficient rotational cooling. Six R(0) transitions have been identified providing key information on the origins of the vibrational bands which contribute to the very congested and not yet assigned 13CH4 spectrum in the considered region of the icosad.
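
    The lower-state energy determination rests on the Boltzmann ratio of a line's intensity at two temperatures. A sketch of that inversion follows, with invented intensities and assumed partition-sum values; the real analysis uses measured intensities and the 13CH4 partition function.

      import numpy as np

      C2 = 1.4388  # second radiation constant, cm*K

      def lower_state_energy(I1, I2, T1, T2, Q1, Q2):
          """Solve I1/I2 = (Q2/Q1) * exp(-C2*E'' * (1/T1 - 1/T2))
          for the lower-state energy E'' in cm-1."""
          ratio = (I1 / I2) * (Q1 / Q2)
          return -np.log(ratio) / (C2 * (1.0 / T1 - 1.0 / T2))

      # hypothetical line intensities at 80 K and 32 K (jet)
      E_low = lower_state_energy(I1=3.0e-24, I2=1.1e-24, T1=80.0, T2=32.0,
                                 Q1=120.0, Q2=30.0)
      print(f"E'' = {E_low:.1f} cm-1")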

  11. Substituent and ring effects on enthalpies of formation: 2-methyl- and 2-ethylbenzimidazoles versus benzene- and imidazole-derivatives

    NASA Astrophysics Data System (ADS)

    Jiménez, Pilar; Roux, María Victoria; Dávalos, Juan Z.; Temprado, Manuel; Ribeiro da Silva, Manuel A. V.; Ribeiro da Silva, Maria Das Dores M. C.; Amaral, Luísa M. P. F.; Cabildo, Pilar; Claramunt, Rosa M.; Mó, Otilia; Yáñez, Manuel; Elguero, José

    The enthalpies of combustion, heat capacities, enthalpies of sublimation and enthalpies of formation of 2-methylbenzimidazole (2MeBIM) and 2-ethylbenzimidazole (2EtBIM) are reported and the results compared with those of benzimidazole itself (BIM). Theoretical estimates of the enthalpies of formation were obtained through the use of atom equivalent schemes. The necessary energies were obtained in single-point calculations at the B3LYP/6-311+G(d,p) level on B3LYP/6-31G* optimized geometries. The comparison of experimental and calculated values for benzenes, imidazoles and benzimidazoles bearing H (unsubstituted), methyl and ethyl groups shows remarkable homogeneity. Strict transferability of energetic group contributions does not hold, but either by applying it as an approximation or by adding an empirical interaction term, it is possible to generate an enormous collection of reasonably accurate data for different substituted heterocycles (pyrazole derivatives, pyridine derivatives, etc.) from the large number of values available for substituted benzenes and those of the parent heterocycles (pyrazole, pyridine).

  12. An empirical formula to calculate the full energy peak efficiency of scintillation detectors.

    PubMed

    Badawi, Mohamed S; Abd-Elzaher, Mohamed; Thabet, Abouzeid A; El-khatib, Ahmed M

    2013-04-01

    This work provides an empirical formula to calculate the FEPE of different detectors using the effective solid angle ratio derived from experimental measurements. The full energy peak efficiency (FEPE) curves of the (2″ × 2″) NaI(Tl) detector were determined at seven axial distances from the detector over a wide energy range from 59.53 to 1408 keV using standard point sources. The distinction was based on the effects of the source energy and the source-to-detector distance. Good agreement was found between the measured and calculated efficiency values for source-to-detector distances of 20, 25, 30, 35, 40, 45 and 50 cm. Copyright © 2012 Elsevier Ltd. All rights reserved.
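
    The abstract does not quote the formula itself, so the sketch below uses the common empirical convention of fitting ln(FEPE) as a polynomial in ln(E); all calibration points are invented stand-ins.

      import numpy as np

      E = np.array([59.53, 121.78, 356.0, 661.7, 1173.2, 1408.0])    # keV
      eff = np.array([0.012, 0.015, 0.0095, 0.0060, 0.0038, 0.0033]) # FEPE

      coeffs = np.polyfit(np.log(E), np.log(eff), deg=3)
      fepe = lambda energy: np.exp(np.polyval(coeffs, np.log(energy)))
      print(f"interpolated FEPE at 511 keV: {fepe(511.0):.4f}")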

  13. Selecting Soldiers and Civilians into the U.S. Army Officer Candidate School : Developing Empirical Selection Composites

    DTIC Science & Technology

    2014-07-01

    (a) a biographical instrument measuring personality; (b) a Work Values instrument representing work preferences investigated in prior officer and... items used in SelectOCS Phase 2 (see Table 2.5). TAPAS uses multidimensional pairwise preference (MDPP) personality items scored using item response... presented respondents with a list of 30 traits and 30 skills (derived from the leadership and personality literature) and instructed them to rate the...

  14. Chlorophyll-a retrieval in the Philippine waters

    NASA Astrophysics Data System (ADS)

    Perez, G. J. P.; Leonardo, E. M.; Felix, M. J.

    2017-12-01

    Satellite-based monitoring of chlorophyll-a (Chl-a) concentration has been widely used for estimating plankton biomass, detecting harmful algal blooms, predicting pelagic fish abundance, and assessing water quality. Chl-a concentrations at 1 km spatial resolution can be retrieved from MODIS onboard the Aqua and Terra satellites. However, at this resolution MODIS has scarce Chl-a retrieval in coastal and inland waters, which are relevant for archipelagic countries such as the Philippines. These gaps in Chl-a retrieval can be filled by sensors with higher spatial resolution, such as OLI on Landsat 8. In this study, Chl-a concentrations derived from MODIS/Aqua and OLI/Landsat 8 imagery were assessed across the open, coastal and inland waters of the Philippines. Validation activities were conducted at eight different sites around the Philippines from October 2016 to April 2017. Water samples filtered in the field were processed in the laboratory for Chl-a extraction. In situ remote sensing reflectance was derived from radiometric measurements, and ancillary information, such as bathymetry and turbidity, was also measured. Correlation between in situ and satellite-derived Chl-a concentration using the blue-green ratio yielded relatively high R² values of 0.51 to 0.90, despite an observed overestimation for both MODIS- and OLI-derived values, especially in turbid and coastal waters. The overestimation of Chl-a may be attributed to inaccuracies in i) remote sensing reflectance (Rrs) retrieval and/or ii) the empirical model used in calculating Chl-a concentration. However, a good 1:1 correspondence between the satellite-derived and in situ maximum Rrs band ratios was established. This implies that the overestimation is largely due to inaccuracies from the default coefficients used in the empirical model. New coefficients were then derived from the correlation analysis of in situ-measured Chl-a concentration and the maximum Rrs band ratio. This results in a significant improvement in the calculated RMSE of satellite-derived Chl-a values. Meanwhile, the blue-green band ratio was observed to have low Chl-a predictive capability in turbid waters. A more accurate estimation was found using NIR and red band ratios for turbid waters with covarying Chl-a concentration and low sediment load.
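
    An OCx-style blue-green band-ratio sketch: log10(Chl-a) as a polynomial in the log of the maximum blue-to-green Rrs ratio. The coefficients stand in for the locally refit ones the abstract describes and are not the published NASA values.

      import numpy as np

      def chl_from_rrs(rrs_blue_max, rrs_green, a):
          """a = [a0, a1, a2, ...] in ascending order of powers."""
          x = np.log10(rrs_blue_max / rrs_green)
          return 10.0 ** np.polyval(a[::-1], x)

      a_local = [0.30, -2.5, 1.5, -0.8]   # hypothetical refit coefficients
      print(f"Chl-a = {chl_from_rrs(0.0045, 0.0030, a_local):.2f} mg/m3")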

  15. Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.

    NASA Astrophysics Data System (ADS)

    Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.

    2006-01-01

    This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems the exponent of the gravity-darkening (GDE) for the Roche-lobe-filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation, which can influence the parameter estimation to some degree. The results of our analysis are: 1) For four of the systems, namely TX UMa, β Per, AW Cam and TW Cas, there is a very good agreement between the empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system, the estimated value of the gravity-darkening exponent is greater, and for UX Her, TW And and XZ Pup smaller, than the corresponding theoretical predictions; nevertheless, for all these systems the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis showed generally that, once the previously estimated mass ratios of the components within some of the analysed systems are corrected, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered a consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of the GDE given in Paper I and in the present study indicate that in light-curve analysis one can apply the recent theoretical predictions of the GDE with high confidence for stars with both convective and radiative envelopes.

  16. Soil-plant transfer models for metals to improve soil screening value guidelines valid for São Paulo, Brazil.

    PubMed

    Dos Santos-Araujo, Sabrina N; Swartjes, Frank A; Versluijs, Kees W; Moreno, Fabio Netto; Alleoni, Luís R F

    2017-11-07

    In Brazil, there is a lack of combined soil-plant data that could explain the influence of specific climate, soil conditions, and crop management on heavy metal uptake and accumulation by plants. As a consequence, soil-plant relationships to be used in risk assessments or for derivation of soil screening values are not available. Our objective in this study was to develop empirical soil-plant models for Cd, Cu, Pb, Ni, and Zn in order to derive soil screening values appropriate for humid tropical regions such as the state of São Paulo (SP), Brazil. Soil and plant samples from 25 vegetable species in the production areas of SP were collected. The concentrations of metals found in these soil samples were relatively low; therefore, data from temperate regions were included in our study. The soil-plant relations derived performed well under SP conditions for 8 out of 10 combinations of metal and vegetable species. The bioconcentration factor (BCF) values for Cd, Cu, Ni, Pb, and Zn in lettuce and for Cd, Cu, Pb, and Zn in carrot were determined under three exposure scenarios at pH 5 and 6. The application of the soil-plant models and the BCFs proposed in this study can be an important tool to derive national soil quality criteria. However, this methodological approach includes data assessed under different climatic conditions and soil types and needs to be considered carefully.
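
    A sketch of the kind of empirical soil-plant transfer model and BCF evaluation described above, using a common log-linear form with pH as a covariate; all coefficients and concentrations are invented.

      import numpy as np

      a, b, c = -0.5, 0.7, -0.12   # hypothetical regression coefficients

      def c_plant(c_soil, ph):
          """log10(C_plant) = a + b*log10(C_soil) + c*pH (assumed form)."""
          return 10.0 ** (a + b * np.log10(c_soil) + c * ph)

      for ph in (5.0, 6.0):
          soil = 1.0                      # mg/kg Cd in soil, hypothetical
          plant = c_plant(soil, ph)
          print(f"pH {ph}: C_plant = {plant:.3f} mg/kg, BCF = {plant / soil:.3f}")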

  17. A Universal Threshold for the Assessment of Load and Output Residuals of Strain-Gage Balance Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new universal residual threshold for the detection of load and gage output residual outliers of wind tunnel strain-gage balance data was developed. The threshold works with both the Iterative and Non-Iterative Methods that are used in the aerospace testing community to analyze and process balance data. It also supports all known load and gage output formats that are traditionally used to describe balance data. The threshold's definition is based on an empirical electrical constant. First, the constant is used to construct a threshold for the assessment of gage output residuals. Then, the related threshold for the assessment of load residuals is obtained by multiplying the empirical electrical constant with the sum of the absolute values of all first partial derivatives of a given load component. The empirical constant equals 2.5 microV/V for the assessment of balance calibration or check load data residuals. A value of 0.5 microV/V is recommended for the evaluation of repeat point residuals because, by design, the calculation of these residuals removes errors that are associated with the regression analysis of the data itself. Data from a calibration of a six-component force balance is used to illustrate the application of the new threshold definitions to real-world balance calibration data.
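
    The two thresholds are simple to compute once the sensitivities are known. In this sketch the output threshold is the empirical constant itself and the load threshold follows the rule quoted above; the partial derivatives are invented placeholder values.

      import numpy as np

      OUTPUT_THRESHOLD = 2.5   # microV/V, for calibration/check load data

      # assumed sensitivities dLoad/dOutput of one load component, lbs per microV/V
      dload_doutput = np.array([12.4, 0.8, 3.1])

      load_threshold = OUTPUT_THRESHOLD * np.abs(dload_doutput).sum()
      print(f"load residual threshold: {load_threshold:.1f} lbs")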

  18. Lifetime measurements and oscillator strengths in singly ionized scandium and the solar abundance of scandium

    NASA Astrophysics Data System (ADS)

    Pehlivan Rhodin, A.; Belmonte, M. T.; Engström, L.; Lundberg, H.; Nilsson, H.; Hartman, H.; Pickering, J. C.; Clear, C.; Quinet, P.; Fivet, V.; Palmeri, P.

    2017-12-01

    The lifetimes of 17 even-parity levels (3d5s, 3d4d, 3d6s and 4p2) in the region 57 743-77 837 cm-1 of singly ionized scandium (Sc II) were measured by two-step time-resolved laser-induced fluorescence spectroscopy. Oscillator strengths of 57 lines from these highly excited upper levels were derived using a hollow cathode discharge lamp and a Fourier transform spectrometer. In addition, Hartree-Fock calculations where both the main relativistic and core-polarization effects were taken into account were carried out for both low- and high-excitation levels. There is a good agreement for most of the lines between our calculated branching fractions and the measurements of Lawler & Dakin in the region 9000-45 000 cm-1 for low excitation levels and with our measurements for high excitation levels in the region 23 500-63 100 cm-1. This, in turn, allowed us to combine the calculated branching fractions with the available experimental lifetimes to determine semi-empirical oscillator strengths for a set of 380 E1 transitions in Sc II. These oscillator strengths include the weak lines that were used previously to derive the solar abundance of scandium. The solar abundance of scandium is now estimated to be log ε⊙ = 3.04 ± 0.13 using these semi-empirical oscillator strengths to shift the values determined by Scott et al. The new estimated abundance value is in agreement with the meteoritic value (log ε_met = 3.05 ± 0.02) of Lodders, Palme & Gail.
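
    For the weak lines involved, shifting an abundance for a revised oscillator strength is a one-line correction, since the equivalent width scales with gf times abundance. The numbers below are placeholders, not values from the paper.

      # weak-line abundance shift: log eps_new = log eps_old + (log gf_old - log gf_new)
      log_eps_old = 3.10                    # hypothetical line-by-line abundance
      log_gf_old, log_gf_new = -1.30, -1.24
      log_eps_new = log_eps_old + (log_gf_old - log_gf_new)
      print(f"shifted abundance: {log_eps_new:.2f}")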

  19. Teaching and the life history of cultural transmission in Fijian villages.

    PubMed

    Kline, Michelle A; Boyd, Robert; Henrich, Joseph

    2013-12-01

    Much existing literature in anthropology suggests that teaching is rare in non-Western societies, and that cultural transmission is mostly vertical (parent-to-offspring). However, applications of evolutionary theory to humans predict both teaching and non-vertical transmission of culturally learned skills, behaviors, and knowledge should be common cross-culturally. Here, we review this body of theory to derive predictions about when teaching and non-vertical transmission should be adaptive, and thus more likely to be observed empirically. Using three interviews conducted with rural Fijian populations, we find that parents are more likely to teach than are other kin types, high-skill and highly valued domains are more likely to be taught, and oblique transmission is associated with high-skill domains, which are learned later in life. Finally, we conclude that the apparent conflict between theory and empirical evidence is due to a mismatch of theoretical hypotheses and empirical claims across disciplines, and we reconcile theory with the existing literature in light of our results.

  20. Near transferable phenomenological n-body potentials for noble metals

    NASA Astrophysics Data System (ADS)

    Pontikis, Vassilis; Baldinozzi, Gianguido; Luneville, Laurence; Simeone, David

    2017-09-01

    We present a semi-empirical model of cohesion in noble metals with suitable parameters reproducing a selected set of experimental properties of perfect and defective lattices in noble metals. It consists of two short-range, n-body terms accounting respectively for attractive and repulsive interactions, the former deriving from the second moment approximation of the tight-binding scheme and the latter from the gas approximation of the kinetic energy of electrons. The stability of the face centred cubic versus the hexagonal compact stacking is obtained via a long-range, pairwise function of customary use with ionic pseudo-potentials. Lattice dynamics, molecular statics, molecular dynamics and nudged elastic band calculations show that, unlike previous potentials, this cohesion model reproduces and predicts quite accurately thermodynamic properties in noble metals. In particular, computed surface energies, largely underestimated by existing empirical cohesion models, compare favourably with measured values, whereas predicted unstable stacking-fault energy profiles fit almost perfectly ab initio evaluations from the literature. Altogether, the results suggest that this semi-empirical model is nearly transferable.

  1. Near transferable phenomenological n-body potentials for noble metals.

    PubMed

    Pontikis, Vassilis; Baldinozzi, Gianguido; Luneville, Laurence; Simeone, David

    2017-09-06

    We present a semi-empirical model of cohesion in noble metals with suitable parameters reproducing a selected set of experimental properties of perfect and defective lattices in noble metals. It consists of two short-range, n-body terms accounting respectively for attractive and repulsive interactions, the former deriving from the second moment approximation of the tight-binding scheme and the latter from the gas approximation of the kinetic energy of electrons. The stability of the face centred cubic versus the hexagonal compact stacking is obtained via a long-range, pairwise function of customary use with ionic pseudo-potentials. Lattice dynamics, molecular statics, molecular dynamics and nudged elastic band calculations show that, unlike previous potentials, this cohesion model reproduces and predicts quite accurately thermodynamic properties in noble metals. In particular, computed surface energies, largely underestimated by existing empirical cohesion models, compare favourably with measured values, whereas predicted unstable stacking-fault energy profiles fit almost perfectly ab initio evaluations from the literature. Altogether, the results suggest that this semi-empirical model is nearly transferable.
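
    A sketch of the two short-range ingredients named in both records, in the usual second-moment (Gupta-like) form: a pairwise repulsive sum plus a square-root band term. The parameters are generic placeholders, not the fitted noble-metal values, and the long-range pairwise term is omitted.

      import numpy as np

      A, xi, p, q, r0 = 0.10, 1.30, 10.0, 2.5, 2.88   # eV, eV, -, -, Angstrom

      def site_energy(distances):
          d = np.asarray(distances) / r0 - 1.0
          repulsive = A * np.sum(np.exp(-p * d))              # pairwise repulsion
          band = -xi * np.sqrt(np.sum(np.exp(-2.0 * q * d)))  # second-moment term
          return repulsive + band

      # twelve fcc nearest neighbours, slightly compressed
      print(f"E_site = {site_energy([2.85] * 12):.3f} eV")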

  2. A Sociocognitive Perspective of Women's Participation in Physics: Improving Accessibility throughout the Pipeline

    NASA Astrophysics Data System (ADS)

    Kelly, Angela

    2017-01-01

    Sociopsychological theories and empirical research provide a framework for exploring causal pathways and targeted interventions to increase the representation of women in post-secondary physics. Women earned only 19.7 percent of physics undergraduate degrees in 2012 (APS, 2015). This disparity has been attributed to a variety of factors, including chilly classroom climates, gender-based stereotypes, persistent self-doubt, and a lack of role models in physics departments. The theoretical framework for this research synthesis is based upon several psychological theories of sociocognitive behavior and is derived from three general constructs: 1) self-efficacy and self-concept; 2) expectancy value and planned behavior; and 3) motivation and self-determination. Recent studies have suggested that the gender discrepancy in physics participation may be alleviated by applying interventions derived from social cognitive research. These interventions include social and familial support, welcoming and collaborative classroom environments, critical feedback, and identification with a malleable view of intelligence. This research provides empirically supported mechanisms for university stakeholders to implement reforms that will increase women's participation in physics.

  3. Normal-pressure Tests of Circular Plates with Clamped Edges

    NASA Technical Reports Server (NTRS)

    Mcpherson, Albert E; Ramberg, Walter; Levy, Samuel

    1942-01-01

    A fixture is described for making normal-pressure tests of flat plates 5 inches in diameter in which particular care was taken to obtain rigid clamping at the edges. Results are given for 19 plates, ranging in thickness from 0.015 to 0.072 inch. The center deflections and the extreme-fiber stresses at low pressures were found to agree with theoretical values; the center deflections at high pressures were 4 to 12 percent greater than the theoretical values. Empirical curves are derived of the pressure for the beginning of permanent set as a function of the dimensions of the plate and the tensile properties of the material.

  4. Normal-Pressure Tests of Circular Plates with Clamped Edges

    NASA Technical Reports Server (NTRS)

    Mcpherson, Albert E; Ramberg, Walter; Levy, Samuel

    1942-01-01

    A fixture is described for making normal-pressure tests of flat plates 5 inches in diameter in which particular care was taken to obtain rigid clamping at the edges. Results are given for 19 plates, ranging in thickness from 0.015 to 0.072 inch. The center deflections and the extreme-fiber stresses at low pressures were found to agree with theoretical values; the center deflections at high pressures were 4 to 12 percent greater than the theoretical values. Empirical curves are derived of the pressure for the beginning of permanent set as a function of the dimensions of the plate and the tensile properties of the material.

  5. An empirical model of electron and ion fluxes derived from observations at geosynchronous orbit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denton, M. H.; Thomsen, M. F.; Jordanova, V. K.

    Knowledge of the plasma fluxes at geosynchronous orbit is important to both scientific and operational investigations. We present a new empirical model of the ion flux and the electron flux at geosynchronous orbit (GEO) in the energy range ~1 eV to ~40 keV. The model is based on a total of 82 satellite-years of observations from the Magnetospheric Plasma Analyzer instruments on Los Alamos National Laboratory satellites at GEO. These data are assigned to a fixed grid of 24 local-times and 40 energies, at all possible values of Kp. Bi-linear interpolation is used between grid points to provide the ion flux and the electron flux values at any energy and local-time, and for given values of geomagnetic activity (proxied by the 3-hour Kp index), and also for given values of solar activity (proxied by the daily F10.7 index). Initial comparison of the electron flux from the model with data from a Compact Environmental Anomaly Sensor II (CEASE-II), also located at geosynchronous orbit, indicates a good match during both quiet and disturbed periods. The model is available for distribution as a FORTRAN code that can be modified to suit user-requirements.

  6. An empirical model of electron and ion fluxes derived from observations at geosynchronous orbit

    DOE PAGES

    Denton, M. H.; Thomsen, M. F.; Jordanova, V. K.; ...

    2015-04-01

    Knowledge of the plasma fluxes at geosynchronous orbit is important to both scientific and operational investigations. We present a new empirical model of the ion flux and the electron flux at geosynchronous orbit (GEO) in the energy range ~1 eV to ~40 keV. The model is based on a total of 82 satellite-years of observations from the Magnetospheric Plasma Analyzer instruments on Los Alamos National Laboratory satellites at GEO. These data are assigned to a fixed grid of 24 local-times and 40 energies, at all possible values of Kp. Bi-linear interpolation is used between grid points to provide the ion flux and the electron flux values at any energy and local-time, and for given values of geomagnetic activity (proxied by the 3-hour Kp index), and also for given values of solar activity (proxied by the daily F10.7 index). Initial comparison of the electron flux from the model with data from a Compact Environmental Anomaly Sensor II (CEASE-II), also located at geosynchronous orbit, indicates a good match during both quiet and disturbed periods. The model is available for distribution as a FORTRAN code that can be modified to suit user-requirements.
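    The lookup at the heart of such a gridded model is a bilinear interpolation in local-time and energy. A minimal sketch of that step (the grid axes, the log-energy interpolation, and the wrap-around handling are illustrative assumptions; the published FORTRAN tables also carry the Kp and F10.7 dependence, omitted here):

        import numpy as np

        def bilinear_flux(lt_hours, energy_kev, lt_grid, e_grid, flux_table):
            """Bilinear interpolation of a flux table on a fixed local-time x energy
            grid; lt_grid is ascending in hours, e_grid ascending in keV, and
            flux_table has shape (len(lt_grid), len(e_grid))."""
            n = len(lt_grid)
            lt = lt_hours % 24.0
            i1 = np.searchsorted(lt_grid, lt) % n       # local time wraps at 24 h
            i0 = (i1 - 1) % n
            dlt = (lt_grid[i1] - lt_grid[i0]) % 24.0 or 24.0
            t = ((lt - lt_grid[i0]) % 24.0) / dlt
            j1 = int(np.clip(np.searchsorted(e_grid, energy_kev), 1, len(e_grid) - 1))
            j0 = j1 - 1                                 # interpolate energy in log space
            u = np.log(energy_kev / e_grid[j0]) / np.log(e_grid[j1] / e_grid[j0])
            return (flux_table[i0, j0] * (1 - t) * (1 - u) + flux_table[i1, j0] * t * (1 - u)
                    + flux_table[i0, j1] * (1 - t) * u + flux_table[i1, j1] * t * u)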

  7. Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50 ).

    PubMed

    Bag, Arijit; Ghorai, Pradip Kr

    2016-05-01

    To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on different Quantitative Structure Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this is computationally very expensive as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time we report an ab initio method to compute IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. By using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ) and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute IC50 for 15 HIV-1 capsid inhibitors and compare the results with experimental values and other available QSAR-based empirical results. Values calculated using our method are in very good agreement with the experimental values compared to the values calculated using other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
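    For reference, the Cheng-Prusoff equation mentioned above relates IC50 to the inhibition constant of a competitive inhibitor; in its standard textbook form (not specific to this paper),

        \mathrm{IC}_{50} = K_i \left(1 + \frac{[S]}{K_m}\right),

    which makes clear why that route is expensive: K_i must come from an explicit binding free energy calculation for the inhibitor-enzyme complex.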

  8. The nu sub 9 fundamental of ethane - Integrated intensity and band absorption measurements with application to the atmospheres of the major planets

    NASA Technical Reports Server (NTRS)

    Varanasi, P.; Cess, R. D.; Bangaru, B. R. P.

    1974-01-01

    Measurements of the absolute intensity and integrated band absorption have been performed for the nu sub 9 fundamental band of ethane. The intensity is found to be about 34 per sq cm per atm at STP, and this is significantly higher than previous estimates. It is shown that a Gaussian profile provides an empirical representation of the apparent spectral absorption coefficient. Employing this empirical profile, a simple expression is derived for the integrated band absorption, which is in excellent agreement with experimental values. The band model is then employed to investigate the possible role of ethane as a source of thermal infrared opacity within the atmospheres of Jupiter and Saturn, and to interpret qualitatively observed brightness temperatures for Saturn.

  9. Identifying Early Childhood Personality Dimensions Using the California Child Q-Set and Prospective Associations With Behavioral and Psychosocial Development.

    PubMed

    Wilson, Sylia; Schalet, Benjamin D; Hicks, Brian M; Zucker, Robert A

    2013-08-01

    The present study used an empirical, "bottom-up" approach to delineate the structure of the California Child Q-Set (CCQ), a comprehensive set of personality descriptors, in a sample of 373 preschool-aged children. This approach yielded two broad trait dimensions, Adaptive Socialization (emotional stability, compliance, intelligence) and Anxious Inhibition (emotional/behavioral introversion). Results demonstrate the value of using empirical derivation to investigate the structure of personality in young children, speak to the importance of early-evident personality traits for adaptive development, and are consistent with a growing body of evidence indicating that personality structure in young children is similar, but not identical to, that in adults, suggesting a model of broad personality dimensions in childhood that evolve into narrower traits in adulthood.

  10. The Theory of Value-Based Payment Incentives and Their Application to Health Care.

    PubMed

    Conrad, Douglas A

    2015-12-01

    To present the implications of agency theory in microeconomics, augmented by behavioral economics, for different methods of value-based payment in health care; and to derive a set of future research questions and policy recommendations based on that conceptual analysis. Original literature of agency theory, and secondarily behavioral economics, combined with applied research and empirical evidence on the application of those principles to value-based payment. Conceptual analysis and targeted review of theoretical research and empirical literature relevant to value-based payment in health care. Agency theory and secondarily behavioral economics have powerful implications for design of value-based payment in health care. To achieve improved value (better patient experience, clinical quality, health outcomes, and lower costs of care), high-powered incentives should directly target improved care processes, enhance patient experience, and create achievable benchmarks for improved outcomes. Differing forms of value-based payment (e.g., shared savings and risk, reference pricing, capitation, and bundled payment), coupled with adjunct incentives for quality and efficiency, can be tailored to different market conditions and organizational settings. Payment contracts that are "incentive compatible" (directly encouraging better care and reduced cost, mitigating gaming, and selectively inducing clinically efficient providers to participate) will focus differentially on evidence-based care processes, will right-size and structure incentives to avoid crowd-out of providers' intrinsic motivation, and will align patient incentives with value. Future research should address the details of putting these and related principles into practice; further, by deploying these insights in payment design, policy makers will improve health care value for patients and purchasers. © Health Research and Educational Trust.

  11. Holocene soil pH changes and East Asian summer monsoon evolution derived from loess brGDGTs in the northeastern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Sun, Q.; Zhao, H.

    2017-12-01

    GDGTs-based proxies have been used successfully to reconstruct paleo-temperature from loess-paleosol sequences during the past few years. However, the pH variations of loess sediments derived from GDGTs over geological history remain poorly constrained. Here we present two pH records spanning the last 12 ka (1 ka = 1000 years) based on the modified cyclization ratio index (CBT') of the branched GDGTs, using a regional CBT'-pH empirical relationship, from two well-dated loess-paleosol sections (YWY14 and SHD09) in the northeastern Tibetan Plateau. The results indicate that a slightly alkaline condition occurred during 12-8.5 ka with pH values ranging from 6.98 to 7.24; CBT'-derived pH then decreased from 8.5 to 6.5 ka, with values from 7.19 to 6.49, and gradually increased thereafter. The reconstructed pH values from the topmost samples compare well with instrumental pH values of the surrounding surface soil. The lowest intervals of CBT'-derived pH values during the mid-Holocene in our records are consistent with the highest tree pollen percentages from adjacent lake sediments and the weakest regional aeolian activity, which indicates a moisture maximum during that period but conflicts with previous inferences of a wettest early Holocene from speleothem or ostracod shell oxygen isotope (δ18O) values. Taken together, we conclude that Holocene humidity evolution (wettest middle Holocene) in response to East Asian summer monsoon (EASM) changes exerts an important control on pH variations of loess deposits in the northeastern Tibetan Plateau. CBT'-derived pH variations can therefore potentially be used as an indicator of EASM evolution. In addition, we argue that speleothem or ostracod shell δ18O records are essentially a signal of the isotopic composition of precipitation rather than of EASM intensity.

  12. A Comparison of Modeled and Observed Ocean Mixed Layer Behavior in a Sea Breeze Influenced Coastal Region

    DTIC Science & Technology

    1993-12-21

    Latent (Lower Solid), Net Infrared (Dashed), and Net Heat Loss (Upper Solid - the Other 3 Summed) are Plotted, with Positive Values Indicating...gained from solar insolation, Qs, and the heat lost from the surface due to latent, Qe, sensible, Qh, and net infrared radiation, Qb is positive...five empirically derived dimensionless constants in the model. With the introduction of two new unknowns, <E> and <ww2>, the prediction of the upper

  13. Evidence-based ethics? On evidence-based practice and the "empirical turn" from normative bioethics

    PubMed Central

    Goldenberg, Maya J

    2005-01-01

    Background The increase in empirical methods of research in bioethics over the last two decades is typically perceived as a welcome broadening of the discipline, with increased integration of social and life scientists into the field and ethics consultants into the clinical setting; however, it also represents a loss of confidence in the typical normative and analytic methods of bioethics. Discussion The recent incipiency of "Evidence-Based Ethics" attests to this phenomenon and should be rejected as a solution to the current ambivalence toward the normative resolution of moral problems in a pluralistic society. While "evidence-based" is typically read in medicine and other life and social sciences as the empirically adequate standard of reasonable practice and a means for increasing certainty, I propose that the evidence-based movement in fact gains consensus by displacing normative discourse with aggregate or statistically derived empirical evidence as the "bottom line". Therefore, along with wavering on the fact/value distinction, evidence-based ethics threatens bioethics' normative mandate. The appeal of the evidence-based approach is that it offers a means of negotiating the demands of moral pluralism. Rather than appealing to explicit values that are likely not shared by all, "the evidence" is proposed to adjudicate between competing claims. Quantified measures are notably more "neutral" and democratic than liberal markers like "species normal functioning". Yet the positivist notion that claims stand or fall in light of the evidence is untenable; furthermore, the legacy of positivism entails the quieting of empirically non-verifiable (or at least non-falsifiable) considerations like moral claims and judgments. As a result, evidence-based ethics proposes to operate unchecked with the implicit normativity that accompanies the production and presentation of all biomedical and scientific facts. Summary The "empirical turn" in bioethics signals a need for reconsideration of the methods used for moral evaluation and resolution; however, the options should not include obscuring normative content with a seemingly neutral technical measure. PMID:16277663

  14. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (the NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical model, QAA, and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated against SeaWiFS images and shipboard measurements obtained during May 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations as measured by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error, as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using the SeaDAS (SeaWiFS Data Analysis System) default value for the optical cross section of phytoplankton (i.e., aph*(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased up to 29%.

  15. Development and evaluation of consensus-based sediment effect concentrations for polychlorinated biphenyls

    USGS Publications Warehouse

    MacDonald, Donald D.; Dipinto, Lisa M.; Field, Jay; Ingersoll, Christopher G.; Long, Edward R.; Swartz, Richard C.

    2000-01-01

    Sediment-quality guidelines (SQGs) have been published for polychlorinated biphenyls (PCBs) using both empirical and theoretical approaches. Empirically based guidelines have been developed using the screening-level concentration, effects range, effects level, and apparent effects threshold approaches. Theoretically based guidelines have been developed using the equilibrium-partitioning approach. Empirically-based guidelines were classified into three general categories, in accordance with their original narrative intents, and used to develop three consensus-based sediment effect concentrations (SECs) for total PCBs (tPCBs), including a threshold effect concentration, a midrange effect concentration, and an extreme effect concentration. Consensus-based SECs were derived because they estimate the central tendency of the published SQGs and, thus, reconcile the guidance values that have been derived using various approaches. Initially, consensus-based SECs for tPCBs were developed separately for freshwater sediments and for marine and estuarine sediments. Because the respective SECs were statistically similar, the underlying SQGs were subsequently merged and used to formulate more generally applicable SECs. The three consensus-based SECs were then evaluated for reliability using matching sediment chemistry and toxicity data from field studies, dose-response data from spiked-sediment toxicity tests, and SQGs derived from the equilibrium-partitioning approach. The results of this evaluation demonstrated that the consensus-based SECs can accurately predict both the presence and absence of toxicity in field-collected sediments. Importantly, the incidence of toxicity increases incrementally with increasing concentrations of tPCBs. Moreover, the consensus-based SECs are comparable to the chronic toxicity thresholds that have been estimated from dose-response data and equilibrium-partitioning models. Therefore, consensus-based SECs provide a unifying synthesis of existing SQGs, reflect causal rather than correlative effects, and accurately predict sediment toxicity in PCB-contaminated sediments.
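    A minimal sketch of forming a consensus value from published guidelines sharing the same narrative intent (the use of a geometric mean and the example numbers are illustrative assumptions, not the authors' exact data):

        import math

        def consensus_sec(guidelines):
            """Geometric mean of published sediment-quality guideline values
            (e.g., all threshold-effect guidelines for total PCBs, ug/g dry wt)."""
            return math.exp(sum(math.log(v) for v in guidelines) / len(guidelines))

        print(round(consensus_sec([0.02, 0.03, 0.05, 0.07]), 3))  # hypothetical inputs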

  16. Testing a new Free Core Nutation empirical model

    NASA Astrophysics Data System (ADS)

    Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald

    2016-03-01

    The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
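    With the oscillation period held fixed, estimating a time-variable amplitude and phase inside each sliding window reduces to a linear least-squares fit of sine and cosine terms. A minimal sketch of one window (the ~430-day retrograde period is a commonly assumed a priori value, and the data arrays are placeholders):

        import numpy as np

        P = 430.21  # assumed magnitude of the FCN period in days (retrograde mode)

        def fit_window(t_days, residual):
            """Fit residual(t) ~ A cos(2 pi t / P) + B sin(2 pi t / P) + c over one
            window of nutation residuals; returns (amplitude, phase)."""
            w = 2.0 * np.pi / P
            M = np.column_stack([np.cos(w * t_days), np.sin(w * t_days),
                                 np.ones_like(t_days)])
            (A, B, c), *_ = np.linalg.lstsq(M, residual, rcond=None)
            return np.hypot(A, B), np.arctan2(B, A)

        # Slide a 400-day window in 1-day steps, refitting each time:
        # for t0 in starts: m = (t >= t0) & (t < t0 + 400); fit_window(t[m], dX[m])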

  17. A semi-empirical model for estimating surface solar radiation from satellite data

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Pattarapanitchai, Somjet; Wattan, Rungrat; Masiri, Itsara; Buntoung, Sumaman; Promsen, Worrapass; Tohsing, Korntip

    2013-05-01

    This paper presents a semi-empirical model for estimating surface solar radiation from satellite data for a tropical environment. The model expresses solar irradiance as a semi-empirical function of cloud index, aerosol optical depth, precipitable water, total column ozone and air mass. The cloud index data were derived from MTSAT-1R satellite, whereas the aerosol optical depth data were obtained from MODIS/Terra satellite. The total column ozone data were derived from OMI/AURA satellite and the precipitable water data were obtained from NCEP/NCAR. A five year period (2006-2010) of these data and global solar irradiance measured at four sites in Thailand namely, Chiang Mai (18.78 °N, 98.98 °E), Nakhon Pathom (13.82 °N, 100.04 °E), Ubon Ratchathani (15.25 °N, 104.87 °E) and Songkhla (7.20 °N, 100.60 °E), were used to derive the coefficients of the model. To evaluate its performance, the model was used to calculate solar radiation at four sites in Thailand namely, Phisanulok (16.93 °N, 100.24 °E), Kanchanaburi (14.02 °N, 99.54 °E), Nongkhai (17.87 °N, 102.72 °E) and Surat Thani (9.13 °N, 99.15 °E) and the results were compared with solar radiation measured at these sites. It was found that the root mean square difference (RMSD) between measured and calculated values of hourly solar radiation was in the range of 25.5-29.4%. The RMSD is reduced to 10.9-17.0% for the case of monthly average hourly radiation. The proposed model has the advantage in terms of the simplicity for applications and reasonable accuracy of the results.

  18. Stellar Diameters and Temperatures. III. Main-sequence A, F, G, and K Stars: Additional High-precision Measurements and Empirical Relations

    NASA Astrophysics Data System (ADS)

    Boyajian, Tabetha S.; von Braun, Kaspar; van Belle, Gerard; Farrington, Chris; Schaefer, Gail; Jones, Jeremy; White, Russel; McAlister, Harold A.; ten Brummelaar, Theo A.; Ridgway, Stephen; Gies, Douglas; Sturmann, Laszlo; Sturmann, Judit; Turner, Nils H.; Goldfinger, P. J.; Vargas, Norm

    2013-07-01

    Based on CHARA Array measurements, we present the angular diameters of 23 nearby, main-sequence stars, ranging in spectral type from A7 to K0, 5 of which are exoplanet host stars. We derive linear radii, effective temperatures, and absolute luminosities of the stars using Hipparcos parallaxes and measured bolometric fluxes. The new data are combined with previously published values to create an Angular Diameter Anthology of measured angular diameters to main-sequence stars (luminosity classes V and IV). This compilation consists of 125 stars with diameter uncertainties of less than 5%, ranging in spectral type from A to M. The large quantity of empirical data is used to derive color-temperature relations to an assortment of color indices in the Johnson (B V R_J I_J J H K), Cousins (R_C I_C), Kron (R_K I_K), Sloan (griz), and WISE (W3 W4) photometric systems. These relations have an average standard deviation of ~3% and are valid for stars with spectral types A0-M4. To derive even more accurate relations for Sun-like stars, we also determined these temperature relations omitting early-type stars (T_eff > 6750 K) that may have biased luminosity estimates because of rapid rotation; for this subset the dispersion is only ~2.5%. We find effective temperatures in agreement within a couple of percent for the interferometrically characterized sample of main-sequence stars compared to those derived via the infrared flux method and spectroscopic analysis.
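    Relations of this kind are low-order polynomial fits of effective temperature against a color index. A minimal sketch with placeholder numbers (the sample values and the cubic degree are illustrative, not the paper's published coefficients):

        import numpy as np

        # Placeholder pairs of (B - V) color and interferometric T_eff (K)
        b_v  = np.array([0.30, 0.45, 0.58, 0.68, 0.82, 1.00])
        teff = np.array([7000, 6500, 6050, 5700, 5300, 4900])

        coeffs = np.polyfit(b_v, teff, deg=3)       # T_eff = sum_k c_k (B-V)^k
        relation = np.poly1d(coeffs)

        print(relation(0.65))                        # predicted T_eff at B-V = 0.65
        scatter = np.std((teff - relation(b_v)) / teff) * 100
        print(scatter, "%")                          # cf. the ~3% dispersion quoted above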

  19. Measuring the effects of heat wave episodes on the human body's thermal balance

    NASA Astrophysics Data System (ADS)

    Katavoutas, George; Theoharatos, George; Flocas, Helena A.; Asimakopoulos, Dimosthenis N.

    2009-03-01

    During the peak of an extensive heat wave episode on 23-25 July 2007, simultaneous thermophysiological measurements were made in two non-acclimated healthy adults of different sex in a suburban area of Greater Athens, Greece. Based on experimental measurements of mean skin temperature and metabolic heat production, heat fluxes to and from the human body were calculated, and the biometeorological heat load (HL) index was determined according to the heat balance equation. Comparing experimental values with those derived from theoretical estimates revealed great heat stress for both individuals, especially the male, while the theoretical values underestimated heat stress. The study also revealed that thermophysiological factors, such as mean skin temperature and metabolic heat production, play an important role in determining heat flux patterns in the heat balance equation. The theoretical values of mean skin temperature as derived from an empirical equation may not be appropriate to describe the changes that take place in a non-acclimated individual. Furthermore, the changes in metabolic heat production were significant even for standard activity.

  20. Defining landscape resistance values in least-cost connectivity models for the invasive grey squirrel: a comparison of approaches using expert-opinion and habitat suitability modelling.

    PubMed

    Stevenson-Holt, Claire D; Watts, Kevin; Bellamy, Chloe C; Nevin, Owen T; Ramsey, Andrew D

    2014-01-01

    Least-cost models are widely used to study the functional connectivity of habitat within a varied landscape matrix. A critical step in the process is identifying resistance values for each land cover based upon the facilitating or impeding impact on species movement. Ideally resistance values would be parameterised with empirical data, but due to a shortage of such information, expert-opinion is often used. However, the use of expert-opinion is seen as subjective, human-centric and unreliable. This study derived resistance values from grey squirrel habitat suitability models (HSM) in order to compare the utility and validity of this approach with more traditional, expert-led methods. Models were built and tested with MaxEnt, using squirrel presence records and a categorical land cover map for Cumbria, UK. Predictions on the likelihood of squirrel occurrence within each land cover type were inverted, providing resistance values which were used to parameterise a least-cost model. The resulting habitat networks were measured and compared to those derived from a least-cost model built with previously collated information from experts. The expert-derived and HSM-inferred least-cost networks differ in precision. The HSM-informed networks were smaller and more fragmented because of the higher resistance values attributed to most habitats. These results are discussed in relation to the applicability of both approaches for conservation and management objectives, providing guidance to researchers and practitioners attempting to apply and interpret a least-cost approach to mapping ecological networks.
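    The two key steps described here (inverting a MaxEnt suitability surface into resistance, then running a least-cost search) can be sketched generically; the inversion formula, the resistance range, and the 8-neighbour Dijkstra search below are illustrative choices, not the authors' exact parameterization:

        import heapq
        import numpy as np

        def suitability_to_resistance(suitability, r_min=1.0, r_max=100.0):
            """Invert a 0-1 occurrence-likelihood raster into movement resistance:
            highly suitable land cover becomes cheap to cross, unsuitable costly."""
            return r_min + (1.0 - suitability) * (r_max - r_min)

        def least_cost(resistance, start, goal):
            """Dijkstra on an 8-connected grid; each move costs the mean resistance
            of the two cells, scaled by the step length."""
            rows, cols = resistance.shape
            dist = np.full((rows, cols), np.inf)
            dist[start] = 0.0
            heap = [(0.0, start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == goal:
                    return d
                if d > dist[r, c]:
                    continue
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                            nd = d + np.hypot(dr, dc) * 0.5 * (resistance[r, c] + resistance[rr, cc])
                            if nd < dist[rr, cc]:
                                dist[rr, cc] = nd
                                heapq.heappush(heap, (nd, (rr, cc)))
            return np.inf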

  1. Developing Empirical Lightning Cessation Forecast Guidance for the Cape Canaveral Air Force Station and Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Stano, Geoffrey T.; Fuelberg, Henry E.; Roeder, William P.

    2010-01-01

    This research addresses the 45th Weather Squadron's (45WS) need for improved guidance regarding lightning cessation at Cape Canaveral Air Force Station and Kennedy Space Center (KSC). KSC's Lightning Detection and Ranging (LDAR) network was the primary observational tool to investigate both cloud-to-ground and intracloud lightning. Five statistical and empirical schemes were created from LDAR, sounding, and radar parameters derived from 116 storms. Four of the five schemes were unsuitable for operational use since lightning advisories would be canceled prematurely, leading to safety risks to personnel. These include a correlation and regression tree analysis, three variants of multiple linear regression, event time trending, and the time delay from the greatest height of the maximum dBZ value to the last flash. These schemes failed to adequately forecast the maximum interval, the greatest time between any two flashes in the storm. The majority of storms had a maximum interval less than 10 min, which biased the schemes toward small values. Success was achieved with the percentile method (PM) by separating the maximum interval into percentiles for the 100 dependent storms.

  2. The observational and empirical thermospheric CO2 and NO power do not exhibit power-law behavior; an indication of their reliability

    NASA Astrophysics Data System (ADS)

    Varotsos, C. A.; Efstathiou, M. N.

    2018-03-01

    In this paper we investigate the evolution of the energy emitted by CO2 and NO from the Earth's thermosphere on a global scale using both observational and empirically derived data. We first analyze the daily power observations of CO2 and NO received from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) equipment on the NASA Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite for the entire period 2002-2016. We then perform the same analysis on the empirical daily power emitted by CO2 and NO that was derived recently from the infrared energy budget of the thermosphere during 1947-2016. The tool used for the analysis of the observational and empirical datasets is detrended fluctuation analysis, applied to investigate whether the power emitted by CO2 and by NO from the thermosphere exhibits power-law behavior. The results obtained from both observational and empirical data do not support power-law behavior. This conclusion reveals that the empirically derived data are characterized by the same intrinsic properties as the observational ones, thus enhancing the validity of their reliability.
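    Detrended fluctuation analysis tests for power-law behavior by checking whether the fluctuation function F(n) scales as n^alpha across box sizes. A minimal first-order DFA sketch (the standard algorithm, not the authors' specific implementation):

        import numpy as np

        def dfa(series, box_sizes):
            """First-order DFA: returns F(n) for each box size n; power-law behavior
            shows up as a straight line in log F(n) versus log n."""
            y = np.cumsum(series - np.mean(series))            # integrated profile
            F = []
            for n in box_sizes:
                n_boxes = len(y) // n
                sq = []
                for b in range(n_boxes):
                    seg = y[b * n:(b + 1) * n]
                    t = np.arange(n)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
                    sq.append(np.mean((seg - trend) ** 2))
                F.append(np.sqrt(np.mean(sq)))
            return np.array(F)

        # alpha is the slope of the log-log plot:
        # alpha, _ = np.polyfit(np.log(ns), np.log(dfa(x, ns)), 1)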

  3. Advancing Empirical Scholarship to Further Develop Evaluation Theory and Practice

    ERIC Educational Resources Information Center

    Christie, Christina A.

    2011-01-01

    Good theory development is grounded in empirical inquiry. In the context of educational evaluation, the development of empirically grounded theory has important benefits for the field and the practitioner. In particular, a shift to empirically derived theory will assist in advancing more systematic and contextually relevant evaluation practice, as…

  4. Investigating regional mobility in the southern hinterland of the Wari Empire: biogeochemistry at the site of Beringa, Peru.

    PubMed

    Knudson, Kelly J; Tung, Tiffiny A

    2011-06-01

    Empires have transformed political, social, and environmental landscapes in the past and present. Although much research on archaeological empires focuses on large-scale imperial processes, we use biogeochemistry and bioarchaeology to investigate how imperialism may have reshaped regional political organization and regional migration patterns in the Wari Empire of the Andean Middle Horizon (ca. AD 600-1000). Radiogenic strontium isotope analysis of human remains from the site of Beringa in the Majes Valley of southern Peru identified the geographic origins of individuals impacted by the Wari Empire. At Beringa, the combined archaeological human enamel and bone values range from (87)Sr/(86)Sr = 0.70802 - 0.70960, with a mean (87)Sr/(86)Sr = 0.70842 ± 0.00027 (1σ, n = 52). These data are consistent with radiogenic strontium isotope data from the local fauna in the Majes Valley and imply that most individuals were local inhabitants, rather than migrants from the Wari heartland or some other locale. There were two outliers at Beringa, and these "non-local" individuals may have derived from other parts of the South Central Andes. This is consistent with our understanding of expansive trade networks and population movement in the Andean Middle Horizon, likely influenced by the policies of the Wari Empire. Although not a Wari colony, the incorporation of small sites like Beringa into the vast social and political networks of the Middle Horizon resulted in small numbers of migrants at Beringa. Copyright © 2011 Wiley-Liss, Inc.

  5. On Allometry Relations

    NASA Astrophysics Data System (ADS)

    West, Damien; West, Bruce J.

    2012-07-01

    There are a substantial number of empirical relations that began with the identification of a pattern in data; were shown to have a terse power-law description; were interpreted using existing theory; reached the level of "law" and were given a name; only to subsequently fade away when it proved impossible to connect the "law" with a larger body of theory and/or data. Various forms of allometry relations (ARs) have followed this path. The ARs in biology are nearly two hundred years old and those in ecology, geophysics, physiology and other areas of investigation are not that much younger. In general, if X is a measure of the size of a complex host network and Y is a property of a complex subnetwork embedded within the host network, a theoretical AR exists between the two when Y = aX^b. We emphasize that the reductionistic models of AR interpret X and Y as dynamic variables, albeit the ARs themselves are explicitly time independent even though in some cases the parameter values change over time. On the other hand, the phenomenological models of AR are based on the statistical analysis of data and interpret X and Y as averages to yield the empirical AR: ⟨Y⟩ = a⟨X⟩^b. Modern explanations of AR begin with the application of fractal geometry and fractal statistics to scaling phenomena. The detailed application of fractal geometry to the explanation of theoretical ARs in living networks is slightly more than a decade old and, although well received, it has not been universally accepted. An alternate perspective is given by the empirical AR that is derived using linear regression analysis of fluctuating data sets. We emphasize that the theoretical and empirical ARs are not the same and review theories "explaining" AR from both the reductionist and statistical fractal perspectives. The probability calculus is used to systematically incorporate both views into a single modeling strategy. We conclude that the empirical AR is entailed by the scaling behavior of the probability density, which is derived using the probability calculus.
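    The empirical AR described above is ordinary least squares on log-transformed data, which is why it constrains averages rather than the underlying dynamic variables. A minimal sketch with synthetic data (all numbers illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.lognormal(mean=3.0, sigma=1.0, size=200)       # host-network size
        Y = 0.8 * X**0.75 * rng.lognormal(0.0, 0.2, size=200)  # subnetwork property

        # Fit log Y = log a + b log X; b is the allometry exponent
        b, log_a = np.polyfit(np.log(X), np.log(Y), 1)
        print(b, np.exp(log_a))   # recovers roughly b = 0.75, a = 0.8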

  6. Identifying Early Childhood Personality Dimensions Using the California Child Q-Set and Prospective Associations With Behavioral and Psychosocial Development

    PubMed Central

    Wilson, Sylia; Schalet, Benjamin D.; Hicks, Brian M.; Zucker, Robert A.

    2013-01-01

    The present study used an empirical, “bottom-up” approach to delineate the structure of the California Child Q-Set (CCQ), a comprehensive set of personality descriptors, in a sample of 373 preschool-aged children. This approach yielded two broad trait dimensions, Adaptive Socialization (emotional stability, compliance, intelligence) and Anxious Inhibition (emotional/behavioral introversion). Results demonstrate the value of using empirical derivation to investigate the structure of personality in young children, speak to the importance of early-evident personality traits for adaptive development, and are consistent with a growing body of evidence indicating that personality structure in young children is similar, but not identical to, that in adults, suggesting a model of broad personality dimensions in childhood that evolve into narrower traits in adulthood. PMID:24223448

  7. Empirical-statistical downscaling of reanalysis data to high-resolution air temperature and specific humidity above a glacier surface (Cordillera Blanca, Peru)

    NASA Astrophysics Data System (ADS)

    Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg

    2010-06-01

    Recently initiated observation networks in the Cordillera Blanca (Peru) provide temporally high-resolution, yet short-term, atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis data to air temperature and specific humidity, measured at the tropical glacier Artesonraju (northern Cordillera Blanca). The ESD modeling procedure includes combined empirical orthogonal function and multiple regression analyses and a double cross-validation scheme for model evaluation. Apart from the selection of predictor fields, the modeling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice using both single-field and mixed-field predictors. Statistical transfer functions are derived individually for different months and times of day. The forecast skill largely depends on month and time of day, ranging from 0 to 0.8. The mixed-field predictors perform better than the single-field predictors. The ESD model shows added value, at all time scales, against simpler reference models (e.g., the direct use of reanalysis grid point values). The ESD model forecast 1960-2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation but is sensitive to the chosen predictor type.

  8. Deriving Multidimensional Poverty Indicators: Methodological Issues and an Empirical Analysis for Italy

    ERIC Educational Resources Information Center

    Coromaldi, Manuela; Zoli, Mariangela

    2012-01-01

    Theoretical and empirical studies have recently adopted a multidimensional concept of poverty. There is considerable debate about the most appropriate degree of multidimensionality to retain in the analysis. In this work we add to the received literature in two ways. First, we derive indicators of multiple deprivation by applying a particular…

  9. Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects

    NASA Astrophysics Data System (ADS)

    Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca

    2018-02-01

    Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling for the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
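    For orientation, the expansion such a metamodeling generalizes is usually written, for the symmetry-energy part, in terms of the empirical parameters and the variable x = (n - n_sat)/(3 n_sat); schematically (standard notation assumed, not quoted from the paper):

        e_{\mathrm{sym}}(n) \simeq E_{\mathrm{sym}} + L_{\mathrm{sym}}\, x
            + \tfrac{1}{2} K_{\mathrm{sym}}\, x^{2} + \tfrac{1}{6} Q_{\mathrm{sym}}\, x^{3} + \cdots,
        \qquad x = \frac{n - n_{\mathrm{sat}}}{3\, n_{\mathrm{sat}}}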

  10. A comparison of daily water use estimates derived from constant-heat sap-flow probe values and gravimetric measurements in pot-grown saplings.

    PubMed

    McCulloh, Katherine A; Winter, Klaus; Meinzer, Frederick C; Garcia, Milton; Aranda, Jorge; Lachenbruch, Barbara

    2007-09-01

    Use of Granier-style heat dissipation sensors to measure sap flow is common in plant physiology, ecology and hydrology. There has been concern that any change to the original Granier design invalidates the empirical relationship between sap flux density and the temperature difference between the probes. Here, we compared daily water use estimates from gravimetric measurements with values from variable length heat dissipation sensors, which are a relatively new design. Values recorded during a one-week period were compared for three large pot-grown saplings of each of the tropical trees Pseudobombax septenatum (Jacq.) Dugand and Calophyllum longifolium Willd. For five of the six individuals, P values from paired t-tests comparing the two methods ranged from 0.12 to 0.43 and differences in estimates of total daily water use over the week of the experiment averaged < 3%. In one P. septenatum sapling, the sap flow sensors underestimated water use relative to the gravimetric measurements. This discrepancy could have been associated with naturally occurring gradients in temperature that reduced the difference in temperature between the probes, which would have caused the sensor method to underestimate water use. Our results indicate that substitution of variable length heat dissipation probes for probes of the original Granier design did not invalidate the empirical relationship determined by Granier between sap flux density and the temperature difference between probes.
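    For context, the original Granier design converts the probe temperature difference into sap flux density through a single empirical relationship; a sketch using the widely cited calibration constants from the general sap-flow literature (not re-derived in this paper):

        def sap_flux_density(dT, dT_max):
            """Granier-style calibration: sap flux density Fd in m3 m-2 s-1, from the
            measured probe temperature difference dT and its zero-flow value dT_max."""
            K = (dT_max - dT) / dT           # dimensionless flow index
            return 119e-6 * K ** 1.231       # widely cited empirical constants

        # Daily water use (kg/day) is roughly Fd * sapwood area (m2) * 1000 * 86400,
        # integrated over the day; this is the quantity compared gravimetrically above.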

  11. EPIC-Simulated and MODIS-Derived Leaf Area Index (LAI) ...

    EPA Pesticide Factsheets

    Leaf Area Index (LAI) is an important parameter in assessing vegetation structure for characterizing forest canopies over large areas at broad spatial scales using satellite remote sensing data. However, satellite-derived LAI products can be limited by obstructed atmospheric conditions yielding sub-optimal values, or complete non-returns. The United States Environmental Protection Agency’s Exposure Methods and Measurements and Computational Exposure Divisions are investigating the viability of supplemental modelled LAI inputs into satellite-derived data streams to support various regional and local scale air quality models for retrospective and future climate assessments. In this present study, one-year (2002) of plot level stand characteristics at four study sites located in Virginia and North Carolina are used to calibrate species-specific plant parameters in a semi-empirical biogeochemical model. The Environmental Policy Integrated Climate (EPIC) model was designed primarily for managed agricultural field crop ecosystems, but also includes managed woody species that span both xeric and mesic sites (e.g., mesquite, pine, oak, etc.). LAI was simulated using EPIC at a 4 km2 and 12 km2 grid coincident with the regional Community Multiscale Air Quality Model (CMAQ) grid. LAI comparisons were made between model-simulated and MODIS-derived LAI. Field/satellite-upscaled LAI was also compared to the corresponding MODIS LAI value. Preliminary results show field/satel

  12. Low temperature heat capacities and thermodynamic functions described by Debye-Einstein integrals.

    PubMed

    Gamsjäger, Ernst; Wiessner, Manfred

    2018-01-01

    Thermodynamic data of various crystalline solids are assessed from low temperature heat capacity measurements, i.e., from almost absolute zero to 300 K, by means of semi-empirical models. Previous studies frequently present fit functions with a large number of coefficients, resulting in almost perfect agreement with experimental data. It is, however, pointed out in this work that special care is required to avoid overfitting. Apart from anomalies like phase transformations, it is likely that data from calorimetric measurements can be fitted by a relatively simple Debye-Einstein integral with sufficient precision. Thereby, reliable values for the heat capacities, standard enthalpies, and standard entropies at T = 298.15 K are obtained. Standard thermodynamic functions of various compounds strongly differing in the number of atoms in the formula unit can be derived from this fitting procedure and are compared to the results of previous fitting procedures. The residuals are of course larger when the Debye-Einstein integral is applied instead of a high number of fit coefficients or connected splines, but the semi-empirical fit coefficients keep their meaning with respect to physics. It is suggested to use the Debye-Einstein integral fit as a standard method to describe heat capacities in the range between 0 and 300 K so that the derived thermodynamic functions are obtained on the same theory-related semi-empirical basis. Additional fitting is recommended when a precise description of data at ultra-low temperatures (0-20 K) is requested.
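    A minimal numerical sketch of the Debye part of such a fit (a single Debye term only; the paper's model combines Debye and Einstein contributions, and theta_D is a fitted parameter):

        import numpy as np
        from scipy.integrate import quad

        R = 8.314462618  # gas constant, J mol-1 K-1

        def debye_cv(T, theta_D, n_atoms=1.0):
            """Debye molar heat capacity:
            Cv = 9 n R (T/theta_D)^3 * Int_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx."""
            if T <= 0.0:
                return 0.0
            upper = theta_D / T
            integral, _ = quad(lambda x: x**4 * np.exp(x) / np.expm1(x)**2, 0.0, upper)
            return 9.0 * n_atoms * R * (T / theta_D)**3 * integral

        # Standard entropy then follows by integrating Cp/T (Cv ~ Cp at low T):
        # S_298 = quad(lambda T: debye_cv(T, theta_D) / T, 0.0, 298.15)[0]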

  13. An empirically based conceptual framework for fostering meaningful patient engagement in research.

    PubMed

    Hamilton, Clayon B; Hoens, Alison M; Backman, Catherine L; McKinnon, Annette M; McQuitty, Shanon; English, Kelly; Li, Linda C

    2018-02-01

    Patient engagement in research (PEIR) is promoted to improve the relevance and quality of health research, but has little conceptualization derived from empirical data. To address this issue, we sought to develop an empirically based conceptual framework for meaningful PEIR founded on a patient perspective. We conducted a qualitative secondary analysis of in-depth interviews with 18 patient research partners from a research centre-affiliated patient advisory board. Data analysis involved three phases: identifying the themes, developing a framework and confirming the framework. We coded and organized the data, and abstracted, illustrated, described and explored the emergent themes using thematic analysis. Directed content analysis was conducted to derive concepts from 18 publications related to PEIR to supplement, confirm or refute, and extend the emergent conceptual framework. The framework was reviewed by four patient research partners on our research team. Participants' experiences of working with researchers were generally positive. Eight themes emerged: procedural requirements, convenience, contributions, support, team interaction, research environment, feel valued and benefits. These themes were interconnected and formed a conceptual framework to explain the phenomenon of meaningful PEIR from a patient perspective. This framework, the PEIR Framework, was endorsed by the patient research partners on our team. The PEIR Framework provides guidance on aspects of PEIR to address for meaningful PEIR. It could be particularly useful when patient-researcher partnerships are led by researchers with little experience of engaging patients in research. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  14. Empirical linelist of 13CH4 at 1.67 micron with lower state energies using intensities at 296 and 81 K

    NASA Astrophysics Data System (ADS)

    Lyulin, O. M.; Kassi, S.; Campargue, A.; Sung, K.; Brown, L. R.

    2010-04-01

    The high resolution absorption spectra of 13CH4 were recorded at 81 K by differential absorption spectroscopy using a cryogenic cell and a series of Distributed Feed Back (DFB) diode lasers, and at room temperature by Fourier transform spectroscopy. The investigated spectral region corresponds to the 13CH4 tetradecad containing 2ν3 near 5988 cm-1. Empirical linelists were constructed for 1629 transitions at 81 K (5852-6124 cm-1) and for 3488 features at room temperature (5850-6150 cm-1); the smallest observed intensity was 3×10^-26 cm/molecule at 81 K. The lower state energy values were derived for 1208 13CH4 transitions using line intensities at 81 K and 296 K. Over 400 additional features were seen only at 81 K. The quality of the resulting empirical lower state energy values is demonstrated by the excellent agreement with the already-assigned transitions and the clear propensity of the empirical low J values to be close to integers. The two line lists at 81 K and at 296 K, provided as Supplementary Material, will enable future theoretical analyses of the upper 13CH4 tetradecad. Acknowledgements: O.M. Lyulin (IAO, Tomsk) is grateful to the French Embassy in Moscow for a two-month visiting support at Grenoble University. This work is part of the ANR project "CH4@Titan" (ref: BLAN08-2_321467). The support by RFBR (Grant RFBR 09-05-92508-ИК_а), CRDF (grant RUG1-2954-TO-09) and the Groupement de Recherche International SAMIA between CNRS (France), RFBR (Russia) and CAS (China) is acknowledged. Part of the research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
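    The two-temperature derivation of the lower state energy E'' rests on the Boltzmann factor in the line intensity, S(T) ∝ exp(-c2 E''/T)/Q(T) with c2 = hc/k the second radiation constant; taking the ratio of a line's intensities at 296 K and 81 K and solving for E'' gives, schematically (standard method, notation assumed):

        E'' = \frac{\ln\left[\frac{S(296)}{S(81)} \cdot \frac{Q(296)}{Q(81)}\right]}
                   {c_2 \left(\frac{1}{81} - \frac{1}{296}\right)}

    so a line that weakens strongly on cooling (large intensity ratio) has a high lower state energy.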

  15. Semi-empirical model for retrieval of soil moisture using RISAT-1 C-Band SAR data over a sub-tropical semi-arid area of Rewari district, Haryana (India)

    NASA Astrophysics Data System (ADS)

    Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.

    2018-03-01

    We have estimated soil moisture (SM) using the circular horizontal polarization backscattering coefficient (σ⁰_RH), the difference between the circular vertical and horizontal coefficients (σ⁰_RV - σ⁰_RH) from FRS-1 data of the Radar Imaging Satellite (RISAT-1), and surface roughness expressed as RMS height (RMS_height). We examined the performance of FRS-1 in retrieving SM under a wheat crop at tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate SM of the upper soil layer using RISAT-1 SAR data, rather than relying on existing empirical models based on the single parameter σ⁰. Near-surface SM measurements were related to σ⁰_RH and σ⁰_RV - σ⁰_RH derived from the 5.35 GHz (C-band) image of RISAT-1, and to RMS_height. The roughness component, expressed as RMS_height, showed a good positive correlation with σ⁰_RV - σ⁰_RH (R² = 0.65). By considering all the major influencing factors (σ⁰_RH, σ⁰_RV - σ⁰_RH, and RMS_height), an SEM was developed in which predicted volumetric SM depends on σ⁰_RH, σ⁰_RV - σ⁰_RH, and RMS_height. This SEM showed R² of 0.87, adjusted R² of 0.85, multiple R = 0.94, and a standard error of 0.05 at the 95% confidence level. Validation of the SM derived from the semi-empirical model against observed measurements (SM_Observed) showed root mean square error (RMSE) = 0.06, relative RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 (≈1), index of agreement (d) = 1, coefficient of determination (R²) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, and variance of the distribution of differences (S_d²) = 0.004. The developed SEM performed better in estimating SM than the Topp empirical model, which is based only on σ⁰. By using the developed SEM, top-soil SM can be estimated with a low mean absolute percent error (MAPE) = 1.39 and can be used for operational applications.

  16. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2018-04-01

    Having great impacts on human lives, global warming and associated sea level rise are believed to be strongly linked to anthropogenic causes. A statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on an empirical dynamic control system, taking into account climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historic data from 1880 to 2001, we obtained higher correlations than those from other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust, as it notably diminished the instability associated with varying initial values. Such results suggest that the model not only significantly enhances the global mean reconstructions of temperature and sea level but also may have the potential to improve future projections.

  17. Body composition of Colombian women.

    PubMed

    Spurr, G B; Reina, J C; Li, S J; de Orozco, B; Dufour, D L

    1994-08-01

    Measurements of anthropometry and total body water (TBW) were made in 99 women 19-44 y of age living in socioeconomically deprived circumstances in Cali, Colombia. TBW was measured by dilution of deuterium oxide. An empirical equation for estimating lean body mass (LBM) was derived and applied satisfactorily to an independent study group. Comparisons were also made with body-composition values obtained by the Durnin and Womersley equations and an equation derived from rural women living in Guatemala. Neither set of equations was suitable for use with the Colombian subjects because both significantly overestimated LBM and therefore underestimated body fat. Lower values of standing height in older women suggest that they may have been subjected to more severe undernutrition during their growth than the younger subjects. When compared with a group of US women, Colombian subjects were less physically fit and had greater subcutaneous-fat deposits, which were distributed over the trunk and limbs, whereas body mass indexes and waist-hip ratios were not significantly different.

  18. Intervals for posttest probabilities: a comparison of 5 methods.

    PubMed

    Mossman, D; Berger, J O

    2001-01-01

    Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
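    The quantity being intervalled is the posttest probability from Bayes' rule in odds form. A minimal sketch of the point estimate plus a simulation interval in the spirit of the approach the authors favor (the Jeffreys Beta(0.5, 0.5) priors and the counts are illustrative assumptions):

        import numpy as np

        def posttest_prob(pretest, sens, spec):
            """Positive predictive value via the odds form of Bayes' rule."""
            lr_pos = sens / (1.0 - spec)                 # positive likelihood ratio
            odds = pretest / (1.0 - pretest) * lr_pos
            return odds / (1.0 + odds)

        def simulated_interval(tp, fn, tn, fp, d_pos, d_neg, n=100_000, seed=1):
            """Draw sensitivity, specificity, and pretest probability from Beta
            posteriors, propagate through Bayes' rule, report a 95% interval."""
            rng = np.random.default_rng(seed)
            sens = rng.beta(tp + 0.5, fn + 0.5, n)
            spec = rng.beta(tn + 0.5, fp + 0.5, n)
            prev = rng.beta(d_pos + 0.5, d_neg + 0.5, n)
            return np.percentile(posttest_prob(prev, sens, spec), [2.5, 97.5])

        print(posttest_prob(0.20, 0.90, 0.85))           # point estimate
        print(simulated_interval(45, 5, 85, 15, 20, 80)) # hypothetical study counts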

  19. Accurate electronic and chemical properties of 3d transition metal oxides using a calculated linear response U and a DFT + U(V) method.

    PubMed

    Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R

    2015-04-14

    We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find calculated values of U lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reaction energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean absolute error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.

  20. How warm is too warm for the life cycle of actinopterygian fishes?

    PubMed Central

    Motani, Ryosuke; Wainwright, Peter C.

    2015-01-01

    We investigated the highest constant temperature at which actinopterygian fishes can complete their life cycles, based on an oxygen supply model for cleavage-stage eggs. This stage is one of the most heat-sensitive periods during the life cycle, likely reflecting the exhaustion of maternally supplied heat shock proteins without new production. The model suggests that average eggs would not develop normally under a constant temperature of about 36 °C or higher. This estimate matches published empirical values derived from laboratory and field observations. Spermatogenesis is more heat sensitive than embryogenesis in fishes, so the threshold may indeed be lower, at about 35 °C, unless actinopterygian fishes evolve heat tolerance during spermatogenesis as in birds. Our model also predicts an inverse relationship between egg size and temperature, and empirical data support this prediction. Therefore, the average egg size, and hence hatching size, is expected to shrink in a greenhouse world, but a feeding function prohibits the survival of very small hatchlings, posing a limit to the shrinkage. It was once suggested that a marine animal community may be sustained under temperatures up to about 38 °C, and this value is being used, for example, in paleotemperature reconstruction. A revision of the value is overdue. PMID:26166622

  1. Dietary patterns in the Avon Longitudinal Study of Parents and Children

    PubMed Central

    Jones, Louise R.; Northstone, Kate

    2015-01-01

    Publications from the Avon Longitudinal Study of Parents and Children that used empirically derived dietary patterns were reviewed. The relationships of dietary patterns with socioeconomic background and childhood development were examined. Diet was assessed using food frequency questionnaires and food records. Three statistical methods were used: principal components analysis, cluster analysis, and reduced rank regression. Throughout childhood, children and parents have similar dietary patterns. The “health-conscious” and “traditional” patterns were associated with high intakes of fruits and/or vegetables and better nutrient profiles than the “processed” patterns. There was evidence of tracking in childhood diet, with the “health-conscious” patterns tracking most strongly, followed by the “processed” pattern. An “energy-dense, low-fiber, high-fat” dietary pattern was extracted using reduced rank regression; high scores on this pattern were associated with increasing adiposity. Maternal education was a strong determinant of pattern score or cluster membership; low educational attainment was associated with higher scores on processed, energy-dense patterns in both parents and children. The Avon Longitudinal Study of Parents and Children has provided unique insights into the value of empirically derived dietary patterns and has demonstrated that they are a useful tool in nutritional epidemiology. PMID:26395343

  2. Semi-empirical models of the wind in cool supergiant stars

    NASA Technical Reports Server (NTRS)

    Kuin, N. P. M.; Ahmad, Imad A.

    1988-01-01

    A self-consistent semi-empirical model for the wind of the supergiant in zeta Aurigae type systems is proposed. The damping of the Alfven waves which are assumed to drive the wind is derived from the observed velocity profile. Solution of the ionization balance and energy equation gives the temperature structure for given stellar magnetic field and wave flux. Physically acceptable solutions of the temperature structure place limits on the stellar magnetic field. A crude formula for a critical mass loss rate is derived. For a mass loss rate below the critical value the wind cannot be cool. Comparison between the observed and the critical mass loss rate suggests that the proposed theory may provide an explanation for the coronal dividing line in the Hertzsprung-Russell diagram. The physical explanation may be that the atmosphere has a cool wind, unless it is physically impossible to have one. Stars which cannot have a cool wind release their nonthermal energy in an outer atmosphere at coronal temperatures. It is possible that in the absence of a substantial stellar wind the magnetic field has less incentive to extend radially outward, and coronal loop structures may become more dominant.

  3. What makes people leave their food? The interaction of personal and situational factors leading to plate leftovers in canteens.

    PubMed

    Lorenz, Bettina Anne-Sophie; Hartmann, Monika; Langen, Nina

    2017-09-01

    In order to provide a basis for the reduction of food losses, our study analyzes individual food choice, eating and leftover behavior in a university canteen by consideration of personal, social and environmental determinants. Based on an extended literature review, a structural equation model is derived and empirically tested for a sample of 343 students. The empirical estimates support the derived model with a good overall model fit and sufficient R² values for dependent variables. Hence, our results provide evidence for a general significant impact of behavioral intention and related personal and social determinants as well as for the relevance of environmental/situational determinants such as portion sizes and palatability of food for plate leftovers. Moreover, we find that environmental and personal determinants are interrelated and that the impact of different determinants is relative to perceived time constraints during a visit to the university canteen. Accordingly, we conclude that simple measures to decrease avoidable food waste may take effect via complex and interrelated behavioral structures and that future research should focus on these effects to understand and change food leftover behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Permutation methods for the structured exploratory data analysis (SEDA) of familial trait values.

    PubMed

    Karlin, S; Williams, P T

    1984-07-01

    A collection of functions that contrast familial trait values between and across generations is proposed for studying transmission effects and other collateral influences in nuclear families. Two classes of structured exploratory data analysis (SEDA) statistics are derived from ratios of these functions. SEDA-functionals are the empirical cumulative distributions of the ratio of the two contrasts computed within each family. SEDA-indices are formed by first averaging the numerator and denominator contrasts separately over the population and then forming their ratio. The significance of SEDA results is determined by a spectrum of permutation techniques that selectively shuffle the trait values across families. The process systematically alters certain family structure relationships while keeping other familial relationships intact. The methodology is applied to five data examples of plasma total cholesterol concentrations, reported height values, dermatoglyphic pattern intensity index scores, measurements of dopamine-beta-hydroxylase activity, and psychometric cognitive test results.
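
    The shuffling idea lends itself to a compact sketch. The contrast functions below are illustrative stand-ins, not the paper's actual SEDA functionals; only the permutation scaffolding, shuffling trait values across families while preserving family sizes, reflects the described method.

```python
import numpy as np

rng = np.random.default_rng(1)

def seda_index(families):
    """Illustrative SEDA-style index: population-averaged between-generation
    contrast divided by a population-averaged within-generation contrast.
    (These contrast functions are stand-ins, not the paper's definitions.)"""
    between = np.mean([abs(np.mean(kids) - np.mean(parents))
                       for parents, kids in families])
    within = np.mean([np.std(parents) + np.std(kids)
                      for parents, kids in families])
    return between / within

def permutation_p_value(families, n_perm=2000):
    """Shuffle trait values across families while keeping family sizes
    intact, and compare the observed index against the null distribution."""
    observed = seda_index(families)
    all_vals = np.concatenate([np.concatenate([p, k]) for p, k in families])
    sizes = [(len(p), len(k)) for p, k in families]
    null = []
    for _ in range(n_perm):
        shuffled = rng.permutation(all_vals)
        fams, i = [], 0
        for n_par, n_kid in sizes:
            fams.append((shuffled[i:i + n_par],
                         shuffled[i + n_par:i + n_par + n_kid]))
            i += n_par + n_kid
        null.append(seda_index(fams))
    return float(np.mean(np.asarray(null) >= observed))

fams = [(rng.normal(0, 1, 2), rng.normal(0, 1, 3)) for _ in range(12)]
print(permutation_p_value(fams))
```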

  5. Increasing Functional Communication in Non-Speaking Preschool Children: Comparison of PECS and VOCA

    ERIC Educational Resources Information Center

    Bock, Stacey Jones; Stoner, Julia B.; Beck, Ann R.; Hanley, Laurie; Prochnow, Jessica

    2005-01-01

    For individuals who have complex communication needs and for the interventionists who work with them, the collection of empirically derived data that support the use of an intervention approach is critical. The purposes of this study were to continue building an empirically derived base of support for, and to compare the relative effectiveness of…

  6. Interaction of Hurricane Katrina with Optically Complex Water in the Gulf of Mexico: Interpretation Using Satellite-Derived Inherent Optical Properties and Chlorophyll Concentration

    DTIC Science & Technology

    2009-04-01

    …Shelf, and into the Gulf of Mexico, empirically derived chl a increases were observed in the Tortugas Gyre circulation feature, and in adjacent waters. Analysis of the … hurricane interaction also influenced the Tortugas Gyre, a recognized circulation feature in the southern Gulf of Mexico induced by the flow of the…

  7. AAPI college students' willingness to seek counseling: the role of culture, stigma, and attitudes.

    PubMed

    Choi, Na-Yeun; Miller, Matthew J

    2014-07-01

    This study tested 4 theoretically and empirically derived structural equation models of Asian, Asian American, and Pacific Islanders' willingness to seek counseling with a sample of 278 college students. The models represented competing hypotheses regarding the manner in which Asian cultural values, European American cultural values, public stigma, stigma by close others, self-stigma, and attitudes toward seeking professional help related to willingness to seek counseling. We found that Asian and European American cultural values differentially related to willingness to seek counseling indirectly through specific indirect pathways (public stigma, stigma by close others, self-stigma, and attitudes toward seeking professional help). Our results also showed that the magnitude of model-implied relationships did not vary as a function of generational status. Study limitations, future directions for research, and implications for counseling are discussed.

  8. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
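
    For orientation, the linear step-up baseline the authors compare against is easy to state in code; this is a minimal sketch of the classical Benjamini-Hochberg procedure, not the resampling-based empirical Bayes method itself.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg linear step-up procedure: reject the hypotheses
    with the k smallest p-values, where k is the largest i such that
    p_(i) <= (i/m) * q. Returns a boolean rejection mask."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    thresh = q * (np.arange(1, m + 1) / m)
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index passing the line
        reject[order[:k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6, 0.9]))
```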

  9. Semi-empirical fragmentation model of meteoroid motion and radiation during atmospheric penetration

    NASA Astrophysics Data System (ADS)

    Revelle, D. O.; Ceplecha, Z.

    2002-11-01

    A semi-empirical fragmentation model (FM) of meteoroid motion, ablation, and radiation including two types of fragmentation is outlined. The FM was applied to observational data (height as function of time and the light curve) of Lost City, Innisfree and Benešov bolides. For the Lost City bolide we were able to fit the FM to the observed height as function of time with ±13 m and to the observed light curve with ±0.17 magnitude. Corresponding numbers for Innisfree are ±25 m and ±0.14 magnitude, and for Benešov ±46 m and ±0.19 magnitude. We also define apparent and intrinsic values of σ, K, and τ. Using older results and our fit of FM to the Lost City bolide we derived corrections to intrinsic luminous efficiencies expressed as functions of velocity, mass, and normalized air density.

  10. A Grounded Theory of Sexual Minority Women and Transgender Individuals' Social Justice Activism.

    PubMed

    Hagen, Whitney B; Hoover, Stephanie M; Morrow, Susan L

    2018-01-01

    Psychosocial benefits of activism include increased empowerment, social connectedness, and resilience. Yet sexual minority women (SMW) and transgender individuals with multiple oppressed statuses and identities are especially prone to oppression-based experiences, even within minority activist communities. This study sought to develop an empirical model to explain the diverse meanings of social justice activism situated in SMW and transgender individuals' social identities, values, and experiences of oppression and privilege. Using a grounded theory design, 20 SMW and transgender individuals participated in initial, follow-up, and feedback interviews. The most frequent demographic identities were queer or bisexual, White, middle-class women with advanced degrees. The results indicated that social justice activism was intensely relational, replete with multiple benefits, yet rife with experiences of oppression from within and outside of activist communities. The empirically derived model shows the complexity of SMW and transgender individuals' experiences, meanings, and benefits of social justice activism.

  11. Optical and Thermo-optical Properties of Polyimide-Single-Walled Carbon Nanotube Films: Experimental Results and Empirical Equations

    NASA Technical Reports Server (NTRS)

    Smith, Joseph G., Jr.; Connell, John W.; Watson, Kent A.; Danehy, Paul M.

    2005-01-01

    The incorporation of single-walled carbon nanotubes (SWNTs) into the bulk of space environmentally durable polymers at loading levels greater than or equal to 0.05 wt % has afforded thin films with surface and volume resistivities sufficient for electrostatic charge mitigation. However, the optical transparency at 500 nm decreased and the thermo-optical properties (solar absorptivity and thermal emissivity) increased with increased SWNT loading. These properties were also dependent on film thickness. The absorbance characteristics of the films as a function of SWNT loading and film thickness were measured and determined to follow the classical Beer-Lambert law. Based on these results, an empirical relationship was derived and molar absorptivities determined for both the SWNTs and polymer matrix to provide a predictive approximation of these properties. The molar absorptivity determined for SWNTs dispersed in the polymer was comparable to reported solution determined values for HiPco SWNTs.
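
    The predictive approximation rests on the classical two-component Beer-Lambert law, A = (eps_SWNT c_SWNT + eps_poly c_poly) l; a small sketch with placeholder absorptivities, concentrations, and film thickness, not the fitted values from the study.

```python
def absorbance(eps_swnt, c_swnt, eps_poly, c_poly, path_length_cm):
    """Two-component Beer-Lambert law: total absorbance is the sum of the
    SWNT and polymer-matrix contributions, A = sum(eps_i * c_i) * l.
    All parameter values here are illustrative placeholders."""
    return (eps_swnt * c_swnt + eps_poly * c_poly) * path_length_cm

# e.g. a 25-micrometer film (25e-4 cm) with hypothetical component values
A = absorbance(eps_swnt=4000.0, c_swnt=0.01, eps_poly=20.0, c_poly=1.0,
               path_length_cm=25e-4)
transmittance = 10 ** (-A)   # fraction of 500 nm light transmitted
print(f"A = {A:.3f}, T = {transmittance:.1%}")
```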

  12. An Empirically Derived Taxonomy for Personality Diagnosis: Bridging Science and Practice in Conceptualizing Personality

    PubMed Central

    Westen, Drew; Shedler, Jonathan; Bradley, Bekh; DeFife, Jared A.

    2013-01-01

    Objective The authors describe a system for diagnosing personality pathology that is empirically derived, clinically relevant, and practical for day-to-day use. Method A random national sample of psychiatrists and clinical psychologists (N=1,201) described a randomly selected current patient with any degree of personality dysfunction (from minimal to severe) using the descriptors in the Shedler-Westen Assessment Procedure–II and completed additional research forms. Results The authors applied factor analysis to identify naturally occurring diagnostic groupings within the patient sample. The analysis yielded 10 clinically coherent personality diagnoses organized into three higher-order clusters: internalizing, externalizing, and borderline-dysregulated. The authors selected the most highly rated descriptors to construct a diagnostic prototype for each personality syndrome. In a second, independent sample, research interviewers and patients’ treating clinicians were able to diagnose the personality syndromes with high agreement and minimal comorbidity among diagnoses. Conclusions The empirically derived personality prototypes described here provide a framework for personality diagnosis that is both empirically based and clinically relevant. PMID:22193534

  13. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    NASA Astrophysics Data System (ADS)

    Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.

    1992-09-01

    A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity, which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions obtained in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables, and dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
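
    A sketch of the fitting step such a model involves: ordinary least squares on the log peak amplitude against magnitude, log hypocentral distance, and site dummy variables. The functional form, coefficients, and synthetic data are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
M = rng.uniform(4.5, 7.5, n)                  # magnitude
r = rng.uniform(10.0, 200.0, n)               # hypocentral distance, km
site = rng.integers(0, 3, n)                  # three local site classes
site_dummies = np.eye(3)[site][:, 1:]         # class 0 is the reference
# Synthetic "observed" log peak velocity with site amplification and noise
log_peak = (0.5 * M - 1.2 * np.log10(r)
            + site_dummies @ np.array([0.2, 0.4])
            + rng.normal(0, 0.15, n))

# Fit log10(peak) = b0 + b1*M + b2*log10(r) + site terms by least squares
X = np.column_stack([np.ones(n), M, np.log10(r), site_dummies])
coef, *_ = np.linalg.lstsq(X, log_peak, rcond=None)
print("b0, b1, b2, site amplification terms:", np.round(coef, 3))
```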

  14. Examining the Stability of "DSM-IV" and Empirically Derived Eating Disorder Classification: Implications for "DSM-5"

    ERIC Educational Resources Information Center

    Peterson, Carol B.; Crow, Scott J.; Swanson, Sonja A.; Crosby, Ross D.; Wonderlich, Stephen A.; Mitchell, James E.; Agras, W. Stewart; Halmi, Katherine A.

    2011-01-01

    Objective: The purpose of this investigation was to derive an empirical classification of eating disorder symptoms in a heterogeneous eating disorder sample using latent class analysis (LCA) and to examine the longitudinal stability of these latent classes (LCs) and the stability of DSM-IV eating disorder (ED) diagnoses. Method: A total of 429…

  15. Student Response to Faculty Instruction (SRFI): An Empirically Derived Instrument to Measure Student Evaluations of Teaching

    ERIC Educational Resources Information Center

    Beitzel, Brian D.

    2013-01-01

    The Student Response to Faculty Instruction (SRFI) is an instrument designed to measure the student perspective on courses in higher education. The SRFI was derived from decades of empirical studies of student evaluations of teaching. This article describes the development of the SRFI and its psychometric attributes demonstrated in two pilot study…

  16. Development of an epiphyte indicator of nutrient enrichment ...

    EPA Pesticide Factsheets

    Metrics of epiphyte load on macrophytes were evaluated for use as quantitative biological indicators for nutrient impacts in estuarine waters, based on review and analysis of the literature on epiphytes and macrophytes, primarily seagrasses, but including some brackish and freshwater rooted macrophyte species. An approach is presented that empirically derives threshold epiphyte loads which are likely to cause specified levels of decrease in macrophyte response metrics such as biomass, shoot density, percent cover, production and growth. Data from 36 studies of 10 macrophyte species were pooled to derive relationships between epiphyte load and 25% and 50% seagrass response (reduction) levels, which are proposed as the primary basis for establishment of critical threshold values. Given multiple sources of variability in the response data, threshold ranges based on the range of values falling between the median and the 75th quantiles of observations at a given seagrass response level are proposed rather than single, critical point values. Four epiphyte load threshold categories (low, moderate, high, very high) are proposed. Comparison of values of epiphyte loads associated with 25 and 50% reductions in light to macrophytes suggests that the threshold ranges are realistic both in terms of the principal mechanism of impact to macrophytes and in terms of the magnitude of resultant impacts expressed by the macrophytes. Some variability in response levels was observed among

  17. A combined qualitative-quantitative approach for the identification of highly co-creative technology-driven firms

    NASA Astrophysics Data System (ADS)

    Milyakov, Hristo; Tanev, Stoyan; Ruskov, Petko

    2011-03-01

    Value co-creation is an emerging business and innovation paradigm; however, there is not enough clarity on the distinctive characteristics of value co-creation as compared to more traditional value creation approaches. The present paper summarizes the results from an empirically-derived research study focusing on the development of a systematic procedure for the identification of firms that are active in value co-creation. The study is based on a sample of 273 firms that were selected for being representative of the breadth of their value co-creation activities. The results include: i) the identification of the key components of value co-creation based on a research methodology using web search and Principal Component Analysis techniques, and ii) the comparison of two different classification techniques identifying the firms with the highest degree of involvement in value co-creation practices. To the best of our knowledge this is the first study using sophisticated data collection techniques to provide a classification of firms according to the degree of their involvement in value co-creation.

  18. Impact of orbit modeling on DORIS station position and Earth rotation estimates

    NASA Astrophysics Data System (ADS)

    Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav

    2014-04-01

    The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting the SRP scaling parameter or fixing it at pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, yields accuracy comparable to the dynamical model that employs precise non-conservative force modeling for most of the monitored station parameters. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of a cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was, however, not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series, or its mitigation by fixing the SRP parameters at pre-defined values.

  19. The Value of Satellite Early Warning Systems in Kenya and Guatemala: Results and Lessons Learned from Contingent Valuation and Loss Avoidance Approaches

    NASA Astrophysics Data System (ADS)

    Morrison, I.; Berenter, J. S.

    2017-12-01

    SERVIR, the joint USAID and NASA initiative, conducted two studies to assess the value of two distinctly different Early Warning Systems (EWS) in Guatemala and Kenya. Each study applied a unique method to assess EWS value. The evaluation team conducted a Contingent Valuation (CV) choice experiment to measure the value of a near-real time VIIRS and MODIS-based hot-spot mapping tool for forest management professionals targeting seasonal forest fires in Northern Guatemala. The team also conducted a survey-based Damage and Loss Avoidance (DaLA) exercise to calculate the monetary benefits of a MODIS-derived frost forecasting system for farmers in the tea-growing highlands of Kenya. This presentation compares and contrasts the use and utility of these two valuation approaches to assess EWS value. Although interest in these methods is growing, few empirical studies have applied them to benefit and value assessment for EWS. Furthermore, the application of CV and DaLA methods is much less common outside of the developed world. Empirical findings from these two studies indicated significant value for two substantially different beneficiary groups: natural resource management specialists and smallholder tea farmers. Additionally, the valuation processes generated secondary information that can help improve the format and delivery of both types of EWS outputs for user and beneficiary communities in Kenya and Guatemala. Based on lessons learned from the two studies, this presentation will also compare and contrast the methodological and logistical advantages, challenges, and limitations in applying the CV and DaLA methods in developing countries. By reviewing these two valuation methods alongside each other, the authors will outline conditions where they can be applied - individually or jointly - to other early warning systems and delivery contexts.

  20. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    NASA Astrophysics Data System (ADS)

    Howard, J. E.

    2014-12-01

    This study focusses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind as developed by Mutschlecner and Whitaker (1999) and uses the average wind speed between 45-55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.
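
    A hedged sketch of the first correction method as described: average the directed wind over 45-55 km altitude and scale the amplitude accordingly. The exponential form and the coefficient k are placeholders for illustration, not the published Mutschlecner-Whitaker values.

```python
import numpy as np

def wind_corrected_amplitude(amp_obs, wind_ms, heights_km, k=0.01):
    """Correct an observed infrasound amplitude for stratospheric wind, in
    the spirit of the Mutschlecner-Whitaker approach: average the wind
    component along the propagation direction over 45-55 km altitude and
    scale the amplitude by 10**(-k * v_s). The coefficient k (per m/s)
    is a hypothetical placeholder, not the published constant."""
    mask = (heights_km >= 45) & (heights_km <= 55)
    v_s = np.mean(wind_ms[mask])          # mean directed wind, 45-55 km
    return amp_obs * 10 ** (-k * v_s)

heights = np.arange(0, 80.0, 1.0)
winds = 20 * np.exp(-((heights - 50) / 10) ** 2)   # synthetic jet near 50 km
print(wind_corrected_amplitude(1.0, winds, heights))
```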

  1. A Multi-Band Analytical Algorithm for Deriving Absorption and Backscattering Coefficients from Remote-Sensing Reflectance of Optically Deep Waters

    NASA Technical Reports Server (NTRS)

    Lee, Zhong-Ping; Carder, Kendall L.

    2001-01-01

    A multi-band analytical (MBA) algorithm is developed to retrieve absorption and backscattering coefficients for optically deep waters, which can be applied to data from past and current satellite sensors, as well as data from hyperspectral sensors. This MBA algorithm applies a remote-sensing reflectance model derived from the Radiative Transfer Equation, and values of absorption and backscattering coefficients are analytically calculated from values of remote-sensing reflectance. There are only limited empirical relationships involved in the algorithm, which implies that this MBA algorithm could be applied to a wide dynamic range of waters. Applying the algorithm to a simulated non-"Case 1" data set, which has no relation to the development of the algorithm, the percentage error for the total absorption coefficient at 440 nm, a(440), is approximately 12% for a range of 0.012-2.1 per meter (approximately 6% for a(440) less than approximately 0.3 per meter), while a traditional band-ratio approach returns a percentage error of approximately 30%. Applying it to a field data set ranging from 0.025 to 2.0 per meter, the result for a(440) is very close to that using a full spectrum optimization technique (9.6% difference). Compared to the optimization approach, the MBA algorithm cuts the computation time dramatically with only a small sacrifice in accuracy, making it suitable for processing large data sets such as satellite images. Significant improvements over empirical algorithms have also been achieved in retrieving the optical properties of optically deep waters.

  2. Petrophysics of low-permeability medina sandstone, northwestern Pennsylvania, Appalachian Basin

    USGS Publications Warehouse

    Castle, J.W.; Byrnes, A.P.

    1998-01-01

    Petrophysical core testing combined with geophysical log analysis of low-permeability, Lower Silurian sandstones of the Appalachian basin provides guidelines and equations for predicting gas producibility. Permeability values are predictable from the borehole logs by applying empirically derived equations based on correlation between in-situ porosity and in-situ effective gas permeability. An Archie-form equation provides reasonable accuracy of log-derived water saturations because of saturated brine salinities and low clay content in the sands. Although measured porosity and permeability average less than 6% and 0.1 mD, infrequent values as high as 18% and 1,048 mD occur. Values of effective gas permeability at irreducible water saturation (Swi) range from 60% to 99% of routine values for the highest permeability rocks to several orders of magnitude less for the lowest permeability rocks. Sandstones having porosity greater than 6% and effective gas permeability greater than 0.01 mD exhibit Swi less than 20%. With decreasing porosity, Swi sharply increases to values near 40% at 3% porosity. Analysis of cumulative storage and flow capacity indicates zones with porosity greater than 6% generally contain over 90% of flow capacity and hold a major portion of storage capacity. For rocks with Swi < 20%, gas relative permeabilities exceed 45%. Gas relative permeability and hydrocarbon volume decrease rapidly with increasing Swi as porosity drops below 6%. At Swi above 40%, gas relative permeabilities are less than approximately 10%.
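
    The log-based permeability prediction amounts to a power-law (log-linear) regression of permeability on porosity; a sketch with hypothetical core data, not the study's measurements.

```python
import numpy as np

# Hypothetical core data: in-situ porosity (%) and effective gas
# permeability (mD), standing in for paired core measurements.
porosity = np.array([2.5, 3.0, 4.2, 5.1, 6.0, 7.5, 9.0, 12.0])
perm_md  = np.array([1e-4, 4e-4, 2e-3, 8e-3, 2e-2, 1e-1, 5e-1, 4.0])

# Fit log10(k) = log_a + b * log10(phi), i.e. k = a * phi**b
b, log_a = np.polyfit(np.log10(porosity), np.log10(perm_md), 1)
print(f"log10(k) = {log_a:.2f} + {b:.2f} * log10(phi)")

predict = lambda phi: 10 ** (log_a + b * np.log10(phi))   # k in mD
print(f"predicted k at 6% porosity: {predict(6.0):.3g} mD")
```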

  3. Group Sequential Testing of the Predictive Accuracy of a Continuous Biomarker with Unknown Prevalence

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2015-01-01

    Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curve. The small sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180

  4. On the signal-to-noise ratio in IUE high-dispersion spectra

    NASA Technical Reports Server (NTRS)

    Leckrone, David S.; Adelman, Saul J.

    1989-01-01

    An observational and data reduction technique for fixed pattern noise (FPN) and random noise (RN) in fully extracted IUE high-dispersion spectra is described in detail, along with actual empirical values of signal-to-noise ratio (S/N) achieved. A co-addition procedure, involving SWP and LWR camera observations of the same spectrum at different positions in the image format, provides a basis to disentangle FPN from RN, allowing each average amplitude, within a given wavelength interval, to be estimated as a function of average flux number. Empirical curves, derived with the noise algorithm, make it possible to estimate the S/N in individual spectra at the wavelengths investigated. The average S/N at the continuum level in well-exposed stellar spectra varies from 10 to 20, for the orders analyzed, depending on position in the spectral format. The co-addition procedure yields an improvement in S/N by factors ranging from 2.3 to 2.9. Direct measurements of S/N in narrow, line-free wavelength intervals of individual and co-added spectra for weak-lined stars yield comparable, or in some cases somewhat higher, S/N values and improvement factors.

  5. Are Women Over-Represented in Dead-End Jobs? A Swedish Study Using Empirically Derived Measures of Dead-End Jobs

    ERIC Educational Resources Information Center

    Bihagen, Erik; Ohls, Marita

    2007-01-01

    It has been claimed that women experience fewer career opportunities than men do mainly because they are over-represented in "Dead-end Jobs" (DEJs). Using Swedish panel data covering 1.1 million employees with the same employer in 1999 and 2003, measures of DEJ are empirically derived from analyses of wage mobility. The results indicate…

  6. Diagnostic Classification of Eating Disorders in Children and Adolescents: How Does DSM-IV-TR Compare to Empirically-Derived Categories?

    ERIC Educational Resources Information Center

    Eddy, Kamryn T.; Le Grange, Daniel; Crosby, Ross D.; Hoste, Renee Rienecke; Doyle, Angela Celio; Smyth, Angela; Herzog, David B.

    2010-01-01

    Objective: The purpose of this study was to empirically derive eating disorder phenotypes in a clinical sample of children and adolescents using latent profile analysis (LPA), and to compare these latent profile (LP) groups to the DSM-IV-TR eating disorder categories. Method: Eating disorder symptom data collected from 401 youth (aged 7 through 19…

  7. Semi-empirical airframe noise prediction model

    NASA Technical Reports Server (NTRS)

    Hersh, A. S.; Putnam, T. W.; Lasagna, P. L.; Burcham, F. W., Jr.

    1976-01-01

    A semi-empirical maximum overall sound pressure level (OASPL) airframe noise model was derived. The noise radiated from aircraft wings and flaps was modeled by using the trailing-edge diffracted quadrupole sound theory derived by Ffowcs Williams and Hall. The noise radiated from the landing gear was modeled by using the acoustic dipole sound theory derived by Curle. The model was successfully correlated with maximum OASPL flyover noise measurements obtained at the NASA Dryden Flight Research Center for three jet aircraft - the Lockheed JetStar, the Convair 990, and the Boeing 747 aircraft.

  8. Comparability of children's sedentary time estimates derived from wrist worn GENEActiv and hip worn ActiGraph accelerometer thresholds.

    PubMed

    Boddy, Lynne M; Noonan, Robert J; Kim, Youngwon; Rowlands, Alex V; Welk, Greg J; Knowles, Zoe R; Fairclough, Stuart J

    2018-03-28

    To examine the comparability of children's free-living sedentary time (ST) derived from raw acceleration thresholds for wrist-mounted GENEActiv accelerometer data, with ST estimated using the waist-mounted ActiGraph 100 count·min⁻¹ threshold. Secondary data analysis. 108 10-11-year-old children (n=43 boys) from Liverpool, UK wore one ActiGraph GT3X+ and one GENEActiv accelerometer on their right hip and left wrist, respectively, for seven days. Signal vector magnitude (SVM; mg) was calculated using the ENMO approach for GENEActiv data. ST was estimated from hip-worn ActiGraph data, applying the widely used 100 count·min⁻¹ threshold. ROC analysis using 10-fold hold-out cross-validation was conducted to establish a wrist-worn GENEActiv threshold comparable to the hip ActiGraph 100 count·min⁻¹ threshold. GENEActiv data were also classified using three empirical wrist thresholds and equivalence testing was completed. Analysis indicated that a GENEActiv SVM value of 51 mg demonstrated fair to moderate agreement (Kappa: 0.32-0.41) with the 100 count·min⁻¹ threshold. However, the generated and empirical thresholds for GENEActiv devices were not significantly equivalent to the ActiGraph 100 count·min⁻¹ threshold. GENEActiv data classified using the 35.6 mg threshold intended for ActiGraph devices generated significantly equivalent ST estimates as the ActiGraph 100 count·min⁻¹ threshold. The newly generated and empirical GENEActiv wrist thresholds do not provide equivalent estimates of ST to the ActiGraph 100 count·min⁻¹ approach. More investigation is required to assess the validity of applying ActiGraph cutpoints to GENEActiv data. Future studies are needed to examine the backward compatibility of ST data and to produce a robust method of classifying SVM-derived ST. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
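
    One simple stand-in for the threshold-generation step is maximizing Youden's J over candidate SVM cutpoints against a criterion classification; the study's actual ROC analysis used 10-fold hold-out cross-validation, which is omitted in this sketch.

```python
import numpy as np

def best_threshold(svm_mg, is_sedentary):
    """Pick the SVM (mg) cutpoint maximizing Youden's J = sensitivity +
    specificity - 1 against a criterion classification. A simplified
    stand-in for the described ROC analysis (no cross-validation)."""
    best_j, best_t = -1.0, None
    for t in np.unique(svm_mg):
        pred = svm_mg <= t                     # low movement => sedentary
        tp = np.sum(pred & is_sedentary)
        tn = np.sum(~pred & ~is_sedentary)
        sens = tp / max(np.sum(is_sedentary), 1)
        spec = tn / max(np.sum(~is_sedentary), 1)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

rng = np.random.default_rng(6)
svm = np.concatenate([rng.gamma(2, 15, 500), rng.gamma(4, 40, 500)])
crit = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])  # synthetic criterion
print(best_threshold(svm, crit))
```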

  9. Contribution of competition for light to within-species variability in stomatal conductance

    NASA Astrophysics Data System (ADS)

    Loranty, Michael M.; Mackay, D. Scott; Ewers, Brent E.; Traver, Elizabeth; Kruger, Eric L.

    2010-05-01

    Sap flux (JS) measurements were collected across two stands dominated by either trembling aspen or sugar maple in northern Wisconsin. Observed canopy transpiration (EC-obs) values derived from JS were used to parameterize the Terrestrial Regional Ecosystem Exchange Simulator ecosystem model. Modeled values of stomatal conductance (GS) were used to determine reference stomatal conductance (GSref), a proxy for GS that removes the effects of temporal responses to vapor pressure deficit (D) on spatial patterns of GS. Values of GSref were compared to observations of soil moisture, several physiological variables, and a competition index (CI) derived from a stand inventory, to determine the underlying cause of observed variability. Considerable variability in GSref between individual trees was found, with values ranging from 20 to 200 mmol m⁻² s⁻¹ and 20 to 100 mmol m⁻² s⁻¹ at the aspen and maple stands, respectively. Model-derived values of GSref and a sensitivity to D parameter (m) showed good agreement with a known empirical relationship for both stands. At both sites, GSref did not vary with topographic position, as indicated by surface soil moisture. No relationships were observed between GSref and tree height (HT), and a weak correlation with sapwood area (AS) was only significant for aspen. Significant nonlinear inverse relationships between GSref and CI were observed at both stands. Simulations with uniform reductions in incident photosynthetically active radiation (Q0) resulted in better agreement between observed and simulated EC. Our results suggest a link between photosynthesis and plant hydraulics whereby individual trees subject to photosynthetic limitation as a result of competitive shading exhibit a dynamic stomatal response resulting in a more conservative strategy for managing hydrologic resources.

  10. Pre- and Post-equinox ROSINA production rates calculated using a realistic empirical coma model derived from AMPS-DSMC simulations of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu

    2016-04-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet coma (<400 km) of comet 67P for the pre-equinox orbit of comet 67P/Churyumov-Gerasimenko. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model to significantly further from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model becomes significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean state, empirical model is the ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on the single point measurement. In this presentation we will present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
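
    For contrast, the simpler Haser model that the empirical model improves upon has a closed form that is easy to state; the parameter values below are illustrative, not fitted to 67P.

```python
import numpy as np

def haser_density(r_km, Q, v_kms=0.8, tau_s=1e5):
    """Classical Haser model for a spherically symmetric coma: number
    density at cometocentric distance r for production rate Q (molecules/s),
    constant outflow speed v, and photodissociation lifetime tau:
    n(r) = Q / (4 pi v r^2) * exp(-r / (v * tau))."""
    r_m, v_m = r_km * 1e3, v_kms * 1e3
    return Q / (4 * np.pi * v_m * r_m**2) * np.exp(-r_m / (v_m * tau_s))

# e.g. density 100 km from the nucleus for an assumed Q of 1e26 molecules/s
print(f"{haser_density(100.0, 1e26):.3e} molecules per cubic meter")
```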

  11. Empirical ionization fractions in the winds and the determination of mass-loss rates for early-type stars

    NASA Technical Reports Server (NTRS)

    Lamers, H. J. G. L. M.; Gathier, R.; Snow, T. P.

    1980-01-01

    From a study of the UV lines in the spectra of 25 stars from O4 to B1, the empirical relations between the mean density in the wind and the ionization fractions of O VI, N V, Si IV, and the excited C III (2p ³P°) level were derived. Using these empirical relations, a simple relation was derived between the mass-loss rate and the column density of any of these four ions. This relation can be used for a simple determination of the mass-loss rate from O4 to B1 stars.

  12. Stopping Distances: An Excellent Example of Empirical Modelling.

    ERIC Educational Resources Information Center

    Lawson, D. A.; Tabor, J. H.

    2001-01-01

    Explores the derivation of empirical models for the stopping distance of a car being driven at a range of speeds. Indicates that the calculation of stopping distances makes an excellent example of empirical modeling because it is a situation that is readily understood and particularly relevant to many first-year undergraduates who are learning or…
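
    The classic stopping-distance decomposition the article builds on, thinking distance plus braking distance, fits in a few lines; the reaction time and friction coefficient below are illustrative assumptions.

```python
def stopping_distance(speed_ms, reaction_time_s=1.0, mu=0.7, g=9.81):
    """Empirical stopping-distance model of the kind the article discusses:
    thinking distance (speed x reaction time) plus braking distance
    (v^2 / (2 mu g)). Parameter values are illustrative assumptions."""
    thinking = speed_ms * reaction_time_s
    braking = speed_ms**2 / (2 * mu * g)
    return thinking + braking

for mph in (30, 50, 70):
    v = mph * 0.44704                      # mph -> m/s
    print(f"{mph} mph: {stopping_distance(v):.1f} m")
```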

  13. Structural Patterns in Empirical Research Articles: A Cross-Disciplinary Study

    ERIC Educational Resources Information Center

    Lin, Ling; Evans, Stephen

    2012-01-01

    This paper presents an analysis of the major generic structures of empirical research articles (RAs), with a particular focus on disciplinary variation and the relationship between the adjacent sections in the introductory and concluding parts. The findings were derived from a close "manual" analysis of 433 recent empirical RAs from high-impact…

  14. Solar wind driven empirical forecast models of the time derivative of the ground magnetic field

    NASA Astrophysics Data System (ADS)

    Wintoft, Peter; Wik, Magnus; Viljanen, Ari

    2015-03-01

    Empirical models are developed to provide 10-30-min forecasts of the magnitude of the time derivative of the local horizontal ground geomagnetic field (|dBh/dt|) over Europe. The models are driven by ACE solar wind data. A major part of the work has been devoted to the search and selection of datasets to support the model development. To simplify the problem, but at the same time capture sudden changes, 30-min maximum values of |dBh/dt| are forecast with a cadence of 1 min. Models are tested both with and without the use of ACE SWEPAM plasma data. It is shown that the models generally capture sudden increases in |dBh/dt| that are associated with sudden impulses (SI). The SI is the dominant disturbance source for geomagnetic latitudes below 50° N, with a minor contribution from substorms. However, on occasion, large disturbances associated with geomagnetic pulsations can be seen. At higher latitudes, longer-lasting disturbances associated with substorms are generally also captured. It is also shown that the models using only solar wind magnetic field data as input perform in most cases as well as models with plasma data. The models have been verified using different approaches, including the extremal dependence index, which is suitable for rare events.
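
    Constructing the forecast target as described, the 30-min maximum of |dBh/dt| at 1-min cadence, is straightforward; a sketch on synthetic 1-min magnetometer samples.

```python
import numpy as np

def rolling_max_dbdt(bh, window=30):
    """Build the forecast target described above: from 1-min samples of the
    horizontal field Bh (nT), compute |dBh/dt| (nT/min) and its maximum
    over the next `window` minutes, at a 1-min cadence."""
    dbdt = np.abs(np.diff(bh))                       # nT per minute
    return np.array([dbdt[i:i + window].max()
                     for i in range(len(dbdt) - window + 1)])

bh = np.cumsum(np.random.default_rng(5).normal(0, 2, 600))  # synthetic 10 h of Bh
target = rolling_max_dbdt(bh)
print(target[:5])
```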

  15. [The midwives of Guadalajara (México) in the 19th century, the plundering of their art].

    PubMed

    Díaz Robles, Laura Catalina; Oropeza Sandoval, Luciano

    2007-01-01

    This study examines the social devaluation of the knowledge and practice used by midwives in their work. The research is limited to historical events that took place during the 19th century in the city of Guadalajara, capital of the state of Jalisco in Mexico. The study shows how the displacement and subordination of these women were associated with the higher social status of physicians. Supported by advances in medicine and by the authority derived from the knowledge acquired through formal educational institutions, doctors started to undermine the value of empirical knowledge and subordinate it to the knowledge that came from these advances. It is shown how doctors detracted from and subordinated the midwife to the scientific-employment field of medicine by using a discourse that degraded empirical knowledge and by institutionalizing training courses that tended to ignore the practical know-how of these women and replace it with knowledge derived from scientific medicine. The study is based on information from archives and scientific journals of the time: Archivo Fondos Especiales de la Biblioteca Pública de Jalisco, Archivo Histórico de Jalisco, Archivo Histórico de la Universidad de Guadalajara, Archivo Municipal de Guadalajara and Revista Médica.

  16. Anisotropy of the Fermi surface, Fermi velocity, many-body enhancement, and superconducting energy gap in Nb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crabtree, G.W.; Dye, D.H.; Karim, D.P.

    1987-02-01

    The detailed angular dependence of the Fermi radius kF, the Fermi velocity vF(k), the many-body enhancement factor λ(k), and the superconducting energy gap Δ(k), for electrons on the Fermi surface of Nb are derived with use of the de Haas-van Alphen (dHvA) data of Karim, Ketterson, and Crabtree (J. Low Temp. Phys. 30, 389 (1978)), a Korringa-Kohn-Rostoker parametrization scheme, and an empirically adjusted band-structure calculation of Koelling. The parametrization is a nonrelativistic five-parameter fit allowing for cubic rather than spherical symmetry inside the muffin-tin spheres. The parametrized Fermi surface gives a detailed interpretation of the previously unexplained κ, α', and α'' orbits in the dHvA data. Comparison of the parametrized Fermi velocities with those of the empirically adjusted band calculation allows the anisotropic many-body enhancement factor λ(k) to be determined. Theoretical calculations of the electron-phonon interaction based on the tight-binding model agree with our derived values of λ(k) much better than those based on the rigid-muffin-tin approximation. The anisotropy in the superconducting energy gap Δ(k) is estimated from our results for λ(k), assuming weak anisotropy.

  17. Anisotropy of the Fermi surface, Fermi velocity, many-body enhancement, and superconducting energy gap in Nb

    NASA Astrophysics Data System (ADS)

    Crabtree, G. W.; Dye, D. H.; Karim, D. P.; Campbell, S. A.; Ketterson, J. B.

    1987-02-01

    The detailed angular dependence of the Fermi radius kF, the Fermi velocity vF(k), the many-body enhancement factor λ(k), and the superconducting energy gap Δ(k), for electrons on the Fermi surface of Nb are derived with use of the de Haas-van Alphen (dHvA) data of Karim, Ketterson, and Crabtree [J. Low Temp. Phys. 30, 389 (1978)], a Korringa-Kohn-Rostoker parametrization scheme, and an empirically adjusted band-structure calculation of Koelling. The parametrization is a nonrelativistic five-parameter fit allowing for cubic rather than spherical symmetry inside the muffin-tin spheres. The parametrized Fermi surface gives a detailed interpretation of the previously unexplained κ, α', and α'' orbits in the dHvA data. Comparison of the parametrized Fermi velocities with those of the empirically adjusted band calculation allows the anisotropic many-body enhancement factor λ(k) to be determined. Theoretical calculations of the electron-phonon interaction based on the tight-binding model agree with our derived values of λ(k) much better than those based on the rigid-muffin-tin approximation. The anisotropy in the superconducting energy gap Δ(k) is estimated from our results for λ(k), assuming weak anisotropy.

  18. Effects of spatial orientation of prairie vegetation in an agricultural landscape on curve number values

    NASA Astrophysics Data System (ADS)

    Franz, K.; Dziubanski, D.; Helmers, M. J.

    2015-12-01

    The simplicity of the Curve Number (CN) method, which summarizes an area's hydrologic soil group, land cover, treatment, and hydrologic condition into a single number, makes it a consistently popular choice for modelers. When multiple land cover types are present, a weighted average of the CNs is used. However, the weighted CN does not account for the spatial distribution of different land cover types within the watershed. To overcome this limitation, it becomes necessary to discretize the model into homogeneous subunits, perhaps even to the hillslope scale, leading to a more complex model application. The objective of this study is to empirically derive CN values that reflect the effects of placements of native prairie vegetation (NPV) within agricultural landscapes. We derived CN values using precipitation and runoff data (May 1-Sept 30) over a 7-year period (2008-2014) for 9 ephemeral watersheds in Iowa (USA) ranging from 0.47 to 3.19 ha. The watersheds were planted with varying extents of NPV (0%, 10%, 20%) in different watershed positions (footslope vs. contour strips), with the rest of the watershed as row crop. The derived CN values from watersheds with all row crop were consistent with published values, and watersheds with NPV had an average CN reduction of 6.4%, with a maximum reduction of 11.6%. Four of the six sites with treatment had a lower CN than one calculated using a weighted average of look-up values, indicating that accounting for placement of vegetation within the landscape is important for modeling runoff with the CN method. The derived CNs were verified using the leave-one-year-out method (computing the CN using data from 6 of the 7 years, and then estimating runoff in the seventh year with that CN). Nash-Sutcliffe Efficiency (NSE) values for the estimated runoff typically ranged from 0.4-0.6. Our results suggest that the new CNs could confidently be used in future modeling studies to explore the hydrologic impacts of the NPV treatments at increasingly larger watershed scales.
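
    For reference, the CN method summarized above converts rainfall to runoff as follows; the area-weighted CN shown is exactly the spatial-distribution-blind step the study's empirically derived CNs are meant to improve on. Cover fractions and CN values are illustrative, not the study's.

```python
def scs_runoff_inches(p_in, cn):
    """SCS curve number runoff: S = 1000/CN - 10 (inches of retention),
    Q = (P - 0.2 S)^2 / (P + 0.8 S) when P exceeds the initial
    abstraction 0.2 S, else zero."""
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    return 0.0 if p_in <= ia else (p_in - ia) ** 2 / (p_in + 0.8 * s)

def area_weighted_cn(areas, cns):
    """Conventional area-weighted CN, which (as the study notes) ignores
    where each cover type sits within the watershed."""
    return sum(a * c for a, c in zip(areas, cns)) / sum(areas)

cn = area_weighted_cn(areas=[0.9, 0.1], cns=[78, 71])   # 10% NPV, illustrative
print(f"weighted CN = {cn:.1f}, "
      f"runoff for a 2 in storm = {scs_runoff_inches(2.0, cn):.2f} in")
```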

  19. Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.

    PubMed

    Lee, Won Hee; Bullmore, Ed; Frangou, Sophia

    2017-02-01

    There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
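
    A minimal sketch of the simulation step described: Kuramoto phase oscillators coupled through an anatomical connectivity matrix, from which a simulated functional connectivity matrix is computed. The random coupling matrix, frequency distribution, and signal proxy are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def simulate_kuramoto(A, K=5.0, dt=0.01, steps=2000, seed=3):
    """Euler integration of the Kuramoto model used to generate simulated
    functional data: dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j -
    theta_i), with anatomical connectivity A as coupling weights."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    omega = rng.normal(1.0, 0.1, n)                 # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)
    series = np.empty((steps, n))
    for t in range(steps):
        diff = theta[None, :] - theta[:, None]      # theta_j - theta_i
        theta = theta + dt * (omega + K * (A * np.sin(diff)).sum(axis=1))
        series[t] = np.sin(theta)                   # proxy oscillatory signal
    return series

# Simulated functional connectivity = correlation of the 66 node time series
rng = np.random.default_rng(4)
A = rng.random((66, 66))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)
fc = np.corrcoef(simulate_kuramoto(A).T)            # 66 x 66 matrix
```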

  20. Stochastic modelling of non-stationary financial assets

    NASA Astrophysics Data System (ADS)

    Estevens, Joana; Rocha, Paulo; Boto, João P.; Lind, Pedro G.

    2017-11-01

    We model non-stationary volume-price distributions with a log-normal distribution and collect the time series of its two parameters. The time series of the two parameters are shown to be stationary and Markov-like and consequently can be modelled with Langevin equations, which are derived directly from their series of values. Having the evolution equations of the log-normal parameters, we reconstruct the statistics of the first moments of volume-price distributions which fit well the empirical data. Finally, the proposed framework is general enough to study other non-stationary stochastic variables in other research fields, namely, biology, medicine, and geology.
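
    The first step, extracting the two log-normal parameter time series on sliding windows, can be sketched as follows; the window length is an arbitrary choice for illustration.

```python
import numpy as np

def lognormal_params(series, window=250):
    """Fit the two log-normal parameters (mu, sigma) on sliding windows of
    a positive volume-price series, producing the parameter time series
    that are then modelled with Langevin equations."""
    logs = np.log(np.asarray(series))
    mus, sigmas = [], []
    for i in range(len(logs) - window + 1):
        w = logs[i:i + window]
        mus.append(w.mean())             # MLE of mu for a log-normal sample
        sigmas.append(w.std(ddof=0))     # MLE of sigma
    return np.array(mus), np.array(sigmas)
```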

  1. Universal portfolios generated by the Bregman divergence

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Kuang, Kee Seng

    2017-04-01

    The Bregman divergence of two probability vectors is a stronger form of the f-divergence introduced by Csiszar. Two versions of the Bregman universal portfolio are presented by exploiting the mean-value theorem. The explicit form of the Bregman universal portfolio generated by a function of a convex polynomial is derived and studied empirically. This portfolio can be regarded as another generalization of the well-known Helmbold portfolio. By running the portfolios on selected stock-price data sets from the local stock exchange, it is shown that it is possible to increase the wealth of the investor by using the portfolios in investment.
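
    For context, the multiplicative update of the Helmbold (EG) portfolio that the Bregman construction generalizes; the learning rate and price relatives below are illustrative.

```python
import numpy as np

def eg_update(b, x, eta=0.05):
    """One step of the Helmbold exponentiated-gradient (EG) portfolio:
    with price relatives x, the new weights are proportional to
    b_i * exp(eta * x_i / (b . x)), renormalized to sum to one."""
    w = b * np.exp(eta * x / np.dot(b, x))
    return w / w.sum()

b = np.full(3, 1 / 3)                          # start from the uniform portfolio
for x in np.array([[1.01, 0.98, 1.03],
                   [0.99, 1.02, 1.00]]):       # two days of price relatives
    b = eg_update(b, x)
print(np.round(b, 4))
```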

  2. What scientists want from their research ethics committee.

    PubMed

    Keith-Spiegel, Patricia; Tabachnick, Barbara

    2006-03-01

    Whereas investigators have directed considerable criticism against Institutional Review Boards (IRBs), the desirable characteristics of IRBs have not previously been empirically determined. A sample of 886 experienced biomedical and social and behavioral scientists rated 45 descriptors of IRB actions and functions as to their importance. Predictions derived from organizational justice research findings in other work settings were generally borne out. Investigators place high value on the fairness and respectful consideration of their IRBs. Expected differences between biomedical and social and behavioral researchers, and effects of other variables, were not supported by the data. Recommendations are offered for educating IRBs to accord researchers greater respect and fair treatment.

  3. Recent solar extreme ultraviolet irradiance observations and modeling: A review

    NASA Technical Reports Server (NTRS)

    Tobiska, W. Kent

    1993-01-01

    For more than 90 years, solar extreme ultraviolet (EUV) irradiance modeling has progressed from empirical blackbody radiation formulations, through fudge factors, to typically measured irradiances and reference spectra as well as time-dependent empirical models representing continua and line emissions. A summary of recent EUV measurements by five rockets and three satellites during the 1980s is presented along with the major modeling efforts. The most significant reference spectra are reviewed and three independently derived empirical models are described. These include Hinteregger's 1981 SERF1, Nusinov's 1984 two-component, and Tobiska's 1990/1991/SERF2/EUV91 flux models. They each provide daily full-disk broad spectrum flux values from 2 to 105 nm at 1 AU. All the models depend to one degree or another on the long time series of the Atmosphere Explorer E (AE-E) EUV database. Each model uses ground- and/or space-based proxies to create emissions from solar atmospheric regions. Future challenges in EUV modeling are summarized including the basic requirements of models, the task of incorporating new observations and theory into the models, the task of comparing models with solar-terrestrial data sets, and long-term goals and modeling objectives. By the late 1990s, empirical models will potentially be improved through the use of proposed solar EUV irradiance measurements and images at selected wavelengths that will greatly enhance modeling and predictive capabilities.

  4. Comparison between global latent heat flux computed from multisensor (SSM/I and AVHRR) and from in situ data

    NASA Technical Reports Server (NTRS)

    Jourdan, Didier; Gautier, Catherine

    1995-01-01

    Comprehensive Ocean-Atmosphere Data Set (COADS) and satellite-derived parameters are input to a similarity theory-based model and treated in completely equivalent ways to compute global latent heat flux (LHF). In order to compute LHF exclusively from satellite measurements, an empirical relationship (the Q-W relationship) is used to compute the air mixing ratio from Special Sensor Microwave/Imager (SSM/I) precipitable water W, and a new one is derived to compute the air temperature also from retrieved W (the T-W relationship). First analyses indicate that in situ and satellite LHF computations compare within 40%, but systematic errors increase the differences up to 100% in some regions. By investigating more closely the origin of the discrepancies, the spatial sampling of ship reports has been found to be an important source of error in the observed differences. When the number of in situ data records increases (more than 20 per month), the agreement is about 50 W/sq m rms (40 W/sq m rms for multiyear averages). Limitations of both empirical relationships and W retrieval errors strongly affect the LHF computation. Systematic LHF overestimation occurs in strong subsidence regions and LHF underestimation occurs within surface convergence zones and over oceanic upwelling areas. The analysis of time series of the different parameters in these regions confirms that systematic LHF discrepancies are negatively correlated with the differences between COADS and satellite-derived values of the air mixing ratio and air temperature. To reduce the systematic differences in satellite-derived LHF, a preliminary ship-satellite blending procedure has been developed for the air mixing ratio and air temperature.
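    The study's flux model is similarity-theory based; as a simpler point of reference, the widely used bulk-aerodynamic form of the latent heat flux is sketched below. The transfer coefficient, wind speed, and mixing ratios are illustrative assumptions:

    ```python
    # Minimal bulk-aerodynamic estimate of latent heat flux (not the paper's model).
    rho_a = 1.2      # air density, kg m-3
    Lv = 2.5e6       # latent heat of vaporisation, J kg-1
    CE = 1.3e-3      # bulk transfer coefficient (typical open-ocean value)
    U = 8.0          # surface wind speed, m s-1 (e.g., from SSM/I)
    q_s = 20e-3      # saturation mixing ratio at the sea surface, kg kg-1
    q_a = 15e-3      # air mixing ratio, e.g., from the Q-W relationship, kg kg-1

    LHF = rho_a * Lv * CE * U * (q_s - q_a)   # W m-2
    print(f"LHF = {LHF:.0f} W/m^2")           # ~156 W m-2 with these inputs
    ```

    Errors in q_a (the Q-W relationship) and in the air temperature (the T-W relationship) propagate directly into LHF, which is why the blending procedure above targets those two inputs.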

  5. Improved inland water levels from SAR altimetry using novel empirical and physical retrackers

    NASA Astrophysics Data System (ADS)

    Villadsen, Heidi; Deng, Xiaoli; Andersen, Ole B.; Stenseng, Lars; Nielsen, Karina; Knudsen, Per

    2016-06-01

    Satellite altimetry has proven a valuable source of information on river and lake levels where in situ data are sparse or non-existent. In this study several new methods for obtaining stable inland water levels from CryoSat-2 Synthetic Aperture Radar (SAR) altimetry are presented and evaluated. In addition, the possible benefits from combining physical and empirical retrackers are investigated. The retracking methods evaluated in this paper include the physical SAR Altimetry MOde Studies and Applications (SAMOSA3) model, a traditional subwaveform threshold retracker, the proposed Multiple Waveform Persistent Peak (MWaPP) retracker, and a method combining the physical and empirical retrackers. Using a physical SAR waveform retracker over inland water has not been attempted before but shows great promise in this study. The evaluation is performed for two medium-sized lakes (Lake Vänern in Sweden and Lake Okeechobee in Florida), and in the Amazon River in Brazil. Comparing with in situ data shows that using the SAMOSA3 retracker generally provides the lowest root-mean-squared errors (RMSE), closely followed by the MWaPP retracker. For the empirical retrackers, the RMSE values obtained when comparing with in situ data in Lake Vänern and Lake Okeechobee are in the order of 2-5 cm for well-behaved waveforms. Combining the physical and empirical retrackers did not offer significantly improved mean track standard deviations or RMSEs. Based on these studies, it is suggested that future SAR-derived water levels be obtained using the SAMOSA3 retracker whenever information about other physical properties apart from range is desired. Otherwise we suggest using the empirical MWaPP retracker described in this paper, which is easy to implement and computationally efficient, and gives a height estimate for even the most contaminated waveforms.
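    A sketch of the threshold retracking idea behind the empirical retrackers above: locate where the waveform's leading edge first crosses a fixed fraction of the peak power. The 50% threshold and the simple full-waveform scan (rather than subwaveform isolation around the leading edge) are simplifying assumptions:

    ```python
    import numpy as np

    def threshold_retrack(waveform, threshold=0.5):
        """Return the (sub-)gate at which waveform power first crosses
        threshold * peak, with linear interpolation between gates."""
        w = np.asarray(waveform, dtype=float)
        level = threshold * w.max()
        above = np.nonzero(w >= level)[0]
        if above.size == 0:
            return np.nan
        k = above[0]
        if k == 0:
            return 0.0
        # linear interpolation between gate k-1 and gate k
        return (k - 1) + (level - w[k - 1]) / (w[k] - w[k - 1])
    ```

    The retracked gate is converted to a range correction via the gate spacing; as described above, the MWaPP retracker extends threshold-style retracking to handle waveforms with multiple persistent peaks.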

  6. Performance of biometric quality measures.

    PubMed

    Grother, Patrick; Tabassi, Elham

    2007-04-01

    We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar. We evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error versus reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We precede this with a definition of sample quality and a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores.
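    A sketch of the error-versus-reject characteristic advanced above: reject the lowest-quality fraction of samples and recompute the matching error on what remains. The inputs (per-sample quality scores and genuine-comparison error indicators) are assumed to be given:

    ```python
    import numpy as np

    def error_vs_reject(quality, genuine_error, fractions):
        """False non-match rate among genuine comparisons retained after
        rejecting the lowest-quality fraction of samples.
        genuine_error[i] = 1 if sample i produced a false non-match."""
        order = np.argsort(quality)             # worst quality first
        err = np.asarray(genuine_error, float)[order]
        n = len(err)
        rates = []
        for f in fractions:
            keep = err[int(f * n):]             # discard lowest-quality share
            rates.append(keep.mean() if keep.size else np.nan)
        return np.array(rates)
    ```

    If the quality measure is genuinely predictive of matching performance, the resulting curve falls as the rejection fraction grows.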

  7. Everybody else is: Networks, power laws and peer contagion in the aggressive recess behavior of elementary school boys.

    PubMed

    Warren, Keith; Craciun, Gheorghe; Anderson-Butcher, Dawn

    2005-04-01

    This paper develops a simple random network model of peer contagion in aggressive behavior among inner-city elementary school boys during recess periods. The model predicts a distribution of aggressive behaviors per recess period with a power law tail beginning at two aggressive behaviors and having a slope of approximately -1.5. Comparison of these values with values derived from observations of aggressive behaviors during recess at an inner-city elementary school provides empirical support for the model. These results suggest that fluctuations in aggressive behaviors during recess arise from the interactions between students, rather than from variations in the behavior of individual students. The results therefore support those interventions that aim to change the pattern of interaction between students.

  8. Satellite-based retrieval of particulate matter concentrations over the United Arab Emirates (UAE)

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Temimi, Marouane; Hareb, Fahad; Eibedingil, Iyasu

    2016-04-01

    In this study, an empirical algorithm was established to retrieve particulate matter (PM) concentrations (PM2.5 and PM10) using satellite-derived aerosol optical depth (AOD) over the United Arab Emirates (UAE). Validation against ground-truth data demonstrates the good accuracy of the proposed algorithm. Time series of in situ measured PM concentrations between 2014 and 2015 showed high values in summer and low values in winter. Estimated and in situ measured PM concentrations were higher in 2015 than in 2014. Remote sensing is an essential tool for revealing and back-tracking the seasonality and inter-annual variations of PM concentrations, and it provides valuable information for protecting human health and for assessing the response of air quality to anthropogenic activities and climate change.
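    The paper's empirical PM-AOD algorithm is site-specific and not reproduced here; as a hypothetical stand-in, a simple linear regression illustrates the retrieval-and-validation pattern (all numbers made up):

    ```python
    import numpy as np

    aod = np.array([0.2, 0.35, 0.5, 0.8, 1.1])     # satellite AOD (made-up)
    pm25 = np.array([18., 30., 45., 70., 95.])     # co-located PM2.5, ug/m3 (made-up)

    slope, intercept = np.polyfit(aod, pm25, 1)    # fit empirical relation
    pred = slope * aod + intercept
    rmse = np.sqrt(np.mean((pred - pm25) ** 2))    # validate against ground truth
    print(f"PM2.5 = {slope:.1f}*AOD + {intercept:.1f}, RMSE = {rmse:.1f}")
    ```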

  9. Calibration of an M L scale for South Africa using tectonic earthquake data recorded by the South African National Seismograph Network: 2006 to 2009

    NASA Astrophysics Data System (ADS)

    Saunders, Ian; Ottemöller, Lars; Brandt, Martin B. C.; Fourie, Christoffel J. S.

    2013-04-01

    A relation to determine local magnitude ( M L) based on the original Richter definition is empirically derived from synthetic Wood-Anderson seismograms recorded by the South African National Seismograph Network. In total, 263 earthquakes in the distance range 10 to 1,000 km, representing 1,681 trace amplitudes measured in nanometers from synthesized Wood-Anderson records on the vertical channel were considered to derive an attenuation relation appropriate for South Africa through multiple regression analysis. Additionally, station corrections were determined for 26 stations during the regression analysis resulting in values ranging between -0.31 and 0.50. The most appropriate M L scale for South Africa from this study satisfies the equation $M_L = \log_{10}(A) + 1.149\,\log_{10}(R) + 0.00063\,R + 2.04 - S$. The anelastic attenuation term derived from this study indicates that ground motion attenuation is significantly different from Southern California but comparable with stable continental regions.
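    The published relation translates directly into code; a small helper implementing it exactly as stated above (amplitude A in nanometres, distance R in km, S the station correction):

    ```python
    import numpy as np

    def local_magnitude(A_nm, R_km, station_corr=0.0):
        """South African ML from the relation derived in this study."""
        return (np.log10(A_nm) + 1.149 * np.log10(R_km)
                + 0.00063 * R_km + 2.04 - station_corr)

    # example call: 100 nm trace amplitude at 200 km hypocentral distance
    print(local_magnitude(A_nm=100.0, R_km=200.0))
    ```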

  10. A comparison between two powder compaction parameters of plasticity: the effective medium A parameter and the Heckel 1/K parameter.

    PubMed

    Mahmoodi, Foad; Klevan, Ingvild; Nordström, Josefina; Alderborn, Göran; Frenning, Göran

    2013-09-10

    The purpose of the research was to introduce a procedure to derive a powder compression parameter (EM A) representing particle yield stress using an effective medium equation and to compare the EM A parameter with the Heckel compression parameter (1/K). 16 pharmaceutical powders, including drugs and excipients, were compressed in a materials testing instrument and powder compression profiles were derived using the EM and Heckel equations. The compression profiles thus obtained could be sub-divided into regions, one of which was approximately linear; from this region, the compression parameters EM A and 1/K were calculated. A linear relationship between the EM A parameter and the 1/K parameter was obtained with a strong correlation. The slope of the plot was close to 1 (0.84) and the intercept of the plot was small in comparison to the range of parameter values obtained. The relationship between the theoretical EM A parameter and the 1/K parameter supports the interpretation of the empirical Heckel parameter as being a measure of yield stress. It is concluded that the combination of Heckel and EM equations represents a suitable procedure to derive a value of particle plasticity from powder compression data.
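    A sketch of how the Heckel parameter 1/K is typically extracted from a compression profile: fit the approximately linear region of ln(1/(1-D)) versus pressure P, following the Heckel equation ln(1/(1-D)) = K*P + A. The pressure window below is an illustrative placeholder; in practice it is chosen from the measured profile:

    ```python
    import numpy as np

    def heckel_1_over_K(pressure_mpa, relative_density, lin=(50, 150)):
        """Fit ln(1/(1-D)) = K*P + A over an approximately linear pressure
        window (lin, in MPa) and return the Heckel parameter 1/K, a yield
        pressure in MPa."""
        P = np.asarray(pressure_mpa, float)
        y = np.log(1.0 / (1.0 - np.asarray(relative_density, float)))
        m = (P >= lin[0]) & (P <= lin[1])      # restrict to the linear region
        K, A = np.polyfit(P[m], y[m], 1)
        return 1.0 / K
    ```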

  11. Survival estimation and the effects of dependency among animals

    USGS Publications Warehouse

    Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.

    1995-01-01

    Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
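    A minimal Monte Carlo sketch of the dependency effect described above, using a shared latent pair effect to induce correlated fates. The survival probability, correlation, and sample sizes are illustrative, and the single-interval Bernoulli survival is a simplification of the Kaplan-Meier and Cormack-Jolly-Seber settings used in the study:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n_pairs, p, rho, reps = 200, 0.8, 0.5, 2000   # illustrative settings
    thresh = norm.ppf(p)                          # gives marginal survival p

    est = []
    for _ in range(reps):
        shared = rng.normal(size=(n_pairs, 1))    # latent pair effect
        z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=(n_pairs, 2))
        est.append((z < thresh).mean())           # survival rate estimate

    emp_var = np.var(est)
    theo_var = p * (1 - p) / (2 * n_pairs)        # assumes independent fates
    print(f"variance inflation: {emp_var / theo_var:.2f}")   # > 1 when rho > 0
    ```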

  12. Solar-wind predictions for the Parker Solar Probe orbit. Near-Sun extrapolations derived from an empirical solar-wind model based on Helios and OMNI observations

    NASA Astrophysics Data System (ADS)

    Venzmer, M. S.; Bothmer, V.

    2018-03-01

    Context. The Parker Solar Probe (PSP; formerly Solar Probe Plus) mission will be humanity's first in situ exploration of the solar corona with closest perihelia at 9.86 solar radii (R⊙) distance to the Sun. It will help answer hitherto unresolved questions on the heating of the solar corona and the source and acceleration of the solar wind and solar energetic particles. The scope of this study is to model the solar-wind environment for PSP's unprecedented distances in its prime mission phase during the years 2018 to 2025. The study is performed within the Coronagraphic German And US SolarProbePlus Survey (CGAUSS) which is the German contribution to the PSP mission as part of the Wide-field Imager for Solar PRobe. Aim. We present an empirical solar-wind model for the inner heliosphere which is derived from OMNI and Helios data. The German-US space probes Helios 1 and Helios 2 flew in the 1970s and observed solar wind in the ecliptic within heliocentric distances of 0.29 au to 0.98 au. The OMNI database consists of multi-spacecraft intercalibrated in situ data obtained near 1 au over more than five solar cycles. The international sunspot number (SSN) and its predictions are used to derive dependencies of the major solar-wind parameters on solar activity and to forecast their properties for the PSP mission. Methods: The frequency distributions for the solar-wind key parameters, magnetic field strength, proton velocity, density, and temperature, are represented by lognormal functions. In addition, we consider the velocity distribution's bi-componental shape, consisting of a slower and a faster part. Functional relations to solar activity are compiled with use of the OMNI data by correlating and fitting the frequency distributions with the SSN. Further, based on the combined data set from both Helios probes, the parameters' frequency distributions are fitted with respect to solar distance to obtain power law dependencies. Thus an empirical solar-wind model for the inner heliosphere confined to the ecliptic region is derived, accounting for solar activity and for solar distance through adequate shifts of the lognormal distributions. Finally, the inclusion of SSN predictions and the extrapolation down to PSP's perihelion region enables us to estimate the solar-wind environment for PSP's planned trajectory during its mission duration. Results: The CGAUSS empirical solar-wind model for PSP yields dependencies on solar activity and solar distance for the solar-wind parameters' frequency distributions. The estimated solar-wind median values for PSP's first perihelion in 2018 at a solar distance of 0.16 au are 87 nT, 340 km s-1, 214 cm-3, and 503 000 K. The estimates for PSP's first closest perihelion, occurring in 2024 at 0.046 au (9.86 R⊙), are 943 nT, 290 km s-1, 2951 cm-3, and 1 930 000 K. Since the modeled velocity and temperature values below approximately 20 R⊙ appear overestimated in comparison with existing observations, this suggests that PSP will directly measure solar-wind acceleration and heating processes below 20 R⊙ as planned.

  13. Simultaneous Measurements of Chlorophyll Concentration by Lidar, Fluorometry, above-Water Radiometry, and Ocean Color MODIS Images in the Southwestern Atlantic.

    PubMed

    Kampel, Milton; Lorenzzetti, João A; Bentz, Cristina M; Nunes, Raul A; Paranhos, Rodolfo; Rudorff, Frederico M; Politano, Alexandre T

    2009-01-01

    Comparisons between in situ measurements of surface chlorophyll-a concentration (CHL) and ocean color remote sensing estimates were conducted during an oceanographic cruise on the Brazilian Southeastern continental shelf and slope, Southwestern South Atlantic. In situ values were based on fluorometry, above-water radiometry and lidar fluorosensor. Three empirical algorithms were used to estimate CHL from radiometric measurements: Ocean Chlorophyll 3 bands (OC3M(RAD)), Ocean Chlorophyll 4 bands (OC4v4(RAD)), and Ocean Chlorophyll 2 bands (OC2v4(RAD)). The satellite estimates of CHL were derived from data collected by the MODerate-resolution Imaging Spectroradiometer (MODIS) with a nominal 1.1 km resolution at nadir. Three algorithms were used to estimate chlorophyll concentrations from MODIS data: one empirical - OC3M(SAT), and two semi-analytical - Garver, Siegel, Maritorena version 01 (GSM01(SAT)), and Carder(SAT). In the present work, MODIS, lidar and in situ above-water radiometry and fluorometry are briefly described and the estimated values of chlorophyll retrieved by these techniques are compared. The chlorophyll concentration in the study area was in the range 0.01 to 0.2 mg/m(3). In general, the empirical algorithms applied to the in situ radiometric and satellite data showed a tendency to overestimate CHL, with a mean difference between estimated and measured values of as much as 0.17 mg/m(3) (OC2v4(RAD)). The semi-analytical GSM01 algorithm applied to MODIS data performed better (rmse 0.28, rmse-L 0.08, mean diff. -0.01 mg/m(3)) than the Carder and the empirical OC3M algorithms (rmse 1.14 and 0.36, rmse-L 0.34 and 0.11, mean diff. 0.17 and 0.02 mg/m(3), respectively). We find that rmsd values for MODIS relative to the in situ radiometric measurements are < 26%, with a trend towards overestimation of R(RS) by MODIS for the stations considered in this work. Other authors have already reported over- and underestimation of MODIS remotely sensed reflectance due to several errors in the bio-optical algorithm performance, in the satellite sensor calibration, and in the atmospheric-correction algorithm.

  14. How rational should bioethics be? The value of empirical approaches.

    PubMed

    Alvarez, A A

    2001-10-01

    Rational justification of claims with empirical content calls for empirical and not only normative philosophical investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary in providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods. Moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue that individualistic informed consent, for example, is not compatible with the Asian communitarian orientation. But this normative claim uses an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary and factual to neatly characterize some cultures as individualistic and some as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, there is a need to investigate the nature of the local ethos before making any appeal to authenticity. Otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be reasonably guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.

  15. Sample and population exponents of generalized Taylor's law.

    PubMed

    Giometto, Andrea; Formentin, Marco; Rinaldo, Andrea; Cohen, Joel E; Maritan, Amos

    2015-06-23

    Taylor's law (TL) states that the variance V of a nonnegative random variable is a power function of its mean M; i.e., V = aM(b). TL has been verified extensively in ecology, where it applies to population abundance, as well as in physics and other natural sciences. Its ubiquitous empirical verification suggests a context-independent mechanism. Sample exponents b measured empirically via the scaling of sample mean and variance typically cluster around the value b = 2. Some theoretical models of population growth, however, predict a broad range of values for the population exponent b pertaining to the mean and variance of population density, depending on details of the growth process. Is the widely reported sample exponent b ≃ 2 the result of ecological processes or could it be a statistical artifact? Here, we apply large deviations theory and finite-sample arguments to show exactly that in a broad class of growth models the sample exponent is b ≃ 2 regardless of the underlying population exponent. We derive a generalized TL in terms of sample and population exponents b(jk) for the scaling of the kth vs. the jth cumulants. The sample exponent b(jk) depends predictably on the number of samples, and for finite samples we obtain b(jk) ≃ k/j asymptotically in time, a prediction that we verify in two empirical examples. Thus, the sample exponent b ≃ 2 may indeed be a statistical artifact and not dependent on population dynamics under conditions that we specify exactly. Given the broad class of models investigated, our results apply to many fields where TL is used although inadequately understood.
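    A short simulation in the spirit of the paper's finite-sample argument: replicate populations under multiplicative growth, then regress log sample variance on log sample mean across time. Parameters are illustrative; for this lognormal growth model the population exponent is 4, yet the fitted sample exponent tends toward 2 as time grows:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_samples, T = 100, 50
    pops = np.ones(n_samples)          # replicate populations
    M, V = [], []
    for _ in range(T):
        pops = pops * rng.lognormal(mean=0.0, sigma=0.3, size=n_samples)
        M.append(pops.mean())          # sample mean across replicates
        V.append(pops.var())           # sample variance across replicates

    b, loga = np.polyfit(np.log(M), np.log(V), 1)
    print(f"sample TL exponent b = {b:.2f}")   # clusters near 2
    ```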

  16. Automatic motion and noise artifact detection in Holter ECG data using empirical mode decomposition and statistical approaches.

    PubMed

    Lee, Jinseok; McManus, David D; Merchant, Sneh; Chon, Ki H

    2012-06-01

    We present a real-time method for the detection of motion and noise (MN) artifacts, which frequently interfere with accurate rhythm assessment when ECG signals are collected from Holter monitors. Our MN artifact detection approach involves two stages. The first stage involves the use of the first-order intrinsic mode function (F-IMF) from the empirical mode decomposition to isolate the artifacts' dynamics as they are largely concentrated in the higher frequencies. The second stage of our approach uses three statistical measures on the F-IMF time series to look for characteristics of randomness and variability, which are hallmark signatures of MN artifacts: the Shannon entropy, mean, and variance. We then use the receiver-operator characteristics curve on Holter data from 15 healthy subjects to derive threshold values associated with these statistical measures to separate between the clean and MN artifacts' data segments. With threshold values derived from 15 training data sets, we tested our algorithms on 30 additional healthy subjects. Our results show that our algorithms are able to detect the presence of MN artifacts with sensitivity and specificity of 96.63% and 94.73%, respectively. In addition, when we applied our previously developed algorithm for atrial fibrillation (AF) detection on those segments that have been labeled to be free from MN artifacts, the specificity increased from 73.66% to 85.04% without loss of sensitivity (74.48%-74.62%) on six subjects diagnosed with AF. Finally, the computation time was less than 0.2 s using a MATLAB code, indicating that real-time application of the algorithms is possible for Holter monitoring.
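    A sketch of the first-stage feature extraction, assuming the third-party PyEMD package (installed as EMD-signal) for the empirical mode decomposition; the histogram bin count for the entropy estimate is an assumption:

    ```python
    import numpy as np
    from PyEMD import EMD   # third-party package: pip install EMD-signal

    def fimf_features(segment):
        """Shannon entropy, mean, and variance of the first intrinsic mode
        function of an ECG segment -- the statistics thresholded in the paper."""
        imfs = EMD().emd(np.asarray(segment, dtype=float))
        fimf = imfs[0]                        # highest-frequency component
        hist, _ = np.histogram(fimf, bins=16, density=True)
        probs = hist[hist > 0]
        probs = probs / probs.sum()
        entropy = -np.sum(probs * np.log2(probs))
        return entropy, fimf.mean(), fimf.var()
    ```

    Thresholds on these three features would then be set from training data via the ROC analysis described above.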

  17. Travel cost demand model based river recreation benefit estimates with on-site and household surveys: Comparative results and a correction procedure

    NASA Astrophysics Data System (ADS)

    Loomis, John

    2003-04-01

    Past recreation studies have noted that on-site or visitor intercept surveys are subject to over-sampling of avid users (i.e., endogenous stratification) and have offered econometric solutions to correct for this. However, past papers do not estimate the empirical magnitude of the bias in benefit estimates with a real data set, nor do they compare the corrected estimates to benefit estimates derived from a population sample. This paper empirically examines the magnitude of the recreation benefits per trip bias by comparing estimates from an on-site river visitor intercept survey to a household survey. The difference in average benefits is quite large, with the on-site visitor survey yielding $24 per day trip, while the household survey yields $9.67 per day trip. A simple econometric correction for endogenous stratification in our count data model lowers the benefit estimate to $9.60 per day trip, a mean value nearly identical and not statistically different from the household survey estimate.
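    The econometric correction referenced above, for a Poisson trip-demand model under truncation and endogenous stratification, amounts to fitting a standard Poisson regression to y - 1 (the result usually attributed to Shaw, 1988); a sketch with made-up data:

    ```python
    import numpy as np
    import statsmodels.api as sm

    # On-site (visitor-intercept) counts over-sample avid users; regressing
    # trips - 1 on the covariates corrects for this in the Poisson model.
    trips = np.array([1, 3, 2, 8, 1, 5, 2, 10, 4, 1])
    travel_cost = np.array([40., 10., 25., 5., 45., 12., 30., 4., 15., 50.])

    X = sm.add_constant(travel_cost)
    fit = sm.Poisson(trips - 1, X).fit(disp=0)
    beta_tc = fit.params[1]                      # travel-cost coefficient
    print(f"consumer surplus per trip = {-1 / beta_tc:.2f}")  # semi-log model: -1/beta
    ```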

  18. Bridging the Knowledge Gaps between Richards' Equation and Budyko Equation

    NASA Astrophysics Data System (ADS)

    Wang, D.

    2017-12-01

    The empirical Budyko equation represents the partitioning of mean annual precipitation into evaporation and runoff. Richards' equation, based on Darcy's law, represents the movement of water in unsaturated soils. The linkage between Richards' equation and Budyko equation is presented by invoking the empirical Soil Conservation Service curve number (SCS-CN) model for computing surface runoff at the event-scale. The basis of the SCS-CN method is the proportionality relationship, i.e., the ratio of continuing abstraction to its potential is equal to the ratio of surface runoff to its potential value. The proportionality relationship can be derived from the Richards' equation for computing infiltration excess and saturation excess models at the catchment scale. Meanwhile, the generalized proportionality relationship is demonstrated as the common basis of SCS-CN method, monthly "abcd" model, and Budyko equation. Therefore, the linkage between Darcy's law and the emergent pattern of mean annual water balance at the catchment scale is presented through the proportionality relationship.
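    The proportionality relationship invoked above can be stated compactly; substituting the continuing abstraction into it yields the familiar SCS-CN runoff equation:

    ```latex
    \[
    \frac{F}{S} = \frac{Q}{P - I_a}, \qquad F = (P - I_a) - Q,
    \]
    where $P$ is event precipitation, $Q$ surface runoff, $I_a$ the initial
    abstraction, $F$ the continuing abstraction, and $S$ its potential maximum
    value. Substituting $F$ and solving for $Q$ gives
    \[
    Q = \frac{(P - I_a)^2}{P - I_a + S}, \qquad P > I_a .
    \]
    ```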

  19. A reduced-order model from high-dimensional frictional hysteresis

    PubMed Central

    Biswas, Saurabh; Chatterjee, Anindya

    2014-01-01

    Hysteresis in material behaviour involves both signum nonlinearities and high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522

  20. How good are indirect tests at detecting recombination in human mtDNA?

    PubMed

    White, Daniel James; Bryant, David; Gemmell, Neil John

    2013-07-08

    Empirical proof of human mitochondrial DNA (mtDNA) recombination in somatic tissues was obtained in 2004; however, a lack of irrefutable evidence exists for recombination in human mtDNA at the population level. Our inability to demonstrate convincingly a signal of recombination in population data sets of human mtDNA sequence may be due, in part, to the ineffectiveness of current indirect tests. Previously, we tested some well-established indirect tests of recombination (linkage disequilibrium vs. distance using D' and r(2), Homoplasy Test, Pairwise Homoplasy Index, Neighborhood Similarity Score, and Max χ(2)) on sequence data derived from the only empirically confirmed case of human mtDNA recombination thus far and demonstrated that some methods were unable to detect recombination. Here, we assess the performance of these six well-established tests and explore what characteristics specific to human mtDNA sequence may affect their efficacy by simulating sequence under various parameters with levels of recombination (ρ) that vary around an empirically derived estimate for human mtDNA (population parameter ρ = 5.492). No test performed infallibly under any of our scenarios, and error rates varied across tests, whereas detection rates increased substantially with ρ values > 5.492. Under a model of evolution that incorporates parameters specific to human mtDNA, including rate heterogeneity, population expansion, and ρ = 5.492, successful detection rates are limited to a range of 7-70% across tests with an acceptable level of false-positive results: the neighborhood similarity score incompatibility test performed best overall under these parameters. Population growth seems to have the greatest impact on recombination detection probabilities across all models tested, likely due to its impact on sequence diversity. The implications of our findings on our current understanding of mtDNA recombination in humans are discussed.

  1. An analytical model of iceberg drift

    NASA Astrophysics Data System (ADS)

    Eisenman, I.; Wagner, T. J. W.; Dell, R.

    2017-12-01

    Icebergs transport freshwater from glaciers and ice shelves, releasing the freshwater into the upper ocean thousands of kilometers from the source. This influences ocean circulation through its effect on seawater density. A standard empirical rule-of-thumb for estimating iceberg trajectories is that they drift at the ocean surface current velocity plus 2% of the atmospheric surface wind velocity. This relationship has been observed in empirical studies for decades, but it has never previously been physically derived or justified. In this presentation, we consider the momentum balance for an individual iceberg, which includes nonlinear drag terms. Applying a series of approximations, we derive an analytical solution for the iceberg velocity as a function of time. In order to validate the model, we force it with surface velocity and temperature data from an observational state estimate and compare the results with iceberg observations in both hemispheres. We show that the analytical solution reduces to the empirical 2% relationship in the asymptotic limit of small icebergs (or strong winds), which approximately applies for typical Arctic icebergs. We find that the 2% value arises due to a term involving the drag coefficients for water and air and the densities of the iceberg, ocean, and air. In the opposite limit of large icebergs (or weak winds), which approximately applies for typical Antarctic icebergs with horizontal length scales greater than about 12 km, we find that the 2% relationship is not applicable and that icebergs instead move with the ocean current, unaffected by the wind. The two asymptotic regimes can be understood by considering how iceberg size influences the relative importance of the wind and ocean current drag terms compared with the Coriolis and pressure gradient force terms in the iceberg momentum balance.
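    In the small-iceberg limit described above, the momentum balance reduces to a drift law whose wind coefficient depends only on the densities and drag coefficients; schematically (notation assumed, not the authors'):

    ```latex
    \[
    \mathbf{u}_{\mathrm{ib}} \;\approx\; \mathbf{u}_{\mathrm{w}} + \gamma\,\mathbf{u}_{\mathrm{a}},
    \qquad
    \gamma = \sqrt{\frac{\rho_{\mathrm{a}}\, c_{\mathrm{a}}}{\rho_{\mathrm{w}}\, c_{\mathrm{w}}}},
    \]
    ```

    with $\rho_{\mathrm{a}}, \rho_{\mathrm{w}}$ the air and water densities and $c_{\mathrm{a}}, c_{\mathrm{w}}$ the respective bulk drag coefficients; representative values give $\gamma \approx 0.02$, recovering the empirical 2% rule.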

  2. How Good Are Indirect Tests at Detecting Recombination in Human mtDNA?

    PubMed Central

    White, Daniel James; Bryant, David; Gemmell, Neil John

    2013-01-01

    Empirical proof of human mitochondrial DNA (mtDNA) recombination in somatic tissues was obtained in 2004; however, a lack of irrefutable evidence exists for recombination in human mtDNA at the population level. Our inability to demonstrate convincingly a signal of recombination in population data sets of human mtDNA sequence may be due, in part, to the ineffectiveness of current indirect tests. Previously, we tested some well-established indirect tests of recombination (linkage disequilibrium vs. distance using D′ and r2, Homoplasy Test, Pairwise Homoplasy Index, Neighborhood Similarity Score, and Max χ2) on sequence data derived from the only empirically confirmed case of human mtDNA recombination thus far and demonstrated that some methods were unable to detect recombination. Here, we assess the performance of these six well-established tests and explore what characteristics specific to human mtDNA sequence may affect their efficacy by simulating sequence under various parameters with levels of recombination (ρ) that vary around an empirically derived estimate for human mtDNA (population parameter ρ = 5.492). No test performed infallibly under any of our scenarios, and error rates varied across tests, whereas detection rates increased substantially with ρ values > 5.492. Under a model of evolution that incorporates parameters specific to human mtDNA, including rate heterogeneity, population expansion, and ρ = 5.492, successful detection rates are limited to a range of 7−70% across tests with an acceptable level of false-positive results: the neighborhood similarity score incompatibility test performed best overall under these parameters. Population growth seems to have the greatest impact on recombination detection probabilities across all models tested, likely due to its impact on sequence diversity. The implications of our findings on our current understanding of mtDNA recombination in humans are discussed. PMID:23665874

  3. Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures

    NASA Astrophysics Data System (ADS)

    Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav

    2017-07-01

    The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model with frequent usage in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10 % mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of data, and the ability to change parameters based on building practices across Italy.

  4. Volatility in financial markets: stochastic models and empirical results

    NASA Astrophysics Data System (ADS)

    Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.

    2002-11-01

    We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model well describes the pdf in the region of low values of volatility whereas the Hull and White model better approximates the empirical pdf for large values of volatility. Both models fail in describing the empirical pdf over a moderately large volatility range.
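    A sketch of the comparison pattern described above, fitting a lognormal model to a volatility series and checking it against the empirical pdf; the synthetic data and bin count are assumptions:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    vol = rng.lognormal(mean=-4.0, sigma=0.4, size=10000)   # stand-in volatility series

    shape, loc, scale = stats.lognorm.fit(vol, floc=0)       # lognormal MLE
    hist, edges = np.histogram(vol, bins=60, density=True)   # empirical pdf
    centers = 0.5 * (edges[:-1] + edges[1:])
    pdf = stats.lognorm.pdf(centers, shape, loc=loc, scale=scale)
    print(np.abs(hist - pdf).max())   # largest pdf discrepancy across bins
    ```

    On real data, the paper's finding corresponds to the discrepancy being small in the low-volatility bins and growing in the upper tail, where the Hull and White model fits better.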

  5. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
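    For context, the classic (non-smooth) empirical Bayes estimator for Poisson means is Robbins' frequency-ratio rule; the smooth estimator derived in the paper refines this idea. A sketch:

    ```python
    import numpy as np

    def robbins_poisson_eb(counts):
        """Robbins' empirical Bayes estimate of each unit's Poisson mean:
        lambda_hat(x) = (x + 1) * f(x+1) / f(x), with f the empirical
        frequency of each observed count."""
        counts = np.asarray(counts)
        freq = np.bincount(counts, minlength=counts.max() + 2)
        f = freq / counts.size
        x = counts
        return (x + 1) * f[x + 1] / np.maximum(f[x], 1e-12)  # guard empty cells
    ```

    As in the paper's simulations, pooling information across units this way typically reduces mean-squared error relative to using each unit's raw count as its own estimate.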

  6. A DEIM Induced CUR Factorization

    DTIC Science & Technology

    2015-09-18

    We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a ... CUR approximations based on leverage scores.
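    A sketch of a DEIM-based CUR factorization consistent with the description above: select row and column indices by the greedy DEIM procedure applied to the leading singular vectors, then solve for a middle factor. Details such as the choice of middle factor are assumptions of this sketch, not necessarily the report's exact construction:

    ```python
    import numpy as np

    def deim_indices(U):
        """Greedy DEIM point selection from an orthonormal basis U (n x k)."""
        p = [int(np.argmax(np.abs(U[:, 0])))]
        for j in range(1, U.shape[1]):
            c = np.linalg.solve(U[p, :j], U[p, j])   # interpolate column j at chosen points
            r = U[:, j] - U[:, :j] @ c               # interpolation residual
            p.append(int(np.argmax(np.abs(r))))      # next point: largest residual
        return np.array(p)

    def deim_cur(A, k):
        """Rank-k CUR factorization with DEIM-selected rows and columns."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        rows = deim_indices(U[:, :k])
        cols = deim_indices(Vt[:k, :].T)
        C, R = A[:, cols], A[rows, :]
        Umid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # one common choice
        return C, Umid, R
    ```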

  7. Primordial 4He abundance: a determination based on the largest sample of H II regions with a methodology tested on model H II regions

    NASA Astrophysics Data System (ADS)

    Izotov, Y. I.; Stasińska, G.; Guseva, N. G.

    2013-10-01

    We verified the validity of the empirical method to derive the 4He abundance used in our previous papers by applying it to CLOUDY (v13.01) models. Using newly published He i emissivities for which we present convenient fits as well as the output CLOUDY case B hydrogen and He i line intensities, we found that the empirical method is able to reproduce the input CLOUDY 4He abundance with an accuracy of better than 1%. The CLOUDY output data also allowed us to derive the non-recombination contribution to the intensities of the strongest Balmer hydrogen Hα, Hβ, Hγ, and Hδ emission lines and the ionisation correction factors for He. With these improvements we used our updated empirical method to derive the 4He abundances and to test corrections for several systematic effects in a sample of 1610 spectra of low-metallicity extragalactic H ii regions, the largest sample used so far. From this sample we extracted a subsample of 111 H ii regions with Hβ equivalent width EW(Hβ) ≥ 150 Å, with excitation parameter x = O2+/O ≥ 0.8, and with helium mass fraction Y derived with an accuracy better than 3%. With this subsample we derived the primordial 4He mass fraction Yp = 0.254 ± 0.003 from linear regression Y - O/H. The derived value of Yp is higher at the 68% confidence level (CL) than that predicted by the standard big bang nucleosynthesis (SBBN) model, possibly implying the existence of different types of neutrino species in addition to the three known types of active neutrinos. Using the most recently derived primordial abundances D/H = (2.60 ± 0.12) × 10-5 and Yp = 0.254 ± 0.003 and the χ2 technique, we found that the best agreement between abundances of these light elements is achieved in a cosmological model with baryon mass density Ωbh2 = 0.0234 ± 0.0019 (68% CL) and an effective number of the neutrino species Neff = 3.51 ± 0.35 (68% CL).

  8. River meanders and channel size

    USGS Publications Warehouse

    Williams, G.P.

    1986-01-01

    This study uses an enlarged data set to (1) compare measured meander geometry to that predicted by the Langbein and Leopold (1966) theory, (2) examine the frequency distribution of the ratio radius of curvature/channel width, and (3) derive 40 empirical equations (31 of which are original) involving meander and channel size features. The data set, part of which comes from publications by other authors, consists of 194 sites from a large variety of physiographic environments in various countries. The Langbein-Leopold sine-generated-curve theory for predicting radius of curvature agrees very well with the field data (78 sites). The ratio radius of curvature/channel width has a modal value in the range of 2 to 3, in accordance with earlier work; about one third of the 79 values are less than 2.0. The 40 empirical relations, most of which include only two variables, involve channel cross-section dimensions (bankfull area, width, and mean depth) and meander features (wavelength, bend length, radius of curvature, and belt width). These relations have very high correlation coefficients, most being in the range of 0.95-0.99. Although channel width traditionally has served as a scale indicator, bankfull cross-sectional area and mean depth also can be used for this purpose. © 1986.

  9. Temperature-influenced energetics model for migrating waterfowl

    USGS Publications Warehouse

    Aagaard, Kevin; Thogmartin, Wayne E.; Lonsdorg, Eric V.

    2018-01-01

    Climate and weather affect avian migration by influencing when and where birds fly, the energy costs and risks of flight, and the ability to sense cues necessary for proper navigation. We review the literature of the physiology of avian migration and the influence of climate, specifically temperature, on avian migration dynamics. We use waterfowl as a model guild because of the ready availability of empirical physiological data and their enormous economic value, but our discussion and expectations are broadly generalizable to migratory birds in general. We detail potential consequences of an increasingly warm climate on avian migration, including the possibility of the cessation of migration by some populations and species. Our intent is to lay the groundwork for including temperature effects on energetic gains and losses of migratory birds with the expected consequences of increasing temperatures into a predictive modeling framework. To this end, we provide a simulation of migration progression exclusively focused on the influence of temperature on the physiological determinants of migration. This simulation produced comparable results to empirically derived and observed values for different migratory factors (e.g., body fat content, flight range, departure date). By merging knowledge from the arenas of avian physiology and migratory theory we have identified a clear need for research and have developed hypotheses for a path forward.

  10. ISC-GEM: Global Instrumental Earthquake Catalogue (1900-2009), III. Re-computed MS and mb, proxy MW, final magnitude composition and completeness assessment

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Bondár, István; Storchak, Dmitry A.; Engdahl, E. Robert; Bormann, Peter; Harris, James

    2015-02-01

    This paper outlines the re-computation and compilation of the magnitudes now contained in the final ISC-GEM Reference Global Instrumental Earthquake Catalogue (1900-2009). The catalogue is available via the ISC website (http://www.isc.ac.uk/iscgem/). The available re-computed MS and mb provided an ideal basis for deriving new conversion relationships to moment magnitude MW. Therefore, rather than using previously published regression models, we derived new empirical relationships using both generalized orthogonal linear and exponential non-linear models to obtain MW proxies from MS and mb. The new models were tested against true values of MW, and the newly derived exponential models were then preferred to the linear ones in computing MW proxies. For the final magnitude composition of the ISC-GEM catalogue, we preferred directly measured MW values as published by the Global CMT project for the period 1976-2009 (plus intermediate-depth earthquakes between 1962 and 1975). In addition, over 1000 publications have been examined to obtain direct seismic moment M0 and, therefore, also MW estimates for 967 large earthquakes during 1900-1978 (Lee and Engdahl, 2015) by various alternative methods to the current GCMT procedure. In all other instances we computed MW proxy values by converting our re-computed MS and mb values into MW, using the newly derived non-linear regression models. The final magnitude composition is an improvement in terms of magnitude homogeneity compared to previous catalogues. The magnitude completeness is not homogeneous over the 110 years covered by the ISC-GEM catalogue. Therefore, seismicity rate estimates may be strongly affected without a careful time window selection. In particular, the ISC-GEM catalogue appears to be complete down to MW 5.6 starting from 1964, whereas for the early instrumental period the completeness varies from ∼7.5 to 6.2. Further time and resources would be necessary to homogenize the magnitude of completeness over the entire catalogue length.

  11. Derivation of the Freundlich Adsorption Isotherm from Kinetics

    ERIC Educational Resources Information Center

    Skopp, Joseph

    2009-01-01

    The Freundlich adsorption isotherm is a useful description of adsorption phenomena. It is frequently presented as an empirical equation with little theoretical basis. In fact, a variety of derivations exist. Here a new derivation is presented using the concepts of fractal reaction kinetics. This derivation provides an alternative basis for…

  12. On the degrees of freedom of reduced-rank estimators in multivariate regression

    PubMed Central

    Mukherjee, A.; Chen, K.; Wang, N.; Zhu, J.

    2015-01-01

    We study the effective degrees of freedom of a general class of reduced-rank estimators for multivariate regression in the framework of Stein's unbiased risk estimation. A finite-sample exact unbiased estimator is derived that admits a closed-form expression in terms of the thresholded singular values of the least-squares solution and hence is readily computable. The results continue to hold in the high-dimensional setting where both the predictor and the response dimensions may be larger than the sample size. The derived analytical form facilitates the investigation of theoretical properties and provides new insights into the empirical behaviour of the degrees of freedom. In particular, we examine the differences and connections between the proposed estimator and a commonly used naive estimator. The use of the proposed estimator leads to efficient and accurate prediction risk estimation and model selection, as demonstrated by simulation studies and a data example. PMID:26702155

  13. Ab initio predictions on the rotational spectra of carbon-chain carbene molecules.

    PubMed

    Maluendes, S A; McLean, A D

    1992-12-18

    We predict rotational constants for the carbon-chain molecules H2C=(C=)nC, n=3-8, using ab initio computations, observed values for the earlier members in the series, H2CCC and H2CCCC with n=1 and 2, and empirical geometry corrections derived from comparison of computation and experiment on related molecules. H2CCC and H2CCCC have already been observed by radioastronomy; higher members in the series, because of their large dipole moments, which we have calculated, are candidates for astronomical searches. Our predictions can guide searches and assist in both astronomical and laboratory detection.

  14. Ab initio predictions on the rotational spectra of carbon-chain carbene molecules

    NASA Technical Reports Server (NTRS)

    Maluendes, S. A.; McLean, A. D.; Loew, G. H. (Principal Investigator)

    1992-01-01

    We predict rotational constants for the carbon-chain molecules H2C=(C=)nC, n=3-8, using ab initio computations, observed values for the earlier members in the series, H2CCC and H2CCCC with n=1 and 2, and empirical geometry corrections derived from comparison of computation and experiment on related molecules. H2CCC and H2CCCC have already been observed by radioastronomy; higher members in the series, because of their large dipole moments, which we have calculated, are candidates for astronomical searches. Our predictions can guide searches and assist in both astronomical and laboratory detection.

  15. Design optimization of a brush turbine with a cleaner/water based solution

    NASA Technical Reports Server (NTRS)

    Kim, Rhyn H.

    1995-01-01

    Recently, a turbine-brush was analyzed based on energy conservation and the force-momentum equation, with an empirical relationship for the drag coefficient. An equation was derived to predict the rotational speed of the turbine-brush in terms of the blade angle, number of blades, the remaining geometry of the turbine-brush, and the incoming velocity. Using the observed flow conditions, drag coefficients were determined. Using the experimental values as boundary conditions, the turbine-brush flows were numerically simulated, first to understand the nature of the flows, and then to extend the observed drag coefficient to a flow without holding the turbine-brush.

  16. Thermodynamic properties of semiconductor compounds studied based on Debye-Waller factors

    NASA Astrophysics Data System (ADS)

    Van Hung, Nguyen; Toan, Nguyen Cong; Ba Duc, Nguyen; Vuong, Dinh Quoc

    2015-08-01

    Thermodynamic properties of semiconductor compounds have been studied based on Debye-Waller factors (DWFs) described by the mean square displacement (MSD), which has a close relation with the mean square relative displacement (MSRD). Their analytical expressions have been derived based on the statistical moment method (SMM) and the empirical many-body Stillinger-Weber potentials. Numerical results for the MSDs of GaAs, GaP, InP, InSb, which have zinc-blende structure, are found to be in reasonable agreement with experiment and other theories. This paper shows that an element's MSD value depends on the binary semiconductor compound within which it resides.

  17. Violent Crime in Post-Civil War Guatemala: Causes and Policy Implications

    DTIC Science & Technology

    2015-03-01

    ... on field research and case studies in Honduras, Bolivia, and Argentina. Bailey's Security Trap theory is comprehensive in nature and derived from ... research question. The second phase uses empirical data and comparative case studies to validate or challenge selected arguments that potentially ... [Figure 2: Sample research methodology.]

  18. Semi-empirical spectrophotometric (SESp) method for the indirect determination of the ratio of cationic micellar binding constants of counterions X⁻ and Br⁻(K(X)/K(Br)).

    PubMed

    Khan, Mohammad Niyaz; Yusof, Nor Saadah Mohd; Razak, Norazizah Abdul

    2013-01-01

    The semi-empirical spectrophotometric (SESp) method, for the indirect determination of ion exchange constants (K(X)(Br)) of ion exchange processes occurring between counterions (X⁻ and Br⁻) at the cationic micellar surface, is described in this article. The method uses an anionic spectrophotometric probe molecule, N-(2-methoxyphenyl)phthalamate ion (1⁻), which measures the effects of varying concentrations of inert inorganic or organic salt (Na(v)X, v = 1, 2) on the absorbance (A(ob)) at 310 nm of samples containing constant concentrations of 1⁻, NaOH and cationic micelles. The observed data fit satisfactorily to an empirical equation which gives the values of two empirical constants. These empirical constants lead to the determination of K(X)(Br) (= K(X)/K(Br), with K(X) and K(Br) representing the cationic micellar binding constants of counterions X⁻ and Br⁻). This method gives values of K(X)(Br) for both moderately hydrophobic and hydrophilic X⁻. The values of K(X)(Br) obtained by using this method are comparable with the corresponding values of K(X)(Br) obtained by the use of the semi-empirical kinetic (SEK) method for different moderately hydrophobic X⁻. The values of K(X)(Br) for X = Cl⁻ and 2,6-Cl₂C6H₃CO₂⁻, obtained by the use of SESp and SEK methods, are similar to those obtained by the use of other different conventional methods.

  19. Directional effects on NDVI and LAI retrievals from MODIS: A case study in Brazil with soybean

    NASA Astrophysics Data System (ADS)

    Breunig, Fábio Marcelo; Galvão, Lênio Soares; Formaggio, Antônio Roberto; Epiphanio, José Carlos Neves

    2011-02-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) is largely used to estimate Leaf Area Index (LAI) using radiative transfer modeling (the "main" algorithm). When this algorithm fails for a pixel, which frequently occurs over Brazilian soybean areas, an empirical model (the "backup" algorithm) based on the relationship between the Normalized Difference Vegetation Index (NDVI) and LAI is utilized. The objective of this study is to evaluate directional effects on NDVI and subsequent LAI estimates using global (biome 3) and local empirical models, as a function of the soybean development in two growing seasons (2004-2005 and 2005-2006). The local model was derived from the pixels that had LAI values retrieved from the main algorithm. In order to keep the reproductive stage for a given cultivar as a constant factor while varying the viewing geometry, pairs of MODIS images acquired in close dates from opposite directions (backscattering and forward scattering) were selected. Linear regression relationships between the NDVI values calculated from these two directions were evaluated for different view angles (0-25°; 25-45°; 45-60°) and development stages (<45; 45-90; >90 days after planting). Impacts on LAI retrievals were analyzed. Results showed higher reflectance values in backscattering direction due to the predominance of sunlit soybean canopy components towards the sensor and higher NDVI values in forward scattering direction due to stronger shadow effects in the red waveband. NDVI differences between the two directions were statistically significant for view angles larger than 25°. The main algorithm for LAI estimation failed in the two growing seasons with gradual crop development. As a result, up to 94% of the pixels had LAI values calculated from the backup algorithm at the peak of canopy closure. Most of the pixels selected to compose the 8-day MODIS LAI product came from the forward scattering view because it displayed larger LAI values than the backscattering. Directional effects on the subsequent LAI retrievals were stronger at the peak of the soybean development (NDVI values between 0.70 and 0.85). When the global empirical model was used, LAI differences up to 3.2 for consecutive days and opposite viewing directions were observed. Such differences were reduced to values up to 1.5 with the local model. Because of the predominance of LAI retrievals from the MODIS backup algorithm during the Brazilian soybean development, care is necessary if one considers using these data in agronomic growing/yield models.
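    A minimal sketch of the NDVI computation and the opposite-direction comparison described above; the reflectance values are made up, and the regression mirrors the paper's backscatter-versus-forward analysis:

    ```python
    import numpy as np

    def ndvi(red, nir):
        """Normalized Difference Vegetation Index from red/NIR reflectances."""
        return (nir - red) / (nir + red)

    # made-up paired observations of the same pixels from opposite view geometries
    red_b, nir_b = np.array([0.06, 0.05, 0.04]), np.array([0.35, 0.45, 0.55])
    red_f, nir_f = np.array([0.05, 0.04, 0.03]), np.array([0.36, 0.47, 0.57])

    v_back, v_fwd = ndvi(red_b, nir_b), ndvi(red_f, nir_f)
    slope, intercept = np.polyfit(v_back, v_fwd, 1)    # forward vs. backscatter
    print(v_fwd - v_back, slope, intercept)            # forward NDVI tends higher
    ```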

  20. Empirical Data Fusion for Convective Weather Hazard Nowcasting

    NASA Astrophysics Data System (ADS)

    Williams, J.; Ahijevych, D.; Steiner, M.; Dettling, S.

    2009-09-01

    This paper describes a statistical analysis approach to developing an automated convective weather hazard nowcast system suitable for use by aviation users in strategic route planning and air traffic management. The analysis makes use of numerical weather prediction model fields and radar, satellite, and lightning observations and derived features along with observed thunderstorm evolution data, which are aligned using radar-derived motion vectors. Using a dataset collected during the summers of 2007 and 2008 over the eastern U.S., the predictive contributions of the various potential predictor fields are analyzed for various spatial scales, lead-times and scenarios using a technique called random forests (RFs). A minimal, skillful set of predictors is selected for each scenario requiring distinct forecast logic, and RFs are used to construct an empirical probabilistic model for each. The resulting data fusion system, which ran in real-time at the National Center for Atmospheric Research during the summer of 2009, produces probabilistic and deterministic nowcasts of the convective weather hazard and assessments of the prediction uncertainty. The nowcasts' performance and results for several case studies are presented to demonstrate the value of this approach. This research has been funded by the U.S. Federal Aviation Administration to support the development of the Consolidated Storm Prediction for Aviation (CoSPA) system, which is intended to provide convective hazard nowcasts and forecasts for the U.S. Next Generation Air Transportation System (NextGen).
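
    The sketch below illustrates the core technique, a random forest converting fused predictor fields into a probabilistic hazard nowcast, using scikit-learn; the features, labels, and parameters are invented placeholders and not part of CoSPA.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Toy stand-in for the fused predictor fields (NWP, radar, satellite,
    # lightning features) and a binary convective-hazard label at one lead time.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 8))            # 8 hypothetical predictor fields
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 5000) > 1).astype(int)

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:4000], y[:4000])

    # Probabilistic nowcast: class-1 vote fraction; feature_importances_
    # plays the role of the paper's minimal-predictor-selection step.
    p_hazard = rf.predict_proba(X[4000:])[:, 1]
    print(rf.feature_importances_.round(3), p_hazard[:5].round(2))
    ```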

  1. Determining the non-inferiority margin for patient reported outcomes.

    PubMed

    Gerlinger, Christoph; Schmelter, Thomas

    2011-01-01

    One of the cornerstones of any non-inferiority trial is the choice of the non-inferiority margin delta. This threshold of clinical relevance is very difficult to determine, and in practice, delta is often "negotiated" between the sponsor of the trial and the regulatory agencies. However, for patient-reported, or more precisely patient-observed outcomes, the patients' minimal clinically important difference (MCID) can be determined empirically by relating the treatment effect, for example, a change on a 100-mm visual analogue scale, to the patient's satisfaction with the change. This MCID can then be used to define delta. We used an anchor-based approach with non-parametric discriminant analysis and ROC analysis, and a distribution-based approach with Norman's half standard deviation rule, to determine delta in three examples: endometriosis-related pelvic pain measured on a 100-mm visual analogue scale, facial acne measured by lesion counts, and hot flush counts. For each of these examples, all three methods yielded quite similar results. In two of the cases, the empirically derived MCIDs were smaller than or similar to the deltas used previously in non-inferiority trials, and in the third case, the empirically derived MCID was used to derive a responder definition that was accepted by the FDA. In conclusion, for patient-observed endpoints, delta can be derived empirically. In our view, this is a better approach than asking the clinician for a "nice round number" for delta, such as 10, 50%, π, e, or i. Copyright © 2011 John Wiley & Sons, Ltd.
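
    Both empirical routes to delta described above can be sketched in a few lines: an anchor-based ROC cutoff chosen by Youden's index, and Norman's half-standard-deviation rule. All data below are simulated placeholders.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(2)
    # Simulated change scores on a 100-mm VAS, anchored to patient-reported
    # satisfaction with the change (1 = satisfied, 0 = not satisfied).
    satisfied = rng.integers(0, 2, 300)
    change = np.where(satisfied == 1,
                      rng.normal(25, 12, 300),   # satisfied patients improve more
                      rng.normal(8, 12, 300))

    fpr, tpr, thr = roc_curve(satisfied, change)
    mcid_anchor = thr[np.argmax(tpr - fpr)]      # Youden's J selects the cutoff
    mcid_half_sd = 0.5 * change.std(ddof=1)      # Norman's distribution-based rule
    print(f"anchor-based MCID ~ {mcid_anchor:.1f} mm, half-SD ~ {mcid_half_sd:.1f} mm")
    ```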

  2. Economic valuation of mangrove ecosystem: empirical studies in Timbulsloko Village, Sayung, Demak, Indonesia

    NASA Astrophysics Data System (ADS)

    Perdana, T. A.; Suprijanto, J.; Pribadi, R.; Collet, C. R.; Bailly, D.

    2018-03-01

    Ecosystem resilience is the capacity of an ecosystem to tolerate disturbance without collapsing into a qualitatively different state controlled by a different set of processes. A resilient ecosystem can withstand shocks and rebuild itself when necessary. This study aims to identify the current use values and non-use values of the ecosystem, to calculate the total economic value of the mangrove resources, and to provide suggestions and recommendations based on observations in Timbulsloko, Sayung, Demak. The method used is economic valuation with the total economic value technique; the sampling technique was a non-probability, purposive method. The results showed that the direct use value of mangroves was utilized by fishermen, fish pond farmers, branjang catchers, oyster catchers, trap makers, shop owners, grilled fish makers and shrimp chip makers. Indirect use value was derived from the mangroves' function as breakwater, beach belt and hybrid engineering. Existence value was not less than 10% of the direct use value. The total economic value was Rp. 6,361,430,639/year, or about Rp. 202,335,580.1/ha/year. There is a need to improve community awareness of the mangrove ecosystem and of its breakwater role in order to reduce disaster risk, and to develop ecotourism in the area.
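
    A back-of-envelope reconstruction of the total economic value (TEV) accounting used here; only the total and per-hectare figures come from the abstract, and the component split is hypothetical.

    ```python
    # Rough TEV accounting: TEV = direct use + indirect use + existence value.
    direct_use   = 4.0e9              # Rp/year, hypothetical component split
    indirect_use = 1.96e9             # Rp/year, hypothetical
    existence    = 0.10 * direct_use  # "not less than 10% of direct use value"

    tev = direct_use + indirect_use + existence
    area_ha = 6_361_430_639 / 202_335_580.1   # area implied by the two reported totals
    print(f"TEV = Rp {tev:,.0f}/year over ~{area_ha:.1f} ha")
    ```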

  3. Economic selection indexes for Hereford and Braford cattle raised in southern Brazil.

    PubMed

    Costa, R F; Teixeira, B B M; Yokoo, M J; Cardoso, F F

    2017-07-01

    Economic selection indexes (EI) are considered the best way to select the most profitable animals for specific production systems. Nevertheless, in Brazil, few genetic evaluation programs deliver such indexes to their breeders. The aims of this study were to determine the breeding goals (BG) and economic values (EV, in US$) for typical beef cattle production systems in southern Brazil, to propose EI aimed at maximizing profitability, and to compare the proposed EI with the currently used empirical index. Bioeconomic models were developed to characterize 3 typical production systems, identifying traits of economic impact and their respective EV. The first, called the calf-crop system, included the birth rate (BR), direct weaning weight (WWd), and mature cow weight (MCW) as selection goals. The second, called the full-cycle system, had BR, WWd, MCW, and carcass weight (CW) as breeding goals. Finally, the third, called the stocking and finishing system, had WWd and CW as breeding goals. To generate the EI, we adopted the selection criteria currently measured and used in the empirical index of PampaPlus, the genetic evaluation program of the Brazilian Hereford and Braford Association. The comparison between the EI and the current PampaPlus index was made by the aggregate genetic-economic gain per generation (Δ). Therefore, for each production system an index was developed using the derived economic weights, and it was compared with the current empirical index. The relative importance (RI) of BR, WWd, and MCW for the calf-crop system was 68.03%, 19.35%, and 12.62%, respectively. For the full-cycle system, the RI of BR, WWd, MCW, and CW was 69.63%, 7.31%, 5.01%, and 18.06%, respectively. For the stocking and finishing production system, the RI of WWd and CW was 34.20% and 65.80%, respectively. The Δ for the calf-crop system was US$6.12 using the proposed economic index and US$4.36 using the empirical index. Respective values were US$19.87 and US$18.22 for the full-cycle system and US$20.52 and US$18.52 for the stocking and finishing system. The efficiency of the proposed EI had low sensitivity to changes in the values of the economic and genetic parameters. The 3 EI generated higher Δ when using the proposed economic weights than the Δ provided by the PampaPlus index, supporting the use of the proposed EI to obtain greater economic profitability relative to the current empirical PampaPlus index.
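
    A minimal sketch of how an economic selection index aggregates merit, H = sum of EV_i x EBV_i, together with a crude relative-importance calculation; the economic values and breeding values below are hypothetical, not the PampaPlus parameters.

    ```python
    import numpy as np

    # Aggregate merit H = sum(EV_i * EBV_i); all values are hypothetical.
    ev  = np.array([120.0, 0.8, -0.5])    # economic values for BR, WWd, MCW
    ebv = np.array([[0.02, 12.0, 5.0],    # estimated breeding values of two
                    [0.04,  8.0, 9.0]])   # candidate animals

    merit = ebv @ ev                      # $ of aggregate genetic-economic gain
    ri = np.abs(ev) * ebv.std(axis=0)     # crude relative-importance weights
    ri = 100 * ri / ri.sum()
    print(merit.round(2), ri.round(1))
    ```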

  4. Protein structure refinement using a quantum mechanics-based chemical shielding predictor.

    PubMed

    Bratholm, Lars A; Jensen, Jan H

    2017-03-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15; PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry-based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed, using force field geometry-optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant-temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1-0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid-specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å differences for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural change may be due to force field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this can help increase the accuracy of related approaches, such as QM/MM or linear-scaling approaches, and of efforts to interpret protein structural dynamics from QM-derived chemical shifts.
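
    A toy Metropolis/simulated-annealing sketch of chemical-shift-restrained refinement: a Gaussian likelihood links predicted shieldings to experimental shifts, and annealing minimizes the resulting pseudo-energy. The one-parameter "structure" and its predictor are stand-ins, not ProCS15.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def shift_energy(x, predict, exp_shifts, sigma=1.0):
        # Gaussian (probabilistic) link between predicted and experimental shifts
        return np.sum((predict(x) - exp_shifts) ** 2) / (2 * sigma ** 2)

    predict = lambda x: np.array([2.0 * x, x + 1.0])  # hypothetical predictor
    exp_shifts = np.array([4.1, 3.0])

    x = 0.0
    e = shift_energy(x, predict, exp_shifts)
    for temp in np.linspace(2.0, 0.01, 5000):           # annealing schedule
        x_new = x + rng.normal(0, 0.1)                  # trial "structural" move
        e_new = shift_energy(x_new, predict, exp_shifts)
        if np.log(rng.random()) < -(e_new - e) / temp:  # Metropolis criterion
            x, e = x_new, e_new
    print(f"refined parameter: {x:.3f}, residual pseudo-energy: {e:.4f}")
    ```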

  5. BEHAVIORAL HAZARD IN HEALTH INSURANCE*

    PubMed Central

    Baicker, Katherine; Mullainathan, Sendhil; Schwartzstein, Joshua

    2015-01-01

    A fundamental implication of standard moral hazard models is overuse of low-value medical care because copays are lower than costs. In these models, the demand curve alone can be used to make welfare statements, a fact relied on by much empirical work. There is ample evidence, though, that people misuse care for a different reason: mistakes, or “behavioral hazard.” Much high-value care is underused even when patient costs are low, and some useless care is bought even when patients face the full cost. In the presence of behavioral hazard, welfare calculations using only the demand curve can be off by orders of magnitude or even be the wrong sign. We derive optimal copay formulas that incorporate both moral and behavioral hazard, providing a theoretical foundation for value-based insurance design and a way to interpret behavioral “nudges.” Once behavioral hazard is taken into account, health insurance can do more than just provide financial protection—it can also improve health care efficiency. PMID:23930294

  6. Simulating the value of electric-vehicle-grid integration using a behaviourally realistic model

    NASA Astrophysics Data System (ADS)

    Wolinetz, Michael; Axsen, Jonn; Peters, Jotham; Crawford, Curran

    2018-02-01

    Vehicle-grid integration (VGI) uses the interaction between electric vehicles and the electrical grid to provide benefits that may include reducing the cost of using intermittent renewable electricity or providing a financial incentive for electric vehicle ownership. However, studies that estimate the value of VGI benefits have largely ignored how consumer behaviour will affect the magnitude of the impact. Here, we simulate the long-term impact of VGI using behaviourally realistic and empirically derived models of vehicle adoption and charging combined with an electricity system model. We focus on the case where a central entity manages the charging rate and timing for participating electric vehicles. VGI is found not to increase the adoption of electric vehicles, but it does have a small beneficial impact on electricity prices. By 2050, VGI reduces wholesale electricity prices by 0.6-0.7% (0.7 MWh⁻¹, 2010 CAD) relative to an equivalent scenario without VGI. Excluding consumer behaviour from the analysis inflates the value of VGI.

  7. A reevaluation of spectral ratios for lunar mare TiO2 mapping

    NASA Technical Reports Server (NTRS)

    Johnson, Jeffrey R.; Larson, Stephen M.; Singer, Robert B.

    1991-01-01

    The empirical relation established by Charette et al. (1974) between the 400/560-nm spectral ratio of mature mare soils and weight percent TiO2 has been used extensively to map titanium content in the lunar maria. Relative reflectance spectra of mare regions show that a reference wavelength further into the near-IR, e.g., above 700 nm, could be used in place of the 560-nm band to provide greater contrast (a greater range of ratio values) and hence a more sensitive indicator of titanium content. An analysis of 400/730-nm ratio values derived from both laboratory and telescopic relative reflectance spectra suggests that this ratio provides greater sensitivity to TiO2 content than the 400/560-nm ratio. The increased range of ratio values is manifested in higher-contrast 400/730-nm ratio images compared to 400/560-nm ratio images. This potential improvement in sensitivity encourages a reevaluation of the original Charette et al. (1974) relation using the 400/730-nm ratio.
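
    The contrast argument reduces to simple arithmetic: a longer-wavelength denominator that anticorrelates with the 400-nm reflectance stretches the spread of ratio values. A toy illustration with invented relative reflectances:

    ```python
    import numpy as np

    # Invented relative-reflectance values for three mare soils of increasing
    # TiO2 content, normalized at 560 nm.
    refl_400 = np.array([0.90, 0.95, 1.00])
    refl_560 = np.array([1.00, 1.00, 1.00])   # normalization wavelength
    refl_730 = np.array([1.12, 1.08, 1.04])   # redder (low-Ti) soils slope up more

    r_560 = refl_400 / refl_560
    r_730 = refl_400 / refl_730
    # A wider spread of ratio values means a more sensitive TiO2 indicator.
    print(f"range 400/560: {np.ptp(r_560):.3f}, range 400/730: {np.ptp(r_730):.3f}")
    ```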

  8. Impact of rapid methicillin-resistant Staphylococcus aureus polymerase chain reaction testing on mortality and cost effectiveness in hospitalized patients with bacteraemia: a decision model.

    PubMed

    Brown, Jack; Paladino, Joseph A

    2010-01-01

    Patients hospitalized with Staphylococcus aureus bacteraemia have an unacceptably high mortality rate. The literature available to date has shown that timely selection of the most appropriate antibacterial may reduce mortality. One tool that may help with this selection is a polymerase chain reaction (PCR) assay that distinguishes methicillin (meticillin)-resistant S. aureus (MRSA) from methicillin-susceptible S. aureus (MSSA) in less than 1 hour. To date, no information is available evaluating the impact of this PCR technique on clinical or economic outcomes. The objective was to evaluate the effect of a rapid PCR assay on mortality and economics, compared with traditional empiric therapy, using a literature-derived model. A literature search for peer-reviewed European (EU) and US publications regarding treatment regimens, outcomes and costs was conducted. Information detailing the rates of infection, as well as the specificity and sensitivity of a rapid PCR assay (Xpert MRSA/SA Blood Culture PCR), was obtained from the peer-reviewed literature. Sensitivity analysis varied the prevalence of MRSA from 5% to 80%, while threshold analysis was applied to the cost of the PCR test. Hospital and testing resource consumption were valued with direct medical costs, adjusted to year 2009 values. Adjusted life-years were determined using US and WHO life tables. The cost-effectiveness ratio was defined as the cost per life-year saved, and incremental cost-effectiveness ratios (ICERs) were calculated to determine the additional cost necessary to produce additional effectiveness. All analyses were performed using TreeAge software (2008). The mean mortality rates were 23% for patients receiving empiric vancomycin subsequently switched to semi-synthetic penicillin (SSP) for MSSA, 36% for patients receiving empiric vancomycin treatment for MRSA, 59% for patients receiving empiric SSP subsequently switched to vancomycin for MRSA, and 12% for patients receiving empiric SSP for MSSA. Furthermore, with an MRSA prevalence of 30%, the numbers of patients needed to test in order to save one life were 14 and 16 compared with empiric vancomycin and SSP, respectively. The absolute mortality difference for MRSA prevalence rates of 80% and 5% favoured the PCR testing group by 2% and 10%, respectively, compared with empiric vancomycin, and by 18% and 1%, respectively, compared with empiric SSP. In the EU, the cost-effectiveness ratios for empiric vancomycin- and SSP-treated patients were Euro 695 and Euro 687 per life-year saved, respectively, compared with Euro 636 per life-year saved for rapid PCR testing. In the US, the cost-effectiveness ratio was $US 898 per life-year saved for empiric vancomycin and $US 820 per life-year saved for rapid PCR testing. ICERs demonstrated dominance of the PCR test in all instances. Threshold analysis revealed that PCR testing would be less costly overall, even at greatly inflated assay prices. Rapid PCR testing for MRSA therefore appears to have the potential to reduce mortality rates while being less costly than empiric therapy in the EU and US, across a wide range of MRSA prevalence rates and PCR test costs.
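
    The mortality arithmetic behind the model can be reproduced from the rates quoted above. For simplicity this sketch assumes a perfect assay; the published model includes the PCR sensitivity and specificity, which is consistent with its slightly higher number needed to test of 14.

    ```python
    # Expected-mortality arithmetic for the decision model, using the rates
    # quoted above; a perfect PCR assay is assumed for simplicity.
    p_mrsa = 0.30                          # MRSA prevalence in the base case
    mort = {"vanc_then_ssp_mssa": 0.23,    # empiric vancomycin, MSSA
            "vanc_mrsa":          0.36,    # empiric vancomycin, MRSA
            "ssp_then_vanc_mrsa": 0.59,    # empiric SSP, MRSA
            "ssp_mssa":           0.12}    # empiric SSP, MSSA

    empiric_vanc = (1 - p_mrsa) * mort["vanc_then_ssp_mssa"] + p_mrsa * mort["vanc_mrsa"]
    pcr_guided   = (1 - p_mrsa) * mort["ssp_mssa"] + p_mrsa * mort["vanc_mrsa"]

    nnt = 1 / (empiric_vanc - pcr_guided)  # patients tested per life saved
    print(f"empiric vanc {empiric_vanc:.3f} vs PCR-guided {pcr_guided:.3f}; NNT ~ {nnt:.0f}")
    ```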

  9. A Comparison of Full and Empirical Bayes Techniques for Inferring Sea Level Changes from Tide Gauge Records

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Tingley, M.

    2016-12-01

    Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy and biased, with gaps and missing values, and they reflect land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirical Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and the parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible intervals on sea level values from the empirical Bayes method in 1910 and 2010 are 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, the empirical Bayes and full Bayes methods are applied to corrupted data from a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval encompasses only 70% of the true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
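
    A conjugate toy version of the contrast: fixing a variance parameter at its point estimate (the empirical Bayes flavour) gives a Normal interval for a mean, while integrating the variance out under flat priors (the full Bayes flavour) gives a wider Student-t interval. This illustrates the mechanism only and is not the paper's hierarchical sea level model.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    y = rng.normal(0.0, 2.0, size=15)   # short, noisy "tide gauge" record
    n, ybar, s = len(y), y.mean(), y.std(ddof=1)

    # Empirical Bayes flavour: treat the variance as known at its point
    # estimate, so the 95% interval for the mean is Normal.
    eb = stats.norm.interval(0.95, loc=ybar, scale=s / np.sqrt(n))

    # Full Bayes flavour (flat priors): integrating out the unknown variance
    # gives a Student-t marginal for the mean, i.e. wider, honest intervals.
    fb = stats.t.interval(0.95, df=n - 1, loc=ybar, scale=s / np.sqrt(n))

    print(f"EB width {eb[1] - eb[0]:.3f} < FB width {fb[1] - fb[0]:.3f}")
    ```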

  10. Psychosocial stressors and the prognosis of major depression: a test of Axis IV

    PubMed Central

    Gilman, Stephen E.; Trinh, Nhi-Ha; Smoller, Jordan W.; Fava, Maurizio; Murphy, Jane M.; Breslau, Joshua

    2013-01-01

    Background: Axis IV is for reporting “psychosocial and environmental problems that may affect the diagnosis, treatment, and prognosis of mental disorders.” No studies have examined the prognostic value of Axis IV in DSM-IV. Method: We analyzed data from 2,497 participants in the National Epidemiologic Survey on Alcohol and Related Conditions with major depressive episode (MDE). We hypothesized that psychosocial stressors predict a poor prognosis of MDE. Secondarily, we hypothesized that psychosocial stressors predict a poor prognosis of anxiety and substance use disorders. Stressors were defined according to DSM-IV’s taxonomy, and empirically using latent class analysis. Results: Primary support group problems, occupational problems, and childhood adversity increased the risks of depressive episodes and suicidal ideation by 20–30%. Associations of the empirically derived classes of stressors with depression were larger in magnitude. Economic stressors conferred a 1.5-fold increase in risk for a depressive episode (CI=1.2–1.9); financial and interpersonal instability conferred a 1.3-fold increased risk of recurrent depression (CI=1.1–1.6). These two classes of stressors also predicted the recurrence of anxiety and substance use disorders. Stressors were not related to suicidal ideation independent of depression severity. Conclusions: Psychosocial and environmental problems are associated with the prognosis of MDE and other Axis I disorders. Though DSM-IV’s taxonomy of stressors stands to be improved, these results provide empirical support for the prognostic value of Axis IV. Future work is needed to determine the reliability of Axis IV assessments in clinical practice, and the usefulness of this information in improving the clinical course of mental disorders. PMID:22640506

  11. First-order approximation for the pressure-flow relationship of spontaneously contracting lymphangions.

    PubMed

    Quick, Christopher M; Venugopal, Arun M; Dongaonkar, Ranjeet M; Laine, Glen A; Stewart, Randolph H

    2008-05-01

    To return lymph to the great veins of the neck, it must be actively pumped against a pressure gradient. Mean lymph flow in a portion of a lymphatic network has been characterized by an empirical relationship (P_in - P_out = -P_p + R_L·Q_L), where P_in - P_out is the axial pressure gradient and Q_L is the mean lymph flow. R_L and P_p are empirical parameters characterizing the effective lymphatic resistance and pump pressure, respectively. The relation of these global empirical parameters to the properties of lymphangions, the segments of a lymphatic vessel bounded by valves, has been problematic. Lymphangions have a structure like blood vessels but cyclically contract like cardiac ventricles; they are characterized by a contraction frequency (f) and by the slopes of the end-diastolic pressure-volume relationship [minimum resulting elastance, E_min] and the end-systolic pressure-volume relationship [maximum resulting elastance, E_max]. Poiseuille's law provides a first-order approximation relating the pressure-flow relationship to the fundamental properties of a blood vessel. No analogous formula exists for a pumping lymphangion. We therefore derived an algebraic formula predicting lymphangion flow from fundamental physical principles and known lymphangion properties. Quantitative analysis revealed that lymph inertia and resistance to lymph flow are negligible and that lymphangions act like a series of interconnected ventricles. For a single lymphangion, P_p = P_in·(E_max - E_min)/E_min and R_L = E_max/f. The formula was tested against a validated, realistic mathematical model of a lymphangion and found to be accurate. Predicted flows were within the range of flows measured in vitro. The present work therefore provides a general solution that makes it possible to relate fundamental lymphangion properties to lymphatic system function.
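
    Plugging hypothetical but physiologically flavoured lymphangion properties into the two derived relations and the pressure-flow equation above:

    ```python
    # Worked example of the derived first-order relations; the parameter
    # values are hypothetical, chosen only for illustration.
    P_in, P_out = 5.0, 9.0       # cmH2O: lymph is pumped against a gradient
    E_min, E_max = 2.0, 20.0     # diastolic / systolic elastances
    f = 0.2                      # contractions per second

    P_p = P_in * (E_max - E_min) / E_min   # effective pump pressure
    R_L = E_max / f                        # effective lymphatic resistance
    Q_L = (P_in - P_out + P_p) / R_L       # from P_in - P_out = -P_p + R_L*Q_L
    print(f"P_p = {P_p:.1f} cmH2O, R_L = {R_L:.0f}, Q_L = {Q_L:.3f} mL/s")
    ```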

  12. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children.

    PubMed

    Djalal, Farah Mutiasari; Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness in concept development at younger ages. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and which requires a minimum of linguistic knowledge. The validity of the method is investigated, and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided; rather, the two variables are often significantly correlated in older children and even in adults.

  13. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children

    PubMed Central

    Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness in concept development at younger ages. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and which requires a minimum of linguistic knowledge. The validity of the method is investigated, and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children’s category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided; rather, the two variables are often significantly correlated in older children and even in adults. PMID:27322371

  14. A Behavior-Analytic Account of Motivational Interviewing

    ERIC Educational Resources Information Center

    Christopher, Paulette J.; Dougher, Michael J.

    2009-01-01

    Several published reports have now documented the clinical effectiveness of motivational interviewing (MI). Despite its effectiveness, there are no generally accepted or empirically supported theoretical accounts of its effects. The theoretical accounts that do exist are mentalistic, descriptive, and not based on empirically derived behavioral…

  15. Classification of Marital Relationships: An Empirical Approach.

    ERIC Educational Resources Information Center

    Snyder, Douglas K.; Smith, Gregory T.

    1986-01-01

    Derives an empirically based classification system of marital relationships, employing a multidimensional self-report measure of marital interaction. Spouses' profiles on the Marital Satisfaction Inventory for samples of clinic and nonclinic couples were subjected to cluster analysis, resulting in separate five-group typologies for husbands and…

  16. Energy loss straggling in Aluminium foils for Li and C ions in fractional energy loss limits (ΔE/E) ∼10-60%

    NASA Astrophysics Data System (ADS)

    Diwan, P. K.; Kumar, Sunil; Kumar, Shyam; Sharma, V.; Khan, S. A.; Avasthi, D. K.

    2016-02-01

    The energy loss straggling of Li and C ions in Al foils of various thicknesses has been measured within the fractional energy loss limit (ΔE/E) ∼ 10-60%. These measurements were performed using the 15UD Pelletron accelerator facility at the Inter University Accelerator Centre (IUAC), New Delhi, India. The measured straggling values have been compared with the corresponding predictions of popularly used collisional straggling formulations, viz. Bohr, Lindhard-Scharff, Bethe-Livingston, and Titeica. In addition, the experimental data have been compared with the empirical formula of Yang et al. and with the Close Form Model recently proposed by Montanari et al. The straggling values derived from Titeica's theory were found to be in better agreement with the measured values than those from the other straggling formulations. The charge-exchange straggling component was then estimated from the measured data on the basis of Titeica's theory. Finally, a function of the ion effective charge and the energy loss fraction within the target was fitted to this charge-exchange component.
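
    For orientation, the baseline of the collisional formulations compared here is Bohr's straggling variance, which grows linearly with the traversed thickness; this standard result is quoted for context and is not taken from the paper:

    ```latex
    % Bohr's collisional energy-loss straggling (Gaussian CGS units):
    %   Z_1 e    = projectile charge,
    %   N Z_2    = target electron density,
    %   \Delta x = traversed target thickness.
    \[
      \Omega_{\mathrm{B}}^{2} \;=\; 4\pi\, Z_{1}^{2}\, e^{4}\, N Z_{2}\, \Delta x
    \]
    ```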

  17. Incorporating the Cultural Value of Respeto Into a Framework of Latino Parenting

    PubMed Central

    Calzada, Esther J.; Fernandez, Yenny; Cortes, Dharma E.

    2015-01-01

    Latino families face multiple stressors associated with adjusting to United States mainstream culture that, along with poverty and residence in inner-city communities, may further predispose their children to risk for negative developmental outcomes. Evidence-based mental health treatments may require culturally informed modifications to best address the unique needs of the Latino population, yet few empirical studies have assessed these cultural elements. The current study examined cultural values of 48 Dominican and Mexican mothers of preschoolers through focus groups in which they described their core values as related to their parenting role. Results showed that respeto, family and religion were the most important values that mothers sought to transmit to their children. Respeto is manifested in several domains, including obedience to authority, deference, decorum, and public behavior. The authors describe the socialization messages that Latina mothers use to teach their children respeto and present a culturally derived framework of how these messages may relate to child development. The authors discuss how findings may inform the cultural adaptation of evidence-based mental health treatments such as parent training programs. PMID:20099967

  18. A satellite AOT derived from the ground sky transmittance measurements

    NASA Astrophysics Data System (ADS)

    Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Tan, K. C.; Wong, C. J.; Saleh, N. Mohd.

    2008-10-01

    The optical properties of aerosols, such as smoke from burning, vary due to aging processes, and these particles reach larger sizes at high concentrations. The objectives of this study are to develop and evaluate an algorithm for estimating atmospheric optical thickness from Landsat TM imagery. Sky transmittance was measured at the ground using a handheld spectroradiometer over a wide wavelength range to retrieve atmospheric optical thickness, and the in situ atmospheric transmittance data were collected simultaneously with the acquisition of the remotely sensed satellite data. The digital numbers for the three visible bands corresponding to the in situ locations were extracted and then converted into reflectance values. The reflectance measured by the satellite was reduced by the surface reflectance to obtain the atmospheric reflectance, and these atmospheric reflectance values were used to calibrate the AOT algorithm. This study developed an empirical method to estimate AOT values from the sky transmittance values. Finally, an AOT map was generated using the proposed algorithm and colour-coded for visual interpretation.
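
    The physical core of any transmittance-based retrieval is the Beer-Lambert law. The sketch below shows that relation only; it is not the paper's calibrated algorithm, and all numbers are illustrative.

    ```python
    import numpy as np

    # Beer-Lambert: T = exp(-m * tau), with airmass m. Subtracting the
    # (assumed known) molecular contribution leaves an aerosol optical
    # thickness estimate. All values here are hypothetical.
    T = 0.72                 # measured transmittance at one visible band
    m = 1.15                 # relative airmass from the solar zenith angle
    tau_total = -np.log(T) / m
    tau_rayleigh = 0.14      # hypothetical Rayleigh contribution at this band
    aot = tau_total - tau_rayleigh
    print(f"tau_total = {tau_total:.3f}, AOT ~ {aot:.3f}")
    ```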

  19. Incorporating the cultural value of respeto into a framework of Latino parenting.

    PubMed

    Calzada, Esther J; Fernandez, Yenny; Cortes, Dharma E

    2010-01-01

    Latino families face multiple stressors associated with adjusting to United States mainstream culture that, along with poverty and residence in inner-city communities, may further predispose their children to risk for negative developmental outcomes. Evidence-based mental health treatments may require culturally informed modifications to best address the unique needs of the Latino population, yet few empirical studies have assessed these cultural elements. The current study examined cultural values of 48 Dominican and Mexican mothers of preschoolers through focus groups in which they described their core values as related to their parenting role. Results showed that respeto, family and religion were the most important values that mothers sought to transmit to their children. Respeto is manifested in several domains, including obedience to authority, deference, decorum, and public behavior. The authors describe the socialization messages that Latina mothers use to teach their children respeto and present a culturally derived framework of how these messages may relate to child development. The authors discuss how findings may inform the cultural adaptation of evidence-based mental health treatments such as parent training programs. (c) 2009 APA, all rights reserved.

  20. Development of a detector model for generation of synthetic radiographs of cargo containers

    NASA Astrophysics Data System (ADS)

    White, Timothy A.; Bredt, Ofelia P.; Schweppe, John E.; Runkle, Robert C.

    2008-05-01

    Creation of synthetic cargo-container radiographs that possess attributes of their empirical counterparts requires accurate models of the imaging-system response. Synthetic radiographs serve as surrogate data in studies aimed at determining system effectiveness for detecting target objects when it is impractical to collect a large set of empirical radiographs. In the case where a detailed understanding of the detector system is available, an accurate detector model can be derived from first-principles. In the absence of this detail, it is necessary to derive empirical models of the imaging-system response from radiographs of well-characterized objects. Such a case is the topic of this work, where we demonstrate the development of an empirical model of a gamma-ray radiography system with the intent of creating a detector-response model that translates uncollided photon transport calculations into realistic synthetic radiographs. The detector-response model is calibrated to field measurements of well-characterized objects thus incorporating properties such as system sensitivity, spatial resolution, contrast and noise.
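
    A minimal sketch of such an empirical detector-response chain: blur, gain and offset, and counting noise imposed on an idealized uncollided-transmission image. All parameter values are hypothetical stand-ins for quantities that would be calibrated against radiographs of well-characterized objects.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(5)

    # Idealized uncollided-photon transmission image: an object in a container.
    ideal = np.ones((64, 64))
    ideal[20:44, 20:44] = 0.3

    blurred = gaussian_filter(ideal, sigma=1.5)     # spatial resolution (PSF)
    signal = 800.0 * blurred + 40.0                 # system gain and offset
    radiograph = rng.poisson(signal).astype(float)  # counting noise
    print(radiograph.mean(), radiograph.std())
    ```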

  1. Flow properties of the solar wind obtained from white light data, Ulysses observations and a two-fluid model

    NASA Technical Reports Server (NTRS)

    Habbal, Shadia Rifai; Esser, Ruth; Guhathakurta, Madhulika; Fisher, Richard

    1995-01-01

    Using the empirical constraints provided by observations in the inner corona and in interplanetary space, we derive the flow properties of the solar wind using a two-fluid model. Densities and scale-height temperatures are derived from white-light coronagraph observations from SPARTAN 201-1 and at Mauna Loa, from 1.16 to 5.5 solar radii, in the two polar coronal holes on 11-12 Apr. 1993. Interplanetary measurements of the flow speed and proton mass flux are taken from the Ulysses south polar passage. By comparing the results of the model computations that fit the empirical constraints in the two coronal hole regions, we show how line-of-sight effects influence the empirical inferences and, subsequently, the corresponding numerical results.

  2. Empirical ethics and its alleged meta-ethical fallacies.

    PubMed

    de Vries, Rob; Gordijn, Bert

    2009-05-01

    This paper analyses the concept of empirical ethics as well as three meta-ethical fallacies that empirical ethics is said to face: the is-ought problem, the naturalistic fallacy and violation of the fact-value distinction. Moreover, it answers the question of whether empirical ethics (necessarily) commits these three basic meta-ethical fallacies.

  3. SU-F-T-158: Experimental Characterization of Field Size Dependence of Dose and Lateral Beam Profiles of Scanning Proton and Carbon Ion Beams for Empirical Model in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Hsi, W; Zhao, J

    2016-06-15

    Purpose: The Gaussian model for the lateral profiles in air is crucial for an accurate treatment planning system. The field-size dependence of dose and the lateral beam profiles of scanning proton and carbon ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and to secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field-size dependence of dose at the field center in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of the lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%-80% lateral penumbras predicted by the double Gaussian model for protons and the single Gaussian model for carbon, using error functions, agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field-size dependence of dose for the empirical model in air was at most 0.74% for protons with the double Gaussian and 0.57% for carbon with the single Gaussian. Conclusion: We have demonstrated that a double Gaussian model of the lateral beam profiles is significantly better than a single Gaussian model for protons, while a single Gaussian model is sufficient for carbon. The empirical equation may be used to double-check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot-scanning proton and carbon ion beams cannot be directly applied to irregularly shaped patient fields, but it can provide reference values for clinical use and quality assurance.
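
    The single- versus double-Gaussian comparison can be sketched on a synthetic lateral profile with a low-amplitude halo; all numbers are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def single_gauss(x, a, s):
        return a * np.exp(-x**2 / (2 * s**2))

    def double_gauss(x, a1, s1, a2, s2):
        # Narrow core plus wide halo (nuclear-interaction tail), as for protons
        return single_gauss(x, a1, s1) + single_gauss(x, a2, s2)

    # Hypothetical measured lateral profile with a low-amplitude halo
    x = np.linspace(-30, 30, 121)
    y = double_gauss(x, 1.0, 4.0, 0.05, 12.0) \
        + np.random.default_rng(6).normal(0, 0.002, x.size)

    p1, _ = curve_fit(single_gauss, x, y, p0=[1, 5])
    p2, _ = curve_fit(double_gauss, x, y, p0=[1, 5, 0.1, 15])
    for name, f, p in [("single", single_gauss, p1), ("double", double_gauss, p2)]:
        print(name, np.sqrt(np.mean((y - f(x, *p)) ** 2)))  # double should win
    ```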

  4. Values Education in Ottoman Empire in the Second Constitutional Period: A Sample Lesson

    ERIC Educational Resources Information Center

    Oruc, Sahin; Ilhan, Genc Osman

    2015-01-01

    Values education holds a significant place in an education environment and many studies are carried out about this new subject area. The aim of this study is to define how the subject of "values education" is handled in a sample lesson designed in the period of Constitution II in the Ottoman Empire. In this study, the lesson plan in the…

  5. Value-oriented citizenship index: New extensions of Kelman and Hamilton's theory to prevent autocracy.

    PubMed

    Morselli, Davide; Passini, Stefano

    2015-11-01

    In Crimes of Obedience, Kelman and Hamilton argue that societies can be protected from the degeneration of authority only when citizenship is based on a strong values orientation. This reference to values may be the weakest point in their theory, because they do not explicitly define these values. Nevertheless, their empirical findings suggest that the authors are referring to specific democratic principles and universal values (e.g., equality, fairness, harmlessness). In this article, a composite index known as the value-oriented citizenship (VOC) index is introduced and empirically analysed. The results confirm that the VOC index discriminates between people who relate to authority based on values rather than based on their role or on rules in general. The article discusses the utility of the VOC index for developing Kelman and Hamilton's framework further empirically, as well as its implications for the analysis of the relationship between individuals and authority. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Empire's recent history, as seen from the Special Advisory Review Panel on Blue Cross.

    PubMed

    Barba, J J

    1997-01-01

    Empire is a smaller and more financially stable company that no longer has an externally imposed social mission. The board and management of Empire have decided to convert to a for-profit company, to compete in the marketplace. In light of this decision, they also decided to turn over the company's charitable value to a new foundation. Because Empire's board has chosen not to maintain a social mission, the Panel strongly supports its proposal to turn over the full value of the charitable asset. This will allow the asset to be used for purposes that are in keeping with Empire's original social mission. Exactly how this asset should be valued, what form it should take, when it should be turned over, who should control the assets, and what activities it should support are just a few of the many important issues that must be resolved during the next few months. Empire will not and should not remain stagnant during the next few months. Given the rapidly evolving health-care market, Empire's board and management must continue to pursue a market strategy that strengthens the company. However, given the factors discussed earlier--hospital deregulation, the increasingly competitive managed-care market, and other pressures in the health-care environment--it is clear that the road ahead for Empire will not be a smooth one and that the company's financial resurgence is no guarantee of continued stability. Much hard work remains. I am confident that Empire's board and its management will continue to do their part, and that the Panel will continue to do likewise.

  7. Empire's recent history, as seen from the Special Advisory Review Panel on Blue Cross.

    PubMed Central

    Barba, J. J.

    1997-01-01

    Empire is a smaller and more financially stable company that no longer has an externally imposed social mission. The board and management of Empire have decided to convert to a for-profit company, to compete in the marketplace. In light of this decision, they also decided to turn over the company's charitable value to a new foundation. Because Empire's board has chosen not to maintain a social mission, the Panel strongly supports its proposal to turn over the full value of the charitable asset. This will allow the asset to be used for purposes that are in keeping with Empire's original social mission. Exactly how this asset should be valued, what form it should take, when it should be turned over, who should control the assets, and what activities it should support are just a few of the many important issues that must be resolved during the next few months. Empire will not and should not remain stagnant during the next few months. Given the rapidly evolving health-care market, Empire's board and management must continue to pursue a market strategy that strengthens the company. However, given the factors discussed earlier--hospital deregulation, the increasingly competitive managed-care market, and other pressures in the health-care environment--it is clear that the road ahead for Empire will not be a smooth one and that the company's financial resurgence is no guarantee of continued stability. Much hard work remains. I am confident that Empire's board and its management will continue to do their part, and that the Panel will continue to do likewise. PMID:9439862

  8. Mechanistic quantitative structure-activity relationship model for the photoinduced toxicity of polycyclic aromatic hydrocarbons. 2: An empirical model for the toxicity of 16 polycyclic aromatic hydrocarbons to the duckweed Lemna gibba L. G-3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, X.D.; Krylov, S.N.; Ren, L.

    1997-11-01

    Photoinduced toxicity of polycyclic aromatic hydrocarbons (PAHs) occurs via photosensitization reactions (e.g., generation of singlet-state oxygen) and by photomodification (photooxidation and/or photolysis) of the chemicals to more toxic species. The quantitative structure-activity relationship (QSAR) described in the companion paper predicted, in theory, that photosensitization and photomodification contribute additively to toxicity. To substantiate this QSAR modeling exercise it was necessary to show that toxicity can be described by empirically derived parameters. The toxicity of 16 PAHs to the duckweed Lemna gibba was measured as inhibition of leaf production under simulated solar radiation (a light source with a spectrum similar to that of sunlight). A predictive model for toxicity was generated based on the theoretical model developed in the companion paper. The photophysical descriptors required of each PAH for modeling were the efficiency of photon absorbance, relative uptake, quantum yield for triplet-state formation, and the rate of photomodification. The photomodification rates of the PAHs showed a moderate correlation with toxicity, whereas a derived photosensitization factor (PSF; based on absorbance, triplet-state quantum yield, and uptake) for each PAH showed only a weak, complex correlation with toxicity. However, summing the rate of photomodification and the PSF resulted in a strong correlation with toxicity that had predictive value. When the PSF and a derived photomodification factor (PMF; based on the photomodification rate and the toxicity of the photomodified PAHs) were summed, an excellent explanatory model of toxicity was produced, substantiating the additive contributions of the two factors.
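
    The additive two-factor structure can be illustrated with an ordinary least-squares fit of toxicity on the two factors; the PSF and PMF values below are random placeholders, not the measured descriptors.

    ```python
    import numpy as np

    # Additive two-factor toxicity model: toxicity ~ w1*PMF + w2*PSF.
    rng = np.random.default_rng(7)
    pmf = rng.uniform(0, 1, 16)                       # 16 PAHs, hypothetical
    psf = rng.uniform(0, 1, 16)
    toxicity = 0.6 * pmf + 0.4 * psf + rng.normal(0, 0.05, 16)

    A = np.column_stack([pmf, psf])
    w, *_ = np.linalg.lstsq(A, toxicity, rcond=None)  # fit the additive weights
    r = np.corrcoef(A @ w, toxicity)[0, 1]
    print(w.round(2), f"r^2 = {r**2:.2f}")
    ```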

  9. A better sequence-read simulator program for metagenomics.

    PubMed

    Johnson, Stephen; Trost, Brett; Long, Jeffrey R; Pittet, Vanessa; Kusalik, Anthony

    2014-01-01

    There are many programs available for generating simulated whole-genome shotgun sequence reads. The data generated by many of these programs follow predefined models, which limits their use to the authors' original intentions. For example, many models assume that read lengths follow a uniform or normal distribution. Other programs generate models from actual sequencing data, but are limited to reads from single-genome studies. To our knowledge, there are no programs that allow a user to generate simulated data following non-parametric read-length distributions and quality profiles based on empirically-derived information from metagenomics sequencing data. We present BEAR (Better Emulation for Artificial Reads), a program that uses a machine-learning approach to generate reads with lengths and quality values that closely match empirically-derived distributions. BEAR can emulate reads from various sequencing platforms, including Illumina, 454, and Ion Torrent. BEAR requires minimal user input, as it automatically determines appropriate parameter settings from user-supplied data. BEAR also uses a unique method for deriving run-specific error rates, and extracts useful statistics from the metagenomic data itself, such as quality-error models. Many existing simulators are specific to a particular sequencing technology; however, BEAR is not restricted in this way. Because of its flexibility, BEAR is particularly useful for emulating the behaviour of technologies like Ion Torrent, for which no dedicated sequencing simulators are currently available. BEAR is also the first metagenomic sequencing simulator program that automates the process of generating abundances, which can be an arduous task. BEAR is useful for evaluating data processing tools in genomics. It has many advantages over existing comparable software, such as generating more realistic reads and being independent of sequencing technology, and has features particularly useful for metagenomics work.
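
    The distinguishing technique, non-parametric and empirically derived read lengths, reduces to resampling from an observed length distribution. This is a generic sketch of that idea, not BEAR's actual code or interface.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Non-parametric read-length emulation: resample lengths directly from an
    # empirical distribution instead of assuming a uniform or normal model.
    # 'observed' stands in for lengths parsed from a real sequencing run.
    observed = np.concatenate([rng.normal(180, 30, 5000),
                               rng.normal(400, 60, 2000)]).astype(int)
    observed = observed[observed > 0]

    lengths, counts = np.unique(observed, return_counts=True)
    simulated = rng.choice(lengths, size=10_000, p=counts / counts.sum())
    print(np.percentile(simulated, [5, 50, 95]))
    ```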

  10. A hydrogeologic framework for characterizing summer streamflow sensitivity to climate warming in the Pacific Northwest, USA

    NASA Astrophysics Data System (ADS)

    Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.

    2014-09-01

    Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.

  11. Tree Guidelines for Inland Empire Communities

    Treesearch

    E.G. McPherson; J.R. Simpson; P.J. Peper; Q. Xiao; D.R. Pittenger; D.R. Hodel

    2001-01-01

    Communities in the Inland Empire region of California contain over 8 million people, or about 25% of the state’s population. The region’s inhabitants derive great benefit from trees because compared to coastal areas, the summers are hotter and air pollution levels are higher. The region’s climate is still mild enough to grow a diverse mix of trees. The Inland Empire’s...

  12. Perspectives on empirical approaches for ocean color remote sensing of chlorophyll in a changing climate.

    PubMed

    Dierssen, Heidi M

    2010-10-05

    Phytoplankton biomass and productivity have been continuously monitored from ocean color satellites for over a decade. Yet, the most widely used empirical approach for estimating chlorophyll a (Chl) from satellites can be in error by a factor of 5 or more. Such variability is due to differences in absorption and backscattering properties of phytoplankton and related concentrations of colored-dissolved organic matter (CDOM) and minerals. The empirical algorithms have built-in assumptions that follow the basic precept of biological oceanography--namely, oligotrophic regions with low phytoplankton biomass are populated with small phytoplankton, whereas more productive regions contain larger bloom-forming phytoplankton. With a changing world ocean, phytoplankton composition may shift in response to altered environmental forcing, and CDOM and mineral concentrations may become uncoupled from phytoplankton stocks, creating further uncertainty and error in the empirical approaches. Hence, caution is warranted when using empirically derived Chl to infer climate-related changes in ocean biology. The Southern Ocean is already experiencing climatic shifts and shows substantial errors in satellite-derived Chl for different phytoplankton assemblages. Accurate global assessments of phytoplankton will require improved technology and modeling, enhanced field observations, and ongoing validation of our "eyes in space."
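
    For context, the widely used empirical approach examined here is an OCx-style maximum-band-ratio polynomial in the blue-to-green reflectance ratio; the sketch below uses placeholder coefficients rather than any operational NASA parameterization.

    ```python
    import numpy as np

    # OCx-style band-ratio algorithm: a 4th-order polynomial in the log10
    # ratio of blue to green remote-sensing reflectance. The coefficients
    # below are hypothetical placeholders.
    coeffs = [0.32, -2.99, 2.72, -1.23, -0.57]  # a0..a4, hypothetical

    def chl_band_ratio(rrs_blue_max, rrs_green):
        r = np.log10(rrs_blue_max / rrs_green)
        return 10 ** sum(a * r**i for i, a in enumerate(coeffs))

    print(chl_band_ratio(0.008, 0.004))  # mg m^-3, illustrative input
    ```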

  13. Acculturation, enculturation, and Asian American college students' mental health and attitudes toward seeking professional psychological help.

    PubMed

    Miller, Matthew J; Yang, Minji; Hui, Kayi; Choi, Na-Yeun; Lim, Robert H

    2011-07-01

    In the present study, we tested a theoretically and empirically derived partially indirect effects acculturation and enculturation model of Asian American college students' mental health and attitudes toward seeking professional psychological help. Latent variable path analysis with 296 self-identified Asian American college students supported the partially indirect effects model and demonstrated the ways in which behavioral acculturation, behavioral enculturation, values acculturation, values enculturation, and acculturation gap family conflict related to mental health and attitudes toward seeking professional psychological help directly and indirectly through acculturative stress. We also tested a generational status moderator hypothesis to determine whether differences in model-implied relationships emerged across U.S.- (n = 185) and foreign-born (n = 107) participants. Consistent with this hypothesis, statistically significant differences in structural coefficients emerged across generational status. Limitations, future directions for research, and counseling implications are discussed.

  14. Specification of ISS Plasma Environment Variability

    NASA Technical Reports Server (NTRS)

    Minow, Joseph I.; Neergaard, Linda F.; Bui, Them H.; Mikatarian, Ronald R.; Barsamian, H.; Koontz, Steven L.

    2004-01-01

    Quantifying spacecraft charging risks and associated hazards for the International Space Station (ISS) requires a plasma environment specification for the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) typically provide only long-term (seasonal) mean Te and Ne values for the low Earth orbit environment. This paper describes a statistical analysis of historical low Earth orbit ionospheric plasma measurements from the AE-C, AE-D, and DE-2 satellites, used to derive a model of the deviations of observed Ne and Te values from IRI-2001 estimates and thereby provide a statistical basis for modeling departures of the plasma environment from the IRI model output. Applying the deviation model to the IRI-2001 output yields a method for estimating extreme environments for ISS spacecraft charging analysis.

  15. Frequency equation for the submicron CMOS ring oscillator using the first order characterization

    NASA Astrophysics Data System (ADS)

    Koithyar, Aravinda; Ramesh, T. K.

    2018-05-01

    By utilizing the first-order behavior of the device, an equation for the operating frequency of the submicron CMOS ring oscillator is presented. A 5-stage ring oscillator with different beta ratios is used as the initial design for computing the operating frequency. Circuit simulation is then performed from 5 stages to 23 stages, with the oscillating frequency ranging from 3.0817 GHz to 0.6705 GHz, respectively. It is noted that the output frequency is inversely proportional to the square of the device length, and that when the beta ratio is set to 2.3, an average difference of 3.64% is observed between the computed and simulated frequency values. As an outcome, the derived equation, with the inclusion of an empirical constant in general, can be utilized for arriving at the output frequency of the ring oscillator circuit.
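
    The first-order relation f = 1/(2·N·t_d) can be checked against the two quoted endpoints: the per-stage delay implied by the 5-stage frequency predicts the 23-stage frequency to within about 0.1%.

    ```python
    # First-order ring-oscillator relation: f = 1 / (2 * N * t_d), where N is
    # the number of stages and t_d the per-stage delay.
    f5, n5 = 3.0817e9, 5                 # quoted 5-stage frequency
    t_d = 1 / (2 * n5 * f5)              # implied per-stage delay (~32.4 ps)
    f23 = 1 / (2 * 23 * t_d)             # ideal prediction for 23 stages
    print(f"t_d = {t_d * 1e12:.1f} ps, predicted f(23) = {f23 / 1e9:.4f} GHz "
          f"vs simulated 0.6705 GHz")
    ```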

  16. When is Chemical Similarity Significant? The Statistical Distribution of Chemical Similarity Scores and Its Extreme Values

    PubMed Central

    Baldi, Pierre

    2010-01-01

    As repositories of chemical molecules continue to expand and become more open, it becomes increasingly important to develop tools to search them efficiently and to assess the statistical significance of chemical similarity scores. Here we develop a general framework for understanding, modeling, predicting, and approximating the distribution of chemical similarity scores and its extreme values in large databases. The framework can be applied to different chemical representations and similarity measures but is demonstrated here using the most common binary fingerprints with the Tanimoto similarity measure. After introducing several probabilistic models of fingerprints, including the Conditional Gaussian Uniform model, we show that the distribution of Tanimoto scores can be approximated by the distribution of the ratio of two correlated Normal random variables associated with the corresponding unions and intersections. This remains true also when the distribution of similarity scores is conditioned on the size of the query molecules in order to derive more fine-grained results and improve chemical retrieval. The corresponding extreme value distributions for the maximum scores are approximated by Weibull distributions. From these various distributions and their analytical forms, Z-scores, E-values, and p-values are derived to assess the significance of similarity scores. In addition, the framework allows one to also predict the value of standard chemical retrieval metrics, such as Sensitivity and Specificity at fixed thresholds, or ROC (Receiver Operating Characteristic) curves at multiple thresholds, and to detect outliers in the form of atypical molecules. Numerous and diverse experiments, carried out in part with large sets of molecules from ChemDB, show remarkable agreement between theory and empirical results. PMID:20540577
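
    A minimal sketch of the quantities involved: Tanimoto scores of a query fingerprint against a random database form an empirical null distribution, from which Z-scores (and, for the maxima, Weibull fits) can be derived. The fingerprints below are random placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def tanimoto(a, b):
        # Tanimoto similarity of binary fingerprints: |A & B| / |A | B|
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union

    # Empirical null distribution of scores against a random database
    query = rng.random(1024) < 0.1                # ~10% bit density
    db = rng.random((5000, 1024)) < 0.1
    scores = np.array([tanimoto(query, fp) for fp in db])

    z = (scores.max() - scores.mean()) / scores.std()
    print(f"mean={scores.mean():.3f}, max={scores.max():.3f}, Z={z:.1f}")
    ```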

  17. Drought Dynamics and Food Security in Ukraine

    NASA Astrophysics Data System (ADS)

    Kussul, N. M.; Kogan, F.; Adamenko, T. I.; Skakun, S. V.; Kravchenko, O. M.; Kryvobok, O. A.; Shelestov, A. Y.; Kolotii, A. V.; Kussul, O. M.; Lavrenyuk, A. M.

    2012-12-01

    In recent years, food security has become a problem of great importance at the global, national and regional scales. Ukraine is one of the most developed agricultural countries and one of the biggest crop producers in the world. According to 2011 statistics provided by the USDA FAS, Ukraine was the 8th largest exporter and 10th largest producer of wheat in the world. Therefore, identifying current and projecting future trends in climate and agriculture parameters is a key element in providing food-security support to policy makers. This paper combines remote sensing, meteorological, and modeling data to investigate the dynamics of extreme events, such as droughts, and their impact on agricultural production in Ukraine. Two main problems are considered in the study: investigation of drought dynamics in Ukraine and its impact on crop production; and investigation of crop growth models for yield and production forecasting, compared with empirical models that use satellite-derived parameters and meteorological observations as predictors. Large-scale weather disasters in Ukraine such as droughts were assessed using the vegetation health index (VHI) derived from satellite data. The method is based on estimating green-canopy stress/no-stress from indices characterizing the moisture and thermal conditions of the vegetation canopy. These conditions are derived from the reflectance/emission in the red, near-infrared and infrared parts of the solar spectrum measured by the AVHRR flown on the NOAA afternoon polar-orbiting satellites since 1981. Droughts were categorized as exceptional, extreme, severe and moderate, and the drought area (DA, in % of the total Ukrainian area) was calculated for each category (see the sketch below). It was found that the maximum DA over the past 20 years was 10% for exceptional droughts, 20% for extreme droughts, 50% for severe droughts, and 80% for moderate droughts. It was also shown that, in general, drought intensity and area did not increase considerably over the past 10 years. The interrelation between the DA of different categories at the oblast level and agricultural production is discussed as well. A comparative study was carried out to assess three approaches to forecasting winter wheat yield in Ukraine at the oblast level: (i) an empirical regression-based model that uses 16-day NDVI composites derived from MODIS at 250 m resolution as a predictor, (ii) an empirical regression-based model that uses meteorological parameters as predictors, and (iii) the Crop Growth Monitoring System (CGMS) adapted for Ukraine, based on the WOFOST crop growth simulation model and meteorological parameters. These three approaches were calibrated on 2000-2009 and 2000-2010 data, and compared by performing forecasts on independent data for 2010 and 2011. For 2010, the best results in terms of root mean square error (RMSE, by oblast, of predicted values from official statistics) were achieved with the CGMS model: 0.3 t/ha. For the NDVI and meteorological models the RMSE values were 0.79 and 0.77 t/ha, respectively. When forecasting winter wheat yield for 2011, the following RMSE values were obtained: 0.58 t/ha for CGMS, 0.56 t/ha for the meteorological model, and 0.62 t/ha for NDVI; in this case the performance of the three approaches was roughly the same. Acknowledgements. This work was supported by the U.S. CRDF Grant "Analysis of climate change & food security based on remote sensing & in situ data sets" (UKB2-2972-KV-09).
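
    A minimal sketch of the drought-area calculation per category; the VHI thresholds below are placeholders, as the abstract does not give the class boundaries:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    vhi = rng.uniform(0, 100, (500, 500))   # synthetic VHI grid over Ukraine

    # Placeholder class boundaries: pixels below each threshold fall in that
    # category or a drier one; DA is the percentage of all pixels affected.
    thresholds = {"exceptional": 6, "extreme": 16, "severe": 26, "moderate": 36}
    da = {name: 100.0 * (vhi < thr).mean() for name, thr in thresholds.items()}
    ```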

  18. Deriving Empirically-Based Design Guidelines for Advanced Learning Technologies that Foster Disciplinary Comprehension

    ERIC Educational Resources Information Center

    Poitras, Eric; Trevors, Gregory

    2012-01-01

    Planning, conducting, and reporting leading-edge research requires professionals who are capable of highly skilled reading. This study reports the development of an empirically informed computer-based learning environment designed to foster the acquisition of reading comprehension strategies that mediate expertise in the social sciences. Empirical…

  19. An original method for characterizing internal waves

    NASA Astrophysics Data System (ADS)

    Casagrande, Gaëlle; Varnas, Alex Warn; Folégot, Thomas; Stéphan, Yann

    This study consisted of the characterization of internal waves south of the Strait of Messina (Italy). The observational data consisted of thermistor-string profiles from the Coastal Ocean Acoustic Changes at High frequencies (COACH06) sea trial. An empirical orthogonal function analysis is applied to the data. The first two spatial empirical modes represent over 99% of the variability, and their corresponding time-dependent expansion coefficients take higher absolute values during internal wave events. In order to check how the expansion coefficients vary during an internal wave event, their time derivatives, called here changing rates, are computed. This shows that each wave of an internal wave train is characterized by a double oscillation of the changing rates. At the front of the wave, both changing rates increase in absolute value with opposite signs, and then decrease to become null at the maximum amplitude of the wave. At the rear of the wave, the changing rates describe another period, again with opposite signs. This double oscillation can be used as a detector of internal waves, but it can also give information on the width of the wave, obtained by measuring the length of the oscillation, information that may sometimes be hard to read straight out of the data. When the changing rates are plotted one versus the other, the resulting scatter diagram takes on a butterfly shape that illustrates this behaviour well.
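
    A sketch of the EOF decomposition and the "changing rates" on a synthetic thermistor-string record, assuming the EOFs are obtained by SVD of the temperature anomaly matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 3600, 721)                     # s, 5 s sampling (assumed)
    # Synthetic T(depth, time): one dominant vertical mode plus noise.
    temp = (np.outer(np.linspace(1, 0, 20), 2 * np.sin(2 * np.pi * t / 900))
            + 0.05 * rng.standard_normal((20, t.size)))

    anomaly = temp - temp.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
    explained = s**2 / np.sum(s**2)            # leading modes carry the variance
    coeffs = np.diag(s) @ vt                   # time-dependent expansion coefficients
    rates = np.gradient(coeffs[:2], t, axis=1) # "changing rates" of modes 1-2
    # Plotting rates[0] vs rates[1] yields the butterfly-shaped scatter diagram.
    ```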

  20. Establishing the kinetics of ballistic-to-diffusive transition using directional statistics

    NASA Astrophysics Data System (ADS)

    Liu, Pai; Heinson, William R.; Sumlin, Benjamin J.; Shen, Kuan-Yu; Chakrabarty, Rajan K.

    2018-04-01

    We establish the kinetics of the ballistic-to-diffusive (BD) transition observed in two-dimensional random walks using directional statistics. Directional correlation is parameterized by the walker's turning-angle distribution, which follows the commonly adopted wrapped Cauchy distribution (WCD). During the BD transition, the concentration factor (ρ) governing the WCD shape is observed to decrease from its initial value. We then analytically derive the relationship between the effective ρ and time, which essentially quantifies the BD transition rate. The prediction of our kinetic expression agrees well with empirical datasets obtained from correlated random walk simulations. We further connect our formulation with the conventionally used scaling relationship between the walker's mean-square displacement and time.
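
    A sketch of the underlying simulation: a correlated random walk whose turning angles follow the wrapped Cauchy distribution (available in scipy.stats as wrapcauchy), with the mean-square displacement exhibiting the ballistic-to-diffusive transition; parameter values are illustrative:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    rho, n_steps, n_walkers = 0.9, 2000, 500   # WCD concentration factor, etc.

    # Turning angles peaked at zero; headings are their cumulative sum.
    turns = stats.wrapcauchy.rvs(rho, size=(n_walkers, n_steps), random_state=rng)
    heading = np.cumsum(turns, axis=1)
    xy = np.stack([np.cos(heading), np.sin(heading)], axis=-1).cumsum(axis=1)

    msd = (xy**2).sum(axis=-1).mean(axis=0)    # ~t^2 at early times, ~t at late times
    ```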

  1. Mapping islands, reefs and shoals in the oceans surrounding Australia

    NASA Technical Reports Server (NTRS)

    Turner, L. G. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. Contours of residual errors were depicted in the east and north directions. Contours were constructed from residuals determined at 22 ground control points. Residuals at two control points were rejected from the contour determination, as their magnitudes were not in keeping with surrounding values. Results obtained so far from depth measurement tests are only tentative. Both successful and unsuccessful correlations were found between the imagery intensities and bathymetric data. Using the results from nine profile comparisons abstracted from a scene over Torres Strait, where the water was generally very clear, an empirical relationship between image intensity (I) and water depth (d) was derived: I = 30 - 0.75d.
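
    Inverting the reported empirical relationship to estimate depth from intensity, purely as an illustration of how the fit would be used:

    ```python
    # d = (30 - I) / 0.75, valid only over the clear-water conditions and
    # intensity range of the original Torres Strait fit.
    def depth_from_intensity(intensity):
        return (30.0 - intensity) / 0.75

    print(depth_from_intensity(18.0))  # 16.0, in the depth units of the fit
    ```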

  2. Remotely sensed MODIS wetland components for assessing the variability of methane emissions in Indian tropical/subtropical wetlands

    NASA Astrophysics Data System (ADS)

    Bansal, Sangeeta; Katyal, Deeksha; Saluja, Ridhi; Chakraborty, Monojit; Garg, J. K.

    2018-02-01

    Temperature and area fluctuations in wetlands greatly influence their various physico-chemical characteristics, nutrient dynamics, rates of biomass generation and decomposition, and floral and faunal composition, which in turn influence methane (CH4) emission rates. In view of this, the present study attempts to up-scale point CH4 flux from the wetlands of Uttar Pradesh (UP) by modifying a two-factor empirical process-based CH4 emission model for tropical wetlands, incorporating MODIS-derived wetland components, viz. wetland areal extent and corresponding temperature factors (Ft). The study further focuses on the utility of the remotely sensed temperature response of CH4 emission in terms of Ft. Ft is generated using MODIS land surface temperature products and provides an important semi-empirical input for up-scaling CH4 emissions in wetlands. Results reveal that annual mean Ft values for UP wetlands vary from 0.69 (2010-2011) to 0.71 (2011-2012). The total estimated area-wise CH4 emission from the wetlands of UP varies from 66.47 Gg yr-1, with a wetland areal extent of 2564.04 km2 and an Ft value of 0.69 in 2010-2011, to 88.39 Gg yr-1, with a wetland areal extent of 2720.16 km2 and an Ft value of 0.71 in 2011-2012. Temporal analysis of the estimated CH4 emissions showed that in the monsoon season the estimates are more sensitive to wetland areal extent, while in the summer season their sensitivity is chiefly controlled by augmented methanogenic activity at high wetland surface temperatures.
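
    A sketch of the up-scaling arithmetic implied by the two-factor model (point flux scaled by the temperature factor and the wetland area); the base flux below is a hypothetical value chosen so the total echoes the reported 66.47 Gg yr-1, and the real model may combine the factors differently:

    ```python
    area_km2 = 2564.04   # MODIS-derived wetland areal extent, 2010-2011
    ft = 0.69            # MODIS-derived temperature factor, 2010-2011
    base_flux = 37.6     # hypothetical CH4 flux, g m^-2 yr^-1 (illustrative)

    emission_g = base_flux * ft * area_km2 * 1e6   # g CH4 per year
    print(emission_g / 1e9, "Gg yr^-1")            # ~66.5
    ```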

  3. Global Analysis of Empirical Relationships Between Annual Climate and Seasonality of NDVI

    NASA Technical Reports Server (NTRS)

    Potter, C. S.

    1997-01-01

    This study describes the use of satellite data to calibrate a new climate-vegetation greenness function for global change studies. We examined statistical relationships between annual climate indexes (temperature, precipitation, and surface radiation) and seasonal attributes of the AVHRR Normalized Difference Vegetation Index (NDVI) time series for the mid-1980s in order to refine our empirical understanding of intraannual patterns and global abiotic controls on natural vegetation dynamics. Multiple linear regression results using global 1° gridded data sets suggest that three climate indexes (growing degree days, annual precipitation total, and an annual moisture index) together can account for 70-80 percent of the variation in the NDVI seasonal extremes (maximum and minimum values) for the calibration year 1984. Inclusion of the same climate index values from the previous year explained no significant additional portion of the global-scale variation in NDVI seasonal extremes. The monthly timing of the NDVI extremes was closely associated with seasonal patterns in maximum and minimum temperature and rainfall, with lag times of 1 to 2 months. We separated well-drained areas from 1° grid cells mapped as having greater than 25 percent inundated coverage for estimation of both the magnitude and timing of seasonal NDVI maximum values. Predicted monthly NDVI, derived from our climate-based regression equations and Fourier smoothing algorithms, shows good agreement with observed NDVI at a series of ecosystem test locations around the globe. Regions in which NDVI seasonal extremes were not accurately predicted are mainly high-latitude ecosystems and other remote locations where climate station data are sparse.
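
    A minimal sketch of the calibration step, regressing NDVI seasonal maxima on the three climate indexes by ordinary least squares; data are synthetic and the coefficients illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 2000   # synthetic 1-degree grid cells
    gdd, precip, moist = rng.normal(size=(3, n))   # standardized climate indexes
    ndvi_max = (0.5 + 0.2 * gdd + 0.15 * precip + 0.1 * moist
                + 0.1 * rng.normal(size=n))

    X = np.column_stack([np.ones(n), gdd, precip, moist])
    beta, *_ = np.linalg.lstsq(X, ndvi_max, rcond=None)
    resid = ndvi_max - X @ beta
    r2 = 1 - np.sum(resid**2) / np.sum((ndvi_max - ndvi_max.mean())**2)
    ```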

  4. Data-driven regions of interest for longitudinal change in frontotemporal lobar degeneration.

    PubMed

    Pankov, Aleksandr; Binney, Richard J; Staffaroni, Adam M; Kornak, John; Attygalle, Suneth; Schuff, Norbert; Weiner, Michael W; Kramer, Joel H; Dickerson, Bradford C; Miller, Bruce L; Rosen, Howard J

    2016-01-01

    Current research is investigating the potential utility of longitudinal measurement of brain structure as a marker of drug effect in clinical trials for neurodegenerative disease. Recent studies in Alzheimer's disease (AD) have shown that measurement of change in empirically derived regions of interest (ROIs) allows more reliable measurement of change over time compared with regions chosen a priori based on known effects of AD on brain anatomy. Frontotemporal lobar degeneration (FTLD) is a devastating neurodegenerative disorder for which there are no approved treatments. The goal of this study was to identify an empirical ROI that maximizes the effect size for the annual rate of brain atrophy in FTLD compared with healthy age-matched controls, and to estimate the effect size and associated power estimates for a theoretical study that would use change within this ROI as an outcome measure. Eighty-six patients with FTLD were studied, including 43 who were imaged twice at 1.5 T and 43 at 3 T, along with 105 controls (37 imaged at 1.5 T and 67 at 3 T). Empirically derived maps of change were generated separately for each field strength and included the bilateral insula; dorsolateral, medial and orbital frontal regions; basal ganglia; and lateral and inferior temporal regions. The extent of the regions included in the 3 T map was larger than that in the 1.5 T map. At both field strengths, the effect sizes for imaging were larger than for any clinical measures. At 3 T, the effect size for longitudinal change measured within the empirically derived ROI was larger than the effect sizes derived from frontal lobe, temporal lobe or whole brain ROIs. The effect size derived from the data-driven 1.5 T map was smaller than at 3 T, and was not larger than the effect size derived from a priori ROIs. It was estimated that measurement of longitudinal change using 1.5 T MR systems requires approximately a 3-fold increase in sample size to obtain effect sizes equivalent to those seen at 3 T. While the results should be confirmed in additional datasets, they indicate that empirically derived ROIs can reduce the number of subjects needed for a longitudinal study of drug effects in FTLD compared with a priori ROIs. Field strength may have a significant impact on the utility of imaging for measuring longitudinal change.

  5. Flood Change Assessment and Attribution in Austrian alpine Basins

    NASA Astrophysics Data System (ADS)

    Claps, Pierluigi; Allamano, Paola; Como, Anastasia; Viglione, Alberto

    2016-04-01

    The present paper aims to investigate the sensitivity of flood peaks to global warming in Austrian alpine basins. A group of 97 Austrian watersheds, with areas ranging from 14 to 6000 km2 and average elevations ranging from 1000 to 2900 m a.s.l., has been considered. Annual maximum floods are available for the basins from 1890 to 2007 with two densities of observation: in a first period, until 1950, an average of 42 contemporary flood-peak records is available; from 1951 to 2007 the density of observation increases to an average of 85 contemporary peaks. This information is very important with reference to the statistical tool used for the empirical assessment of change over time, namely linear quantile regression. Application of this tool to the data set reveals trends in extreme events, confirmed by statistical testing, for the 0.75 and 0.95 empirical quantiles. All applications are made with specific discharge (discharge/area) values. Similarly to what was done in a previous approach, multiple quantile regressions have also been applied, confirming the presence of trends even when accounting for the possible interference of specific discharge with morphoclimatic parameters (i.e., mean elevation and catchment area). Application of the geomorphoclimatic model by Allamano et al. (2009) makes it possible to assess to what extent the empirically observed increases in air temperature and annual rainfall can justify the attribution of the change detected by the empirical statistical tools. A comparison with data from Swiss alpine basins treated in a previous paper is finally undertaken.
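
    A sketch of the trend test, fitting linear quantile regressions of specific discharge against time for the 0.75 and 0.95 quantiles using statsmodels; the synthetic record below merely stands in for the 97-basin data set:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.regression.quantile_regression import QuantReg

    rng = np.random.default_rng(6)
    years = np.arange(1890, 2008)
    # Synthetic specific annual-maximum discharges with a mild upward trend.
    q_spec = rng.gamma(2.0, 0.05, years.size) * (1 + 0.002 * (years - 1890))

    X = sm.add_constant(years.astype(float))
    for tau in (0.75, 0.95):
        fit = QuantReg(q_spec, X).fit(q=tau)
        print(tau, fit.params[1])   # slope: trend in the tau-quantile per year
    ```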

  6. A Review of Empirical Analyses of Disinvestment Initiatives.

    PubMed

    Chambers, James D; Salem, Mark N; D'Cruz, Brittany N; Subedi, Prasun; Kamal-Bahl, Sachin J; Neumann, Peter J

    Disinvesting in low-value health care services provides opportunities for investment in higher-value care and thus an increase in health care efficiency. Our objectives were to identify international experience with disinvestment initiatives and to review empirical analyses of those initiatives. We performed a literature search using the PubMed database to identify international experience with disinvestment initiatives. We also reviewed empirical analyses of disinvestment initiatives. We identified 26 unique disinvestment initiatives implemented across 11 countries. Nineteen addressed multiple intervention types, six addressed only drugs, and one addressed only devices. We reviewed 18 empirical analyses of disinvestment initiatives: 7 reported that the initiative was successful, 8 reported that the initiative was unsuccessful, and 3 reported mixed findings; that is, the study considered multiple services and reported a decrease in the use of some but not others. Thirty-seven low-value services were evaluated across the 18 empirical analyses; for 14 of these (38%), the disinvestment initiative led to a decline in use. Six of the seven studies that reported the disinvestment initiative to be successful included an attempt to promote the initiative among participating clinicians. The success of disinvestment initiatives has been mixed, with fewer than half of the identified empirical studies reporting that use of the low-value service was reduced. Our findings suggest that promotion among clinicians is a key component of a successful disinvestment initiative. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. PolyWaTT: A polynomial water travel time estimator based on Derivative Dynamic Time Warping and Perceptually Important Points

    NASA Astrophysics Data System (ADS)

    Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano

    2018-03-01

    Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations among flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct: the locations in which the equations are used should have characteristics comparable to those of the locations from which the equations were derived. To overcome this barrier, in this work we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data from the location itself, without the need to adapt or use empirical formulas from other locations. The proposal only uses one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream on the same river). The recorded data from each location generate two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using data from timing parameters, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as the Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than that of the empirical formulas.
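
    A minimal dynamic time warping alignment between an upstream and a downstream series, from which a travel-time lag can be read off; the paper's actual method uses the derivative variant (DDTW) together with PIP reduction, which this sketch omits:

    ```python
    import numpy as np

    def dtw_path(x, y):
        """Plain DTW: optimal warping path between two 1-D series."""
        n, m = len(x), len(y)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(x[i - 1] - y[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                     cost[i - 1, j - 1])
        # Backtrack the optimal path from the end.
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
            i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
        return path[::-1]

    # Travel time as the mean index offset between matched samples
    # (multiplied by the sampling interval in a real application).
    t = np.linspace(0, 6, 200)
    up = np.sin(t)
    down = np.sin(t - 0.45)          # downstream lags by ~15 samples
    lags = [j - i for i, j in dtw_path(up, down)]
    print(np.mean(lags))
    ```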

  8. Empirical algorithms for ocean optics parameters

    NASA Astrophysics Data System (ADS)

    Smart, Jeffrey H.

    2007-06-01

    As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.
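
    As an illustration of such an empirical algorithm, a linear fit predicting b from c on synthetic profiles; the linear form and the coefficients are placeholders, not the WOOD algorithms themselves:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    c = rng.uniform(0.1, 1.5, 300)     # beam attenuation coefficient, m^-1
    # Synthetic "truth": b loosely proportional to c with scatter.
    b = 0.85 * c - 0.03 + 0.02 * rng.standard_normal(c.size)

    slope, intercept = np.polyfit(c, b, 1)   # empirical algorithm: b = slope*c + intercept
    b_hat = slope * c + intercept
    pct_err = 100 * np.median(np.abs(b_hat - b) / b)   # ~10% in the study
    ```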

  9. An empirical inferential method of estimating nitrogen deposition to Mediterranean-type ecosystems: the San Bernardino Mountains case study.

    PubMed

    Bytnerowicz, A; Johnson, R F; Zhang, L; Jenerette, G D; Fenn, M E; Schilling, S L; Gonzalez-Fernandez, I

    2015-08-01

    The empirical inferential method (EIM) allows for spatially and temporally dense estimates of atmospheric nitrogen (N) deposition to Mediterranean ecosystems. This method, set within a GIS platform, is based on ambient concentrations of NH3, NO, NO2 and HNO3; surface conductances of NH4+ and NO3-; stomatal conductances of NH3, NO, NO2 and HNO3; and satellite-derived LAI. Estimated deposition is based on data collected during 2002-2006 in the San Bernardino Mountains (SBM) of southern California. Approximately two-thirds of the dry N deposition was to plant surfaces and one-third occurred as stomatal uptake. Summer-season N deposition ranged from <3 kg ha-1 in the eastern SBM to ~60 kg ha-1 in the western SBM near the Los Angeles Basin, and compared well with the throughfall and big-leaf micrometeorological inferential methods. Extrapolating the summertime N deposition estimates to annual values showed large areas of the SBM exceeding critical loads for nutrient N in chaparral and mixed conifer forests. Published by Elsevier Ltd.
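
    A sketch of the inferential bookkeeping (flux as concentration times the sum of surface and stomatal conductances, summed over the gas-phase species); all concentrations and conductances below are invented for illustration:

    ```python
    # Ambient concentrations (ug m^-3) and conductances (cm s^-1), per species.
    conc = {"NH3": 4.0, "NO": 1.0, "NO2": 6.0, "HNO3": 3.0}
    g_surface = {"NH3": 0.6, "NO": 0.05, "NO2": 0.2, "HNO3": 1.2}
    g_stomatal = {"NH3": 0.3, "NO": 0.02, "NO2": 0.15, "HNO3": 0.4}

    # 0.01 converts cm s^-1 to m s^-1, giving flux in ug m^-2 s^-1.
    flux = sum(conc[g] * 0.01 * (g_surface[g] + g_stomatal[g]) for g in conc)
    ```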

  10. Statistical Mechanical Model for Adsorption Coupled with SAFT-VR Mie Equation of State.

    PubMed

    Franco, Luís F M; Economou, Ioannis G; Castier, Marcelo

    2017-10-24

    We extend the SAFT-VR Mie equation of state to calculate adsorption isotherms by considering explicitly the residual energy due to the confinement effect. Assuming a square-well potential for the fluid-solid interactions, the structure imposed by the fluid-solid interface is calculated using two different approaches: an empirical expression proposed by Travalloni et al. (Chem. Eng. Sci. 65, 3088-3099, 2010), and a new theoretical expression derived by applying the mean value theorem. Adopting the SAFT-VR Mie equation of state (Lafitte et al., J. Chem. Phys. 139, 154504, 2013) to describe the fluid-fluid interactions, and solving the phase equilibrium criteria, we calculate adsorption isotherms for light hydrocarbons adsorbed in a carbon molecular sieve and for carbon dioxide, nitrogen, and water adsorbed in a zeolite. Good results are obtained from the model using either approach. Nonetheless, the theoretical expression seems to correlate the experimental data better than the empirical one, possibly implying that a more reliable description of the structure ensures a better description of the thermodynamic behavior.

  11. Acoustic properties of reticulated plastic foams

    NASA Astrophysics Data System (ADS)

    Cummings, A.; Beadle, S. P.

    1994-08-01

    Some general aspects of sound propagation in rigid porous media are discussed, particularly with reference to the use of a single dimensionless frequency parameter, whose role, in the light of the possibility of varying gas properties, is examined. Steady flow resistance coefficients of porous media are also considered, and simple scaling relationships between these coefficients and 'system parameters' are derived. The results of a series of measurements of the bulk acoustic properties of 12 geometrically similar, fully reticulated polyurethane foams are presented, and empirical curve-fitting coefficients are found; the curve-fitting formulae are valid within the experimental range of values of the frequency parameter. Comparison is made between the measured data and an alternative, fairly recently published, semi-empirical set of formulae. Measurements of the steady flow-resistive coefficients are also given, and both the acoustical and flow-resistive data are shown to be consistent with theoretical ideas. The acoustical and flow-resistive data should be of use in predicting the bulk acoustic properties of open-celled foams of types similar to those used in the experimental tests.

  12. Geometric Mechanics for Continuous Swimmers on Granular Material

    NASA Astrophysics Data System (ADS)

    Dai, Jin; Faraji, Hossein; Schiebel, Perrin; Gong, Chaohui; Travers, Matthew; Hatton, Ross; Goldman, Daniel; Choset, Howie; Biorobotics Lab Collaboration; Laboratory for Robotics and Applied Mechanics (LRAM) Collaboration; Complex Rheology and Biomechanics Lab Collaboration

    Animal experiments have shown that Chionactis occipitalis (N = 10), undulating effectively on granular substrates, exhibits a particular set of waveforms that can be approximated by a sinusoidal variation in curvature, i.e., a serpenoid wave. Furthermore, all snakes tested used a narrow subset of the available waveform parameters, measured as a relative curvature of 5.0 ± 0.3 and a number of waves on the body of 1.8 ± 0.1. We hypothesize that a serpenoid wave with this particular choice of parameters offers a distinct benefit for locomotion on granular material. To test this hypothesis, we used a physical model (a snake robot) to empirically explore the space of serpenoid motions, which is linearly spanned by two independent continuous serpenoid basis functions. The empirically derived height function map, a geometric mechanics tool for analyzing movements of cyclic gaits, showed that displacement per gait cycle increases with amplitude at small amplitudes, but reaches a peak value of 0.55 body lengths at a relative curvature of 6.0. This work signifies that, with shape basis functions, geometric mechanics tools can be extended to continuous swimmers.
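
    A sketch of a serpenoid body shape, integrating a sinusoidal curvature profile to a backbone curve; the mapping from "relative curvature" to the curvature amplitude is assumed here, so only the qualitative form is meaningful:

    ```python
    import numpy as np

    n_pts, waves, rel_curv = 200, 1.8, 5.0
    s = np.linspace(0.0, 1.0, n_pts)   # arclength along the body (body lengths)
    ds = s[1] - s[0]
    amp = rel_curv * waves             # assumed curvature-amplitude scaling

    def backbone(t):
        """Body shape at phase t: curvature -> tangent angle -> x, y."""
        kappa = amp * np.sin(2 * np.pi * (waves * s - t))
        theta = np.cumsum(kappa) * ds
        return np.cumsum(np.cos(theta)) * ds, np.cumsum(np.sin(theta)) * ds

    x, y = backbone(0.0)
    ```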

  13. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    NASA Astrophysics Data System (ADS)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to numerous planning and executive challenges, underground excavation in urban areas is always followed by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
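
    For reference, the empirical (Peck) approach is commonly written as a Gaussian settlement trough; a sketch, with the trough-width parameter i assumed, since the abstract does not report it:

    ```python
    import numpy as np

    # Peck's trough: S(x) = S_max * exp(-x^2 / (2 i^2)), x measured from the
    # tunnel axis. S_max echoes the 1.86 cm empirical estimate above.
    s_max_cm, i_m = 1.86, 12.0   # i_m is an illustrative assumption
    x = np.linspace(-50, 50, 201)
    settlement_cm = s_max_cm * np.exp(-x**2 / (2 * i_m**2))
    ```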

  14. An empirical description of the dispersion of 5th and 95th percentiles in worldwide anthropometric data applied to estimating accommodation with unknown correlation values.

    PubMed

    Albin, Thomas J; Vink, Peter

    2015-01-01

    Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if they are non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP, to determine whether MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data sets drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine whether they were consistent with a Gaussian distribution, and empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, by MCM with Gaussian-distributed data, or by MCM with empirically distributed data. The 5th and 95th percentile values of a data set of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of the 5th and 95th percentile values. Anthropometric data are not Gaussian distributed, and the MCM method is more accurate than adding or subtracting percentiles.
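
    A sketch contrasting the two combination rules for an additive case, using Gaussian multipliers and an assumed median correlation; the paper's contribution is precisely to replace the Gaussian multipliers with empirically derived ones:

    ```python
    import numpy as np

    mu = np.array([60.0, 25.0])   # element means, cm (illustrative)
    sd = np.array([3.0, 2.0])     # element standard deviations
    r = 0.4                       # assumed median correlation between elements
    z95 = 1.645                   # Gaussian 95th-percentile multiplier

    # MCM-style: combine variances with the correlation term, then take the
    # percentile of the combined dimension.
    sd_sum = np.sqrt(sd[0]**2 + sd[1]**2 + 2 * r * sd[0] * sd[1])
    p95_mcm = mu.sum() + z95 * sd_sum

    # CP: add the two 95th percentiles directly (overestimates when r < 1).
    p95_cp = (mu[0] + z95 * sd[0]) + (mu[1] + z95 * sd[1])
    ```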

  15. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  16. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical.

    PubMed

    Baaquie, Belal E; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. Seth Carpenter; Suzette J. Payne; Annette L. Schafer

    We recognize a discrepancy in magnitudes estimated for several Basin and Range, U.S.A. faults. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths (Lseg), where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements (D) along these same segments. For self-similarity, empirical relationships (e.g., Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e., M ~ Lseg should equal M ~ D). Typically, the empirical relationships are derived from historical earthquake data, and the parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption, grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984), is equating Lseg with surface rupture length (SRL). Many large historical events yielded secondary and/or sympathetic faulting (e.g., the 1983 Borah Peak, Idaho earthquake), which is included in the measurement of SRL and used to derive the empirical relationships. Therefore, calculating magnitude from the M ~ SRL relationship using Lseg as SRL leads to an underestimation of magnitude and to the M ~ Lseg and M ~ D discrepancy. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude (Mw) and length, where length is Lseg instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more. The preliminary Mw ~ Lseg results are strikingly consistent with Mw ~ D calculations using paleoearthquake data for the Wasatch, Lost River, and Lemhi faults, demonstrating self-similarity and implying that the Mw ~ Lseg relationship should supplant M ~ SRL relationships currently employed in seismic hazard analyses. The relationship will permit reliable use of Lseg data from field investigations and proper use and weighting of multiple-segment-rupture scenarios in seismic hazard analyses, and eliminate the need to reconcile the Mw ~ SRL and Mw ~ D differences in a multiple-parameter relationship for segmented faults.
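
    A sketch of fitting a magnitude-length relationship of this general form, Mw = a + b log10(Lseg); the seven-event data set is not reproduced in the abstract, so the values below are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    lseg_km = np.array([20, 25, 30, 35, 40, 55, 70], dtype=float)
    # Synthetic magnitudes following a Wells-and-Coppersmith-like form.
    mw = 5.1 + 1.2 * np.log10(lseg_km) + 0.05 * rng.standard_normal(7)

    b, a = np.polyfit(np.log10(lseg_km), mw, 1)   # slope first, intercept second
    print(f"Mw ~ {a:.2f} + {b:.2f} log10(Lseg)")
    ```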

  18. How We Value Contemporary Poetry: An Empirical Inquiry

    ERIC Educational Resources Information Center

    Broad, Bob; Theune, Michael

    2010-01-01

    Although evaluation is at the core of many of the practices associated with poetry--including teaching, editing, selecting, judging, and even writing--and although there have been involved discussions of the assessment of verse, there has been no empirical investigation of the specific values which, one supposes, lie at the heart of such…

  19. Reference Values for Body Composition and Anthropometric Measurements in Athletes

    PubMed Central

    Santos, Diana A.; Dawson, John A.; Matias, Catarina N.; Rocha, Paulo M.; Minderico, Cláudia S.; Allison, David B.; Sardinha, Luís B.; Silva, Analiza M.

    2014-01-01

    Background: Despite the importance of body composition in athletes, reference sex- and sport-specific body composition data are lacking. We aim to develop reference values for body composition and anthropometric measurements in athletes. Methods: Body weight and height were measured in 898 athletes (264 female, 634 male), anthropometric variables were assessed in 798 athletes (240 female and 558 male), and body composition was assessed with dual-energy X-ray absorptiometry (DXA) in 481 athletes (142 female and 339 male). A total of 21 different sports were represented. Reference percentiles (5th, 25th, 50th, 75th, and 95th) were calculated for each measured value, stratified by sex and sport. Because sample sizes within a sport were often very low for some outcomes, the percentiles were estimated using a parametric, empirical Bayesian framework that allowed sharing of information across sports. Results: We derived sex- and sport-specific reference percentiles for the following DXA outcomes: total (whole body scan) and regional (subtotal, trunk, and appendicular) bone mineral content, bone mineral density, absolute and percentage fat mass, fat-free mass, and lean soft tissue. Additionally, we derived reference percentiles for height-normalized indexes by dividing fat mass, fat-free mass, and appendicular lean soft tissue by height squared. We also derived sex- and sport-specific reference percentiles for the following anthropometry outcomes: weight, height, body mass index, sum of skinfold thicknesses (7 skinfolds, appendicular skinfolds, trunk skinfolds, arm skinfolds, and leg skinfolds), circumferences (hip, arm, midthigh, calf, and abdominal circumferences), and muscle circumferences (arm, thigh, and calf muscle circumferences). Conclusions: These reference percentiles will be a helpful tool for sports professionals, in both clinical and field settings, for body composition assessment in athletes. PMID:24830292

  20. Modeling NAPL dissolution from pendular rings in idealized porous media

    NASA Astrophysics Data System (ADS)

    Huang, Junqi; Christ, John A.; Goltz, Mark N.; Demond, Avery H.

    2015-10-01

    The dissolution rate of nonaqueous phase liquid (NAPL) often governs the remediation time frame at subsurface hazardous waste sites. Most formulations for estimating this rate are empirical and assume that the NAPL is the nonwetting fluid. However, field evidence suggests that some waste sites might be organic wet. Thus, formulations that assume the NAPL is nonwetting may be inappropriate for estimating the rates of NAPL dissolution. An exact solution to the Young-Laplace equation, assuming NAPL resides as pendular rings around the contact points of porous media idealized as spherical particles in a hexagonal close packing arrangement, is presented in this work to provide a theoretical prediction for NAPL-water interfacial area. This analytic expression for interfacial area is then coupled with an exact solution to the advection-diffusion equation in a capillary tube assuming Hagen-Poiseuille flow to provide a theoretical means of calculating the mass transfer rate coefficient for dissolution at the NAPL-water interface in an organic-wet system. A comparison of the predictions from this theoretical model with predictions from empirically derived formulations from the literature for water-wet systems showed a consistent range of values for the mass transfer rate coefficient, despite the significant differences in model foundations (water wetting versus NAPL wetting, theoretical versus empirical). This finding implies that, under these system conditions, the important parameter is interfacial area, with a lesser role played by NAPL configuration.

  1. Bayesian methods to estimate urban growth potential

    USGS Publications Warehouse

    Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.

    2017-01-01

    Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.

  2. Empirical Development of an MMPI Subscale for the Assessment of Combat-Related Posttraumatic Stress Disorder.

    ERIC Educational Resources Information Center

    Keane, Terence M.; And Others

    1984-01-01

    Developed empirically based criteria for use of the Minnesota Multiphasic Personality Inventory (MMPI) to aid in the assessment and diagnosis of Posttraumatic Stress Disorder (PTSD) in patients (N=200). Analysis based on an empirically derived decision rule correctly classified 74 percent of the patients in each group. (LLL)

  3. An Empirical Typology of Narcissism and Mental Health in Late Adolescence

    ERIC Educational Resources Information Center

    Lapsley, Daniel K.; Aalsma, Matthew C.

    2006-01-01

    A two-step cluster analytic strategy was used in two studies to identify an empirically derived typology of narcissism in late adolescence. In Study 1, late adolescents (N=204) responded to the profile of narcissistic dispositions and measures of grandiosity ("superiority") and idealization ("goal instability") inspired by Kohut's theory,…

  4. Untangling the Evidence: Introducing an Empirical Model for Evidence-Based Library and Information Practice

    ERIC Educational Resources Information Center

    Gillespie, Ann

    2014-01-01

    Introduction: This research is the first to investigate the experiences of teacher-librarians as evidence-based practice. An empirically derived model is presented in this paper. Method: This qualitative study utilised the expanded critical incident approach, and investigated the real-life experiences of fifteen Australian teacher-librarians,…

  5. Evaluating the intersection of a regional wildlife connectivity network with highways

    Treesearch

    Samuel A. Cushman; Jesse S. Lewis; Erin L. Landguth

    2013-01-01

    Reliable predictions of regional-scale population connectivity are needed to prioritize conservation actions. However, there have been few examples of regional connectivity models that are empirically derived and validated. The central goals of this paper were to (1) evaluate the effectiveness of factorial least cost path corridor mapping on an empirical...

  6. Improving the Accuracy of Urban Environmental Quality Assessment Using Geographically-Weighted Regression Techniques.

    PubMed

    Faisal, Kamil; Shaker, Ahmed

    2017-03-07

    Urban Environmental Quality (UEQ) can be treated as a generic indicator that objectively represents the physical and socio-economic condition of the urban and built environment. The value of UEQ illustrates a sense of satisfaction to its population through assessing different environmental, urban and socio-economic parameters. This paper elucidates the use of the Geographic Information System (GIS), Principal Component Analysis (PCA) and Geographically-Weighted Regression (GWR) techniques to integrate various parameters and estimate the UEQ of two major cities in Ontario, Canada. Remote sensing, GIS and census data were first obtained to derive various environmental, urban and socio-economic parameters. The aforementioned techniques were used to integrate all of these environmental, urban and socio-economic parameters. Three key indicators, including family income, higher level of education and land value, were used as a reference to validate the outcomes derived from the integration techniques. The results were evaluated by assessing the relationship between the extracted UEQ results and the reference layers. Initial findings showed that the GWR with the spatial lag model represents an improved precision and accuracy by up to 20% with respect to those derived by using GIS overlay and PCA techniques for the City of Toronto and the City of Ottawa. The findings of the research can help the authorities and decision makers to understand the empirical relationships among environmental factors, urban morphology and real estate and decide for more environmental justice.

  7. Improving the Accuracy of Urban Environmental Quality Assessment Using Geographically-Weighted Regression Techniques

    PubMed Central

    Faisal, Kamil; Shaker, Ahmed

    2017-01-01

    Urban Environmental Quality (UEQ) can be treated as a generic indicator that objectively represents the physical and socio-economic condition of the urban and built environment. The value of UEQ illustrates a sense of satisfaction to its population through assessing different environmental, urban and socio-economic parameters. This paper elucidates the use of the Geographic Information System (GIS), Principal Component Analysis (PCA) and Geographically-Weighted Regression (GWR) techniques to integrate various parameters and estimate the UEQ of two major cities in Ontario, Canada. Remote sensing, GIS and census data were first obtained to derive various environmental, urban and socio-economic parameters. The aforementioned techniques were used to integrate all of these environmental, urban and socio-economic parameters. Three key indicators, including family income, higher level of education and land value, were used as a reference to validate the outcomes derived from the integration techniques. The results were evaluated by assessing the relationship between the extracted UEQ results and the reference layers. Initial findings showed that the GWR with the spatial lag model represents an improved precision and accuracy by up to 20% with respect to those derived by using GIS overlay and PCA techniques for the City of Toronto and the City of Ottawa. The findings of the research can help the authorities and decision makers to understand the empirical relationships among environmental factors, urban morphology and real estate and decide for more environmental justice. PMID:28272334

  8. Bias corrections of GOSAT SWIR XCO2 and XCH4 with TCCON data and their evaluation using aircraft measurement data

    NASA Astrophysics Data System (ADS)

    Inoue, Makoto; Morino, Isamu; Uchino, Osamu; Nakatsuru, Takahiro; Yoshida, Yukio; Yokota, Tatsuya; Wunch, Debra; Wennberg, Paul O.; Roehl, Coleen M.; Griffith, David W. T.; Velazco, Voltaire A.; Deutscher, Nicholas M.; Warneke, Thorsten; Notholt, Justus; Robinson, John; Sherlock, Vanessa; Hase, Frank; Blumenstock, Thomas; Rettinger, Markus; Sussmann, Ralf; Kyrö, Esko; Kivi, Rigel; Shiomi, Kei; Kawakami, Shuji; De Mazière, Martine; Arnold, Sabrina G.; Feist, Dietrich G.; Barrow, Erica A.; Barney, James; Dubey, Manvendra; Schneider, Matthias; Iraci, Laura T.; Podolske, James R.; Hillyard, Patrick W.; Machida, Toshinobu; Sawa, Yousuke; Tsuboi, Kazuhiro; Matsueda, Hidekazu; Sweeney, Colm; Tans, Pieter P.; Andrews, Arlyn E.; Biraud, Sebastien C.; Fukuyama, Yukio; Pittman, Jasna V.; Kort, Eric A.; Tanaka, Tomoaki

    2016-08-01

    We describe a method for removing systematic biases of column-averaged dry air mole fractions of CO2 (XCO2) and CH4 (XCH4) derived from short-wavelength infrared (SWIR) spectra of the Greenhouse gases Observing SATellite (GOSAT). We conduct correlation analyses between the GOSAT biases and simultaneously retrieved auxiliary parameters. We use these correlations to bias correct the GOSAT data, removing these spurious correlations. Data from the Total Carbon Column Observing Network (TCCON) were used as reference values for this regression analysis. To evaluate the effectiveness of this correction method, the uncorrected/corrected GOSAT data were compared to independent XCO2 and XCH4 data derived from aircraft measurements taken for the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the Japan Meteorological Agency (JMA), the HIAPER Pole-to-Pole observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. These comparisons demonstrate that the empirically derived bias correction improves the agreement between GOSAT XCO2/XCH4 and the aircraft data. Finally, we present spatial distributions and temporal variations of the derived GOSAT biases.
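
    A minimal sketch of the bias-correction scheme: regress the GOSAT-minus-reference differences on auxiliary retrieval parameters and subtract the fitted bias; the three auxiliary parameters and all numbers are placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 500
    aux = rng.standard_normal((n, 3))   # e.g., albedo, airmass, aerosol (assumed)
    bias_true = aux @ np.array([0.4, -0.2, 0.1])
    xco2_gosat = 400.0 + bias_true + 0.3 * rng.standard_normal(n)
    xco2_ref = np.full(n, 400.0)        # stand-in for coincident TCCON values

    diff = xco2_gosat - xco2_ref
    A = np.column_stack([np.ones(n), aux])
    coef, *_ = np.linalg.lstsq(A, diff, rcond=None)
    xco2_corrected = xco2_gosat - (coef[0] + aux @ coef[1:])
    ```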

  9. On the derivation of empirical limits on the helium abundance in coronal holes below 1.5 solar radius

    NASA Technical Reports Server (NTRS)

    Habbal, Shadia Rifai; Esser, Ruth

    1994-01-01

    We present a simple technique describing how limits on the helium abundance, alpha, defined as the ratio of helium to proton number density, can be inferred from measurements of the electron density and temperature below 1.5 solar radius. As an illustration, we apply this technique to two different data sets: emission-line intensities in the extreme ultraviolet (EUV) and white-light observations, both measured in polar coronal holes. For the EUV data, the temperature gradient is derived from line intensity ratios, and the density gradient is replaced by the gradient of the line intensity. The lower limit on alpha derived from these data is 0.2-0.3 at 1 solar radius and drops very sharply to interplanetary values of a few percent below 1.06 solar radius. The white-light observations yield density gradients in the inner corona beyond 1.25 solar radius but do not have corresponding temperature gradients. In this case we consider an isothermal atmosphere, and derive an upper limit of 0.2 for alpha. These examples are used to illustrate how this technique could be applicable to the more extensive data to be obtained with the upcoming SOHO mission. Although only ranges on alpha can be derived, the application of the technique to data currently available merely points to the fact that alpha can be significantly large in the inner corona.

  10. Number of independent parameters in the potentiometric titration of humic substances.

    PubMed

    Lenoir, Thomas; Manceau, Alain

    2010-03-16

    With the advent of high-precision automatic titrators operating in pH stat mode, measuring the mass balance of protons in solid-solution mixtures against the pH of natural and synthetic polyelectrolytes is now routine. However, titration curves of complex molecules typically lack obvious inflection points, which complicates their analysis despite the high-precision measurements. The calculation of site densities and median proton affinity constants (pK) from such data can lead to considerable covariance between fit parameters. Knowing the number of independent parameters that can be freely varied during the least-squares minimization of a model fit to titration data is necessary to improve the model's applicability. This number was calculated for natural organic matter by applying principal component analysis (PCA) to a reference data set of 47 independent titration curves from fulvic and humic acids measured at I = 0.1 M. The complete data set was reconstructed statistically from pH 3.5 to 9.8 with only six parameters, compared to the seven or eight generally adjusted with common semi-empirical speciation models for organic matter, which explains the correlations that occur with the higher number of parameters. Existing proton-binding models are not necessarily overparametrized, but instead titration data lack the sensitivity needed to quantify the full set of binding properties of humic materials. Model-independent conditional pK values can be obtained directly from the derivative of titration data, and this approach is the most conservative. The apparent proton-binding constants of the 23 fulvic acids (FA) and 24 humic acids (HA) derived from a high-quality polynomial parametrization of the data set are pK(H,COOH)(FA) = 4.18 ± 0.21, pK(H,Ph-OH)(FA) = 9.29 ± 0.33, pK(H,COOH)(HA) = 4.49 ± 0.18, and pK(H,Ph-OH)(HA) = 9.29 ± 0.38. Their values at other ionic strengths are more reliably calculated with the empirical Davies equation than with any existing model fit.
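
    A sketch of the PCA step, counting the components needed to reconstruct a stack of titration curves within noise; the synthetic curves below are simple Henderson-Hasselbalch mixtures standing in for the 47-curve reference set:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    ph = np.linspace(3.5, 9.8, 100)
    # Fraction deprotonated per site type: 1 / (1 + 10^(pK - pH)).
    basis = np.stack([1 / (1 + 10**(pk - ph)) for pk in (4.2, 6.0, 9.3)])
    curves = (rng.uniform(0.5, 2.0, (47, 3)) @ basis
              + 0.01 * rng.standard_normal((47, 100)))

    centered = curves - curves.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    n_components = int(np.searchsorted(explained, 0.999) + 1)
    ```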

  11. Trapped Proton Environment in Medium-Earth Orbit (2000-2010)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yue; Friedel, Reinhard Hans; Kippen, Richard Marc

    This report describes the method used to derive fluxes of the trapped proton belt along the GPS orbit (i.e., a Medium-Earth Orbit) during 2000-2010, a period almost covering a solar cycle. This method utilizes a newly developed empirical proton radiation-belt model, with the model output scaled by GPS in-situ measurements, to generate proton fluxes that cover a wide range of energies (50 keV - 6 MeV) and preserve temporal features as well. The new proton radiation-belt model is developed based upon CEPPAD proton measurements from the Polar mission (1996-2007). Compared to the de facto standard empirical model AP8, this model is not only based upon a new data set representative of the proton belt during the same period covered by GPS, but can also provide statistical information on flux values, such as worst cases and occurrence percentiles, instead of solely the mean values. The comparison shows quite different results from the two models and suggests that the commonly accepted error factor of 2 on the AP8 flux output over-simplifies and thus underestimates variations of the proton belt. Output fluxes from this new model along the GPS orbit are further scaled by the ns41 in-situ data so as to reflect the dynamic nature of protons in the outer radiation belt at geomagnetically active times. Derived daily proton fluxes along the GPS ns41 orbit, whose data files are delivered along with this report, are depicted to illustrate the trapped proton environment in Medium-Earth Orbit. Uncertainties on those daily proton fluxes from two sources are evaluated: one from the new proton-belt model, which has error factors < ~3; the other from the in-situ measurements, for which the error factors could be ~5.

  12. Understanding latent structures of clinical information logistics: A bottom-up approach for model building and validating the workflow composite score.

    PubMed

    Esdar, Moritz; Hübner, Ursula; Liebe, Jan-David; Hüsers, Jens; Thye, Johannes

    2017-01-01

    Clinical information logistics is a construct that aims to describe and explain various phenomena of information provision to drive clinical processes. It can be measured by the workflow composite score, an aggregated indicator of the degree of IT support in clinical processes. This study primarily aimed to investigate the yet unknown empirical patterns constituting this construct. The second goal was to derive a data-driven weighting scheme for the constituents of the workflow composite score and to contrast this scheme with a literature based, top-down procedure. This approach should finally test the validity and robustness of the workflow composite score. Based on secondary data from 183 German hospitals, a tiered factor analytic approach (confirmatory and subsequent exploratory factor analysis) was pursued. A weighting scheme, which was based on factor loadings obtained in the analyses, was put into practice. We were able to identify five statistically significant factors of clinical information logistics that accounted for 63% of the overall variance. These factors were "flow of data and information", "mobility", "clinical decision support and patient safety", "electronic patient record" and "integration and distribution". The system of weights derived from the factor loadings resulted in values for the workflow composite score that differed only slightly from the score values that had been previously published based on a top-down approach. Our findings give insight into the internal composition of clinical information logistics both in terms of factors and weights. They also allowed us to propose a coherent model of clinical information logistics from a technical perspective that joins empirical findings with theoretical knowledge. Despite the new scheme of weights applied to the calculation of the workflow composite score, the score behaved robustly, which is yet another hint of its validity and therefore its usefulness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
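
    A sketch of turning factor loadings into a weighting scheme for the composite score; the factor names echo the five factors reported above, but the loadings and item scores are invented placeholders, not the study's estimates:

    ```python
    import numpy as np

    loadings = {
        "flow_of_data_and_information": 0.82,
        "mobility": 0.74,
        "decision_support_and_patient_safety": 0.78,
        "electronic_patient_record": 0.69,
        "integration_and_distribution": 0.71,
    }
    total = sum(loadings.values())
    weights = {k: v / total for k, v in loadings.items()}   # normalized weights

    rng = np.random.default_rng(11)
    item_scores = {k: rng.uniform(0.0, 1.0) for k in loadings}  # one hospital
    composite = sum(weights[k] * item_scores[k] for k in loadings)
    ```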

  13. Entropy production in photovoltaic-thermoelectric nanodevices from the non-equilibrium Green’s function formalism

    NASA Astrophysics Data System (ADS)

    Michelini, Fabienne; Crépieux, Adeline; Beltako, Katawoura

    2017-05-01

    We discuss some thermodynamic aspects of energy conversion in electronic nanosystems able to convert light energy into electrical or/and thermal energy using the non-equilibrium Green’s function formalism. In a first part, we derive the photon energy and particle currents inside a nanosystem interacting with light and in contact with two electron reservoirs at different temperatures. Energy conservation is verified, and radiation laws are discussed from electron non-equilibrium Green’s functions. We further use the photon currents to formulate the rate of entropy production for steady-state nanosystems, and we recast this rate in terms of efficiency for specific photovoltaic-thermoelectric nanodevices. In a second part, a quantum dot based nanojunction is closely examined using a two-level model. We show analytically that the rate of entropy production is always positive, but we find numerically that it can reach negative values when the derived particle and energy currents are empirically modified, as is usually done for modeling realistic photovoltaic systems.

  14. Entropy production in photovoltaic-thermoelectric nanodevices from the non-equilibrium Green's function formalism.

    PubMed

    Michelini, Fabienne; Crépieux, Adeline; Beltako, Katawoura

    2017-05-04

    We discuss some thermodynamic aspects of energy conversion in electronic nanosystems able to convert light energy into electrical or/and thermal energy using the non-equilibrium Green's function formalism. In a first part, we derive the photon energy and particle currents inside a nanosystem interacting with light and in contact with two electron reservoirs at different temperatures. Energy conservation is verified, and radiation laws are discussed from electron non-equilibrium Green's functions. We further use the photon currents to formulate the rate of entropy production for steady-state nanosystems, and we recast this rate in terms of efficiency for specific photovoltaic-thermoelectric nanodevices. In a second part, a quantum dot based nanojunction is closely examined using a two-level model. We show analytically that the rate of entropy production is always positive, but we find numerically that it can reach negative values when the derived particle and energy currents are empirically modified, as is usually done for modeling realistic photovoltaic systems.

  15. Geological and geothermal investigations for HCMM-derived data. [hydrothermally altered areas in Yerington, Nevada

    NASA Technical Reports Server (NTRS)

    Lyon, R. J. P.; Prelat, A. E.; Kirk, R. (Principal Investigator)

    1981-01-01

    An attempt was made to match HCMM- and U2HCMR-derived temperature data over two test sites of very local size to similar data collected in the field at nearly the same times. Results indicate that HCMM investigations using resolution cells of 500 m or so are best conducted with areally-extensive sites, rather than point observations. The excellent quality day-VIS imagery is particularly useful for lineament studies, as is the DELTA-T imagery. Attempts to register the ground-observed temperatures (even for 0.5 sq mile targets) were unsuccessful due to excessive pixel-to-pixel noise in the HCMM data. Several computer models were explored and related to thermal parameter value changes with observed data. Unless quite complex models are used, with many parameters which can be observed (perhaps not even measured) only under remote sensing conditions (e.g., roughness, wind shear, etc.), the model outputs do not match the observed data. Empirical relationships may be most readily studied.

  16. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    PubMed

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with residue depth, and to identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4 Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. The use of operant technology to measure behavioral priorities in captive animals.

    PubMed

    Cooper, J J; Mason, G J

    2001-08-01

    Addressing the behavioral priorities of captive animals and developing practical, objective measures of the value of environmental resources are principal objectives of animal welfare science. In theory, consumer demand approaches derived from human microeconomics should provide valid measures of the value of environmental resources. In practice, however, a number of empirical and theoretical problems have rendered these measures difficult to interpret in studies with animals. A common approach has been to impose a cost on access to resources and to use time with each resource as a measure of consumption to construct demand curves. This can be recorded easily by automatic means, but in a number of studies it has been found that animals compensate for an increased cost of access with longer visit times. Furthermore, direct observation of the test animals' behavior has shown that resource interaction is more intense once the animals have overcome higher costs. As a consequence, measures based on time with the resource may underestimate resource consumption at higher access costs, and demand curves derived from these measures may not be a true reflection of the value of different resources. An alternative approach to demand curves is the reservation price, the maximum price individual animals are prepared to pay to gain access to resources. In studies using this approach, farmed mink (Mustela vison) paid higher prices for food and swimming water than for resources such as tunnels, water bowls, pet toys, and empty compartments. This indicates that the mink placed a higher value on food and swimming water than on other resources.

  18. A Proposed Integration Environment for Enhanced User Interaction and Value-Adding of Electronic Documents: An Empirical Evaluation.

    ERIC Educational Resources Information Center

    Liew, Chern Li; Chennupati, K. R.; Foo, Schubert

    2001-01-01

    Explores the potential and impact of an innovative information environment in enhancing user activities in using electronic documents for various tasks, and in supporting the value-adding of these e-documents. Discusses the conceptual design and prototyping of a proposed environment, PROPIE. Presents an empirical and formative evaluation of the…

  19. The Role of Social Science in Action-Guiding Philosophy: The Case of Educational Equity

    ERIC Educational Resources Information Center

    Bischoff, Kendra; Shores, Kenneth

    2014-01-01

    Education policy decisions are both normatively and empirically challenging. These decisions require the consideration of both relevant values and empirical facts. Values tell us what we have reason to care about, and facts can be used to describe what is possible. Following Hamlin and Stemplowska, we distinguish between a theory of ideals and…

  20. The Impact of Student Composition on Schools' Value-Added Performance: A Comparison of Seven Empirical Studies

    ERIC Educational Resources Information Center

    Timmermans, Anneke C.; Thomas, Sally M.

    2015-01-01

    In many countries, policy makers struggle with the development of value-added indicators of school performance for educational accountability purposes and in particular with the choice whether school context measured in the form of student composition variables should be included. This study investigates differences between 7 empirical studies…

  1. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    NASA Astrophysics Data System (ADS)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from less than 1% to 15%. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
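
    The inverse-modeling step, fitting parameters of an assumed soil-saturation pdf to an empirical record, can be sketched with a simple random-walk Metropolis sampler. In the sketch below a Beta distribution stands in for the analytical soil water balance pdf and the "observations" are synthetic, so the fitted quantities are illustrative shape parameters rather than the study's ecohydrological parameters.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        obs = rng.beta(2.0, 5.0, size=100)     # synthetic daily soil-saturation record

        def log_post(theta):
            a, b = theta
            if a <= 0 or b <= 0:
                return -np.inf                 # flat prior on positive shape parameters
            return stats.beta.logpdf(obs, a, b).sum()

        # Random-walk Metropolis over the two shape parameters.
        theta = np.array([1.0, 1.0])
        lp = log_post(theta)
        samples = []
        for _ in range(20000):
            prop = theta + rng.normal(scale=0.1, size=2)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)
        samples = np.array(samples[5000:])     # discard burn-in
        print(samples.mean(axis=0), samples.std(axis=0))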

  2. Upgrades to the Mars Initial Reference Ionosphere (MIRI) Model Due to Observations from MAVEN, MEX and MRO.

    NASA Astrophysics Data System (ADS)

    Narvaez, C.; Mendillo, M.; Trovato, J.

    2017-12-01

    A semi-empirical model of the maximum electron density (Nmax) of the martian ionosphere [MIRI-mark-1](1) was derived from an initial set of radar observations by the MEX/MARSIS instrument. To extend the model to full electron density profiles, normalized shapes of Ne(h) from a theoretical model(2) were calibrated by MIRI's Nmax. Subsequent topside ionosphere observations from MAVEN indicated that topside shapes from MEX/MARSIS(3) offered improved morphology. The MEX topside shapes were then merged with the bottomside shapes from the theoretical model. Using a larger set of MEX/MARSIS observations (07/31/2005 - 05/24/2015), a new specification of Nmax as a function of solar zenith angle and solar flux is now used to calibrate the normalized Ne(h) profiles. The MIRI-mark-2 model includes the integral with height of Ne(h) to form total electron content (TEC) values. Validation of the MIRI TEC was accomplished using an independent set of TEC values derived from the SHARAD(4) experiment on MRO. (1) M. Mendillo, A. Marusiak, P. Withers, D. Morgan and D. Gurnett, A New Semi-empirical Model of the Peak Electron Density of the Martian Ionosphere, Geophysical Research Letters, 40, 1-5, doi:10.1002/2013GL057631, 2013. (2) Mayyasi, M. and M. Mendillo (2015), Why the Viking descent probes found only one ionospheric layer at Mars, Geophys. Res. Lett., 42, 7359-7365, doi:10.1002/2015GL065575. (3) Němec, F., D. Morgan, D. Gurnett, and D. Andrews (2016), Empirical model of the Martian dayside ionosphere: Effects of crustal magnetic fields and solar ionizing flux at higher altitudes, J. Geophys. Res. Space Physics, 121, 1760-1771, doi:10.1002/2015JA022060. (4) Campbell, B., and T. Watters (2016), Phase compensation of MARSIS subsurface sounding and estimation of ionospheric properties: New insights from SHARAD results, J. Geophys. Res. Planets, 121, 180-193, doi:10.1002/2015JE004917.

  3. An Empirical Spectroscopic Database for Acetylene in the Regions of 5850-9415 CM^{-1}

    NASA Astrophysics Data System (ADS)

    Campargue, Alain; Lyulin, Oleg

    2017-06-01

    Six studies have recently been devoted to a systematic analysis of the high-resolution near-infrared absorption spectrum of acetylene recorded by Cavity Ring Down Spectroscopy (CRDS) in Grenoble and by Fourier-transform spectroscopy (FTS) in Brussels and Hefei. On the basis of these works, in the present contribution, we construct an empirical database for acetylene in the 5850-9415 cm^{-1} region, excluding the 6341-7000 cm^{-1} interval corresponding to the very strong ν1 + ν3 manifold. The database gathers and extends information included in our CRDS and FTS studies. In particular, the intensities of about 1700 lines measured by CRDS in the 7244-7920 cm^{-1} region are reported for the first time, together with those of several bands of ^{12}C^{13}CH_{2} present in natural isotopic abundance in the acetylene sample. The Herman-Wallis coefficients of most of the bands are derived from a fit of the measured intensity values. A recommended line list is provided, with positions calculated using empirical spectroscopic parameters of the lower and upper vibrational energy levels and intensities calculated using the derived Herman-Wallis coefficients. This approach allows completing the experimental list by adding missing lines and improving poorly determined positions and intensities. As a result, the constructed line list includes a total of 10973 lines belonging to 146 bands of ^{12}C_{2}H_{2} and 29 bands of ^{12}C^{13}CH_{2}. For comparison, the HITRAN2012 database in the same region includes 869 lines of 14 bands, all belonging to ^{12}C_{2}H_{2}. Our weakest lines have an intensity on the order of 10^{-29} cm/molecule, about three orders of magnitude smaller than the HITRAN intensity cut-off. Line profile parameters are added to the line list, which is provided in HITRAN format. The comparison to the HITRAN2012 line list and to results obtained using the global effective operator approach is discussed in terms of completeness and accuracy.

  4. Using LANDSAT to provide potato production estimates to Columbia Basin farmers and processors

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The estimation of potato yields in the Columbia Basin is described. The fundamental objective is to provide CROPIX with working models of potato production. A two-pronged approach to yield estimation was used: (1) simulation models, and (2) purely empirical models. The simulation modeling approach used satellite observations to determine certain key dates in the development of the crop for each field identified as potatoes. In particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Purely empirical models were developed to relate yield to some spectrally derived measure of crop development. Two empirical approaches are presented: one relates tuber yield to estimates of cumulative intercepted solar radiation, the other relates tuber yield to the integral under the GVI (Global Vegetation Index) curve.

  5. Fire risk in San Diego County, California: A weighted Bayesian model approach

    USGS Publications Warehouse

    Kolden, Crystal A.; Weigel, Timothy J.

    2007-01-01

    Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.

  6. A protocol for the creation of useful geometric shape metrics illustrated with a newly derived geometric measure of leaf circularity.

    PubMed

    Krieger, Jonathan D

    2014-08-01

    I present a protocol for creating geometric leaf shape metrics to facilitate widespread application of geometric morphometric methods to leaf shape measurement. • To quantify circularity, I created a novel shape metric in the form of the vector between a circle and a line, termed geometric circularity. Using leaves from 17 fern taxa, I performed a coordinate-point eigenshape analysis to empirically identify patterns of shape covariation. I then compared the geometric circularity metric to the empirically derived shape space and the standard metric, circularity shape factor. • The geometric circularity metric was consistent with empirical patterns of shape covariation and appeared more biologically meaningful than the standard approach, the circularity shape factor. The protocol described here has the potential to make geometric morphometrics more accessible to plant biologists by generalizing the approach to developing synthetic shape metrics based on classic, qualitative shape descriptors.

  7. Evapotranspiration Calculations for an Alpine Marsh Meadow Site in Three-river Headwater Region

    NASA Astrophysics Data System (ADS)

    Zhou, B.; Xiao, H.

    2016-12-01

    Daily radiation and meteorological data were collected at an alpine marsh meadow site in the Three-river Headwater Region(THR). Use them to assess radiation models determined after comparing the performance between Zuo model and the model recommend by FAO56P-M.Four methods, FAO56P-M, Priestley-Taylor, Hargreaves, and Makkink methods were applied to determine daily reference evapotranspiration( ETr) for the growing season and built the empirical models for estimating daily actual evapotranspiration ETa between ETr derived from the four methods and evapotranspiration derived from Bowen Ratio method on alpine marsh meadow in this region. After comparing the performance of four empirical models by RMSE, MAE and AI, it showed these models all can get the better estimated daily ETaon alpine marsh meadow in this region, and the best performance of the FAO56 P-M, Makkink empirical model were better than Priestley-Taylor and Hargreaves model.

  8. Mapping wildfire susceptibility in Southern California using live and dead fractions of vegetation derived from Multiple Endmember Spectral Mixture Analysis of MODIS imagery

    NASA Astrophysics Data System (ADS)

    Schneider, P.; Roberts, D. A.

    2008-12-01

    Wildfire is a significant natural disturbance mechanism in Southern California. Assessing spatial patterns of wildfire susceptibility requires estimates of the live and dead fractions of vegetation. The Fire Potential Index (FPI), which is currently the only operationally computed fire susceptibility index incorporating remote sensing data, estimates such fractions using a relative greenness measure based on time series of vegetation index images. This contribution assesses the potential of Multiple Endmember Spectral Mixture Analysis (MESMA) for deriving such fractions from single MODIS images without the need for a long remote sensing time series, and investigates the applicability of such MESMA-derived fractions for mapping dynamic fire susceptibility in Southern California. Endmembers for MESMA were selected from a library of reference endmembers using Constrained Reference Endmember Selection (CRES), which uses field estimates of fractions to guide the selection process. Fraction images of green vegetation, non-photosynthetic vegetation, soil, and shade were then computed for all available 16-day MODIS composites between 2000 and 2006 using MESMA. Initial results indicate that MESMA of MODIS imagery is capable of providing reliable estimates of live and dead vegetation fractions. Validation against in situ observations in the Santa Ynez Mountains near Santa Barbara, California, shows that the average fraction error for two tested species was around 10%. Further validation of MODIS-derived fractions was performed against fractions from high-resolution hyperspectral data. It was shown that the fractions derived from data of both sensors correlate with R2 values greater than 0.95. MESMA-derived live and dead vegetation fractions were subsequently tested as a substitute for relative greenness in the FPI algorithm. FPI was computed for every day between 2000 and 2006 using the derived fractions. Model performance was then tested by extracting FPI values for historical fire events and random no-fire events in Southern California for the same period and developing a logistic regression model. Preliminary results show that an FPI based on MESMA-derived fractions has the potential to deliver performance similar to the traditional FPI while requiring a greatly reduced data volume and using an approach based on physical rather than empirical relationships.
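
    At the heart of any spectral mixture analysis is a constrained linear unmixing of each pixel spectrum against a set of endmember spectra. A minimal sketch of that step follows, with made-up endmember and pixel reflectances; MESMA additionally iterates over many candidate endmember combinations per pixel, which is omitted here.

        import numpy as np
        from scipy.optimize import nnls

        # Endmember columns: green vegetation, non-photosynthetic veg., soil, shade;
        # rows: reflectance in four hypothetical MODIS bands.
        E = np.array([[0.05, 0.25, 0.20, 0.0],
                      [0.45, 0.30, 0.25, 0.0],
                      [0.30, 0.35, 0.30, 0.0],
                      [0.20, 0.28, 0.33, 0.0]])
        pixel = np.array([0.21, 0.33, 0.30, 0.26])

        # Append a row of ones to softly enforce the sum-to-one constraint,
        # then solve for non-negative fractions.
        A = np.vstack([E, np.ones(E.shape[1])])
        b = np.append(pixel, 1.0)
        fractions, _ = nnls(A, b)
        print(dict(zip(["GV", "NPV", "soil", "shade"], fractions.round(3))))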

  9. Protein structure refinement using a quantum mechanics-based chemical shielding predictor

    PubMed Central

    2017-01-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1–0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å differences for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural change may be due to force field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this can help increase the accuracy of related approaches such as QM/MM or linear scaling approaches, or aid in interpreting protein structural dynamics from QM-derived chemical shifts. PMID:28451325

  10. Valuing Informal Arguments and Empirical Investigations during Collective Argumentation

    ERIC Educational Resources Information Center

    Yopp, David A.

    2012-01-01

    Considerable literature has documented both the pros and cons of students' use of empirical evidence during proving activities. This article presents an analysis of a classroom episode involving in-service middle school, high school, and college teachers that demonstrates that learners need not be steered away from empirical investigations during…

  11. An experimental investigation of wall-interference effects for parachutes in closed wind tunnels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macha, J.M.; Buffington, R.J.

    1989-09-01

    A set of 6-ft-diameter ribbon parachutes (geometric porosities of 7%, 15%, and 30%) was tested in various subsonic wind tunnels covering a range of geometric blockages from 2% to 35%. Drag, base pressure, and inflated geometry were measured under full-open, steady-flow conditions. The resulting drag areas and pressure coefficients were correlated with the bluff-body blockage parameter (i.e., drag area divided by tunnel cross-sectional area) according to the blockage theory of Maskell. The data show that the Maskell theory provides a simple, accurate correction for the effective increase in dynamic pressure caused by wall constraint for both single parachutes and clusters. For single parachutes, the empirically derived blockage factor K_M has the value of 1.85, independent of canopy porosity. Derived values of K_M for two- and three-parachute clusters are 1.35 and 1.59, respectively. Based on the photometric data, there was no deformation of the inflated shape of the single parachutes up to a geometric blockage of 22%. In the case of the three-parachute cluster, decreases in both the inflated diameter and the spacing among member parachutes were observed at a geometric blockage of 35%. 11 refs., 9 figs., 3 tabs.
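
    A Maskell-type blockage correction adjusts the measured drag for the effective increase in dynamic pressure caused by wall constraint. The sketch below assumes the common form q_eff/q_inf = 1 + K_M (CDS/C), so the unconstrained drag area follows by division; the numbers are illustrative, with K_M = 1.85 taken from the single-parachute result above.

        def maskell_corrected_drag_area(cds_measured, tunnel_area, k_m=1.85):
            """Correct a measured drag area for wall constraint.

            Assumes the Maskell-type form q_eff/q_inf = 1 + k_m * (CDS / C),
            with CDS the drag area and C the tunnel cross-sectional area.
            """
            return cds_measured / (1.0 + k_m * cds_measured / tunnel_area)

        # Illustrative: ~20 ft^2 measured drag area in a 100 ft^2 test section.
        print(maskell_corrected_drag_area(20.0, 100.0))   # about 14.6 ft^2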

  12. QCD topological susceptibility from the nonlocal chiral quark model

    NASA Astrophysics Data System (ADS)

    Nam, Seung-Il; Kao, Chung-Wen

    2017-06-01

    We investigate the quantum chromodynamics (QCD) topological susceptibility χ by using the semi-bosonized nonlocal chiral-quark model (SB-NLχQM) for the leading large-Nc contributions. This model is based on the liquid-instanton QCD-vacuum configuration, in which SU(3) flavor symmetry is explicitly broken by the finite current-quark masses (m_{u,d}, m_s) ≈ (5, 135) MeV. To compute χ, we derive the local topological charge-density operator Q_t(x) from the effective action of SB-NLχQM. We verify that the derived expression for χ in our model satisfies the Witten-Veneziano (WV) and the Leutwyler-Smilga (LS) formulae, and the Crewther theorem in the chiral limit by construction. Once the average instanton size and the inter-instanton distance are fixed at ρ̄ = 1/3 fm and R̄ = 1 fm, respectively, all the other parameters are determined self-consistently within the model. We obtain χ = (167.67 MeV)^4, which is comparable with the empirical value χ = (175±5 MeV)^4, whereas χ_QL = (194.30 MeV)^4 in the quenched limit. Thus, we conclude that the value of χ is reduced by around 10-20% by the dynamical-quark contribution.

  13. The Lyman-Continuum Fluxes and Stellar Parameters of O and Early B-Type Stars

    NASA Technical Reports Server (NTRS)

    Vacca, William D.; Garmany, Catherine D.; Shull, J. Michael

    1996-01-01

    Using the results of the most recent stellar atmosphere models applied to a sample of hot stars, we construct calibrations of effective temperature (T_eff) and gravity (log g) with spectral type and luminosity class for Galactic O-type and early B-type stars. From the model results we also derive an empirical relation between the bolometric correction and T_eff and log g. Using a sample of stars with known distances located in OB associations in the Galaxy and the Large Magellanic Cloud, we derive a new calibration of M_V with spectral class. With these new calibrations and the stellar atmosphere models of Kurucz, we calculate the physical parameters and ionizing photon luminosities in the H^0 and He^0 continua for O and early B-type stars. We find substantial differences between our values of the Lyman-continuum luminosity and those reported in the literature. We also discuss the systematic discrepancy between O-type stellar masses derived from spectroscopic models and those derived from evolutionary tracks. Most likely, the cause of this 'mass discrepancy' lies primarily in the atmospheric models, which are plane-parallel and hydrostatic and therefore do not account for an extended atmosphere and the velocity fields in a stellar wind. Finally, we present a new computation of the Lyman-continuum luminosity from 429 known O stars located within 2.5 kpc of the Sun. We find the total ionizing luminosity from this population (Q_0^tot = 7.0 x 10^51 photons/s) to be 47% larger than that determined using the Lyman-continuum values tabulated by Panagia.

  14. Lyman-α Models for LRO LAMP from MESSENGER MASCS and SOHO SWAN Data

    NASA Astrophysics Data System (ADS)

    Pryor, Wayne R.; Holsclaw, Gregory M.; McClintock, William E.; Snow, Martin; Vervack, Ronald J.; Gladstone, G. Randall; Stern, S. Alan; Retherford, Kurt D.; Miles, Paul F.

    From models of the interplanetary Lyman-α glow derived from Mercury Atmospheric and Surface Composition Spectrometer (MASCS) interplanetary Lyman-α data obtained in 2009-2011 on the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft mission, daily all-sky Lyman-α maps were generated for use by the Lunar Reconnaissance Orbiter (LRO) Lyman Alpha Mapping Project (LAMP) experiment. These models were then compared with Solar and Heliospheric Observatory (SOHO) Solar Wind ANisotropies (SWAN) Lyman-α maps when available. Although the empirical agreement across the sky between the scaled model and the SWAN maps is adequate for LAMP mapping purposes, the model brightness values best agree with the SWAN values in 2008 and 2009. SWAN's observations show a systematic decline in 2010 and 2011 relative to the model. It is not clear whether the decline represents a failure of the model or a decline in the sensitivity of SWAN in 2010 and 2011. MESSENGER MASCS and SOHO SWAN Lyman-α calibrations systematically differ in comparison with the model, with MASCS reporting Lyman-α values some 30% lower than SWAN.

  15. The validation of a human force model to predict dynamic forces resulting from multi-joint motions

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.

    1992-01-01

    The development and validation of a dynamic strength model for humans is examined. This model is based on empirical data. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque as a function of position and velocity in all rotational planes. These data were reduced by a least-squares regression technique into a table of single-variable second-degree polynomial equations determining torque as a function of position and velocity. The isolated joint torque equations were then used to compute forces resulting from a composite motion, in this case a ratchet wrench push-and-pull operation. A comparison of the predicted results of the model with the actual measured values for the composite motion indicates that forces derived from a composite motion of joints (ratcheting) can be predicted from isolated joint measures. Calculated T values comparing model and measured values for 14 subjects were well within statistically acceptable limits, and regression analysis revealed coefficients of variation between actual and measured values of between 0.72 and 0.80.
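
    The data-reduction step, a least-squares fit of single-variable second-degree polynomials for torque, can be sketched as follows; the joint-torque samples below are synthetic and stand in for the measured isolated-joint data.

        import numpy as np

        # Hypothetical isolated-joint data: torque (Nm) at several angular
        # velocities (deg/s), at one fixed joint position.
        velocity = np.array([-120.0, -60.0, 0.0, 60.0, 120.0])
        torque = np.array([72.0, 65.0, 55.0, 44.0, 35.0])

        # Least-squares fit of a single-variable second-degree polynomial.
        coeffs = np.polyfit(velocity, torque, deg=2)
        predict = np.poly1d(coeffs)

        print(coeffs)          # quadratic, linear, constant coefficients
        print(predict(90.0))   # interpolated torque at 90 deg/s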

  16. Empirically-Derived, Person-Oriented Patterns of School Readiness in Typically-Developing Children: Description and Prediction to First-Grade Achievement

    ERIC Educational Resources Information Center

    Konold, Timothy R.; Pianta, Robert C.

    2005-01-01

    School readiness assessment is a prominent feature of early childhood education. Because the construct of readiness is multifaceted, we examined children's patterns on multiple indicators previously found to be both theoretically and empirically linked to school readiness: social skill, interactions with parents, problem behavior, and performance…

  17. GPP in Loblolly Pine: A Monthly Comparison of Empirical and Process Models

    Treesearch

    Christopher Gough; John Seiler; Kurt Johnsen; David Arthur Sampson

    2002-01-01

    Monthly and yearly gross primary productivity (GPP) estimates derived from an empirical model and two process-based models (3PG and BIOMASS) were compared. Spatial and temporal variation in foliar photosynthesis was examined and used to develop GPP prediction models for fertilized nine-year-old loblolly pine (Pinus taeda) stands located in the North...

  18. Community Participation of People with an Intellectual Disability: A Review of Empirical Findings

    ERIC Educational Resources Information Center

    Verdonschot, M. M. L.; de Witte, L. P.; Reichrath, E.; Buntinx, W. H. E.; Curfs, L. M. G.

    2009-01-01

    Study design: A systematic review of the literature. Objectives: To investigate community participation of persons with an intellectual disability (ID) as reported in empirical research studies. Method: A systematic literature search was conducted for the period of 1996-2006 on PubMed, CINAHL and PSYCINFO. Search terms were derived from the…

  19. Pedagogising the University: On Higher Education Policy Implementation and Its Effects on Social Relations

    ERIC Educational Resources Information Center

    Stavrou, Sophia

    2016-01-01

    This paper aims at providing a theoretical and empirical discussion on the concept of pedagogisation which derives from the hypothesis of a new era of "totally pedagogised society" in Basil Bernstein's work. The article is based on empirical research on higher education policy, with a focus on the implementation of curriculum change…

  20. An empirical InSAR-optical fusion approach to mapping vegetation canopy height

    Treesearch

    Wayne S. Walker; Josef M. Kellndorfer; Elizabeth LaPoint; Michael Hoppus; James Westfall

    2007-01-01

    Exploiting synergies afforded by a host of recently available national-scale data sets derived from interferometric synthetic aperture radar (InSAR) and passive optical remote sensing, this paper describes the development of a novel empirical approach for the provision of regional- to continental-scale estimates of vegetation canopy height. Supported by data from the...

  1. Profiles of equilibrium constants for self-association of aromatic molecules

    NASA Astrophysics Data System (ADS)

    Beshnova, Daria A.; Lantushenko, Anastasia O.; Davies, David B.; Evstigneev, Maxim P.

    2009-04-01

    Analysis of the noncovalent, noncooperative self-association of identical aromatic molecules assumes that the equilibrium self-association constants are either independent of the number of molecules (the EK-model) or change progressively with increasing aggregation (the AK-model). The dependence of the self-association constant on the number of molecules in the aggregate (i.e., the profile of the equilibrium constant) was empirically derived in the AK-model but, in order to provide some physical understanding of the profile, it is proposed that the sources of attenuation of the equilibrium constant are the loss of translational and rotational degrees of freedom, the ordering of molecules in the aggregates and the electrostatic contribution (for charged units). Expressions are derived for the profiles of the equilibrium constants for both neutral and charged molecules. Although the EK-model has been widely used in the analysis of experimental data, it is shown in this work that the derived equilibrium constant, K_EK, depends on the concentration range used and hence on the experimental method employed. The relationship between the equilibrium constant K_EK and the real dimerization constant, K_D, has also been demonstrated; it shows that the value of K_EK is always lower than K_D.
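
    For reference, the EK model (a constant stepwise constant K, often called isodesmic) admits a closed-form relation between the total and free monomer concentrations; a standard result, with m the free monomer concentration and c_n = K^{n-1} m^n the concentration of n-mers:

        c_t = \sum_{n=1}^{\infty} n\,c_n
            = m \sum_{n=1}^{\infty} n\,(Km)^{n-1}
            = \frac{m}{(1-Km)^{2}}, \qquad Km < 1.

    Fitting this expression over different concentration windows generally returns different best-fit K values, which is consistent with the dependence of K_EK on the concentration range noted above.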

  2. Unraveling spurious properties of interaction networks with tailored random networks.

    PubMed

    Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus

    2011-01-01

    We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdős-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures, known for their complex spatial and temporal dynamics, we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis.
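
    The construction examined here, estimating pairwise interdependence from finite time series and thresholding it into an unweighted graph, can be reproduced in a few lines; in this sketch independent Gaussian noise stands in for the recordings and the threshold value is arbitrary.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        series = rng.normal(size=(30, 500))   # 30 channels of independent noise

        corr = np.abs(np.corrcoef(series))    # |correlation| as interdependence
        np.fill_diagonal(corr, 0.0)

        # Threshold into an unweighted graph and compute the two network measures.
        G = nx.from_numpy_array((corr > 0.12).astype(int))
        print(nx.average_clustering(G))
        if nx.is_connected(G):
            print(nx.average_shortest_path_length(G))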

  3. Unraveling Spurious Properties of Interaction Networks with Tailored Random Networks

    PubMed Central

    Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus

    2011-01-01

    We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdős-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures – known for their complex spatial and temporal dynamics – we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis. PMID:21850239

  4. Comparison of empirical estimate of clinical pretest probability with the Wells score for diagnosis of deep vein thrombosis.

    PubMed

    Wang, Bo; Lin, Yin; Pan, Fu-shun; Yao, Chen; Zheng, Zi-Yu; Cai, Dan; Xu, Xiang-dong

    2013-01-01

    The Wells score has been validated for estimation of pretest probability in patients with suspected deep vein thrombosis (DVT). In clinical practice, many clinicians prefer to use empirical estimation rather than the Wells score. However, which method better increases the accuracy of clinical evaluation is not well understood. Our present study compared empirical estimation of pretest probability with the Wells score to investigate the efficiency of empirical estimation in the diagnostic process of DVT. Five hundred and fifty-five patients were enrolled in this study. One hundred and fifty patients were assigned to examine the interobserver agreement on the Wells score between emergency and vascular clinicians. The other 405 patients were assigned to evaluate the pretest probability of DVT on the basis of empirical estimation and the Wells score, respectively, and plasma D-dimer levels were then determined in the low-risk patients. All patients underwent venous duplex scans and had a 45-day follow-up. The weighted Cohen's κ value for interobserver agreement on the Wells score between emergency and vascular clinicians was 0.836. Compared with Wells score evaluation, empirical assessment increased the sensitivity, specificity, Youden's index, positive likelihood ratio, and positive and negative predictive values, but decreased the negative likelihood ratio. In addition, the appropriate D-dimer cutoff value based on the Wells score was 175 μg/l, and 108 patients were excluded. Empirical assessment increased the appropriate D-dimer cutoff point to 225 μg/l, and 162 patients were ruled out. Our findings indicated that empirical estimation not only improves D-dimer assay efficiency for exclusion of DVT but also increases clinical judgement accuracy in the diagnosis of DVT.

  5. A one-layer satellite surface energy balance for estimating evapotranspiration rates and crop water stress indexes.

    PubMed

    Barbagallo, Salvatore; Consoli, Simona; Russo, Alfonso

    2009-01-01

    Daily evapotranspiration fluxes over the semi-arid Catania Plain area (Eastern Sicily, Italy) were evaluated using remotely sensed data from Landsat Thematic Mapper TM5 images. A one-source parameterization of the surface sensible heat flux exchange using satellite surface temperature has been used. The transfer of sensible and latent heat is described by aerodynamic resistance and surface resistance. Required model inputs are brightness temperature, fractional vegetation cover or leaf area index, albedo, crop height, roughness lengths, net radiation, air temperature, air humidity and wind speed. The aerodynamic resistance (r(ah)) is formulated on the basis of the Monin-Obukhov surface layer similarity theory, and the surface resistance (r(s)) is evaluated from the energy balance equation. The instantaneous surface flux values were converted into an evaporative fraction (EF) over the heterogeneous land surface to derive daily evapotranspiration values. Remote sensing-based assessments of the crop water stress index (CWSI) were also made in order to identify local irrigation requirements. Evapotranspiration data and crop coefficient values obtained from the approach were compared with: (i) data from the semi-empirical approach "K(c) reflectance-based", which integrates satellite data in the visible and NIR regions of the electromagnetic spectrum with ground-based measurements, and (ii) surface energy flux measurements collected from a micrometeorological tower located in the experimental area. The expected variability associated with ET flux measurements suggests that the approach-derived surface fluxes were in acceptable agreement with the observations.

  6. Estimating the effects of 17α-ethinylestradiol on stochastic population growth rate of fathead minnows: a population synthesis of empirically derived vital rates.

    PubMed

    Schwindt, Adam R; Winkelman, Dana L

    2016-09-01

    Urban freshwater streams in arid climates are wastewater effluent dominated ecosystems particularly impacted by bioactive chemicals, including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L^-1 and produced stochastic population growth rates (λ_S) below 1 at the lowest concentration, indicating potential for population decline. Declines in λ_S compared to controls were evident in treatments that were lethal to adult males, despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λ_S was most sensitive to the survival of juveniles and female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
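
    A stochastic stage-structured projection of this general kind can be sketched with a small matrix model whose vital rates are redrawn each year; the three-stage structure and all rate values below are hypothetical placeholders, not the study's estimates, and λ_S is approximated by the mean log annual growth.

        import numpy as np

        rng = np.random.default_rng(7)
        n = np.array([100.0, 20.0, 10.0])         # juveniles, subadults, adults
        log_growth = []

        for _ in range(5000):                     # years of stochastic projection
            f = max(0.0, rng.normal(25.0, 5.0))   # adult fecundity (placeholder)
            s_j = rng.uniform(0.05, 0.15)         # juvenile survival/transition
            s_s = rng.uniform(0.30, 0.50)         # subadult survival/transition
            s_a = rng.uniform(0.40, 0.60)         # adult survival
            A = np.array([[0.0, 0.0, f],
                          [s_j, 0.0, 0.0],
                          [0.0, s_s, s_a]])
            n_next = A @ n
            log_growth.append(np.log(n_next.sum() / n.sum()))
            n = 100.0 * n_next / n_next.sum()     # renormalize to avoid overflow

        print(np.exp(np.mean(log_growth)))        # lambda_S; values < 1 imply decline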

  7. Do Farm Advisory Services Improve Adoption of Rural Development Policies? An Empirical Analysis in GI Areas

    ERIC Educational Resources Information Center

    De Rosa, Marcello; Bartoli, Luca

    2017-01-01

    Purpose: The aim of the paper is to evaluate how advisory services stimulate the adoption of rural development policies (RDP) aiming at value creation. Design/methodology/approach: By linking the use of agricultural extension services (AES) to policies for value creation, we will put forward an empirical analysis in Italy, with the aim of…

  8. An Evaluation of Empirical Bayes's Estimation of Value-Added Teacher Performance Measures

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul N.; Wooldridge, Jeffrey M.

    2015-01-01

    Empirical Bayes's (EB) estimation has become a popular procedure used to calculate teacher value added, often as a way to make imprecise estimates more reliable. In this article, we review the theory of EB estimation and use simulated and real student achievement data to study the ability of EB estimators to properly rank teachers. We compare the…
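
    The shrinkage at issue can be stated compactly: each teacher's raw value-added estimate is pulled toward the grand mean in proportion to its estimated reliability. A minimal simulation sketch, with variance components assumed known rather than estimated from the data as they would be in practice:

        import numpy as np

        rng = np.random.default_rng(3)
        n_teachers = 200
        true_var = 0.04                                   # variance of true effects
        noise_var = rng.uniform(0.01, 0.25, n_teachers)   # sampling variance, per teacher

        true_effect = rng.normal(0.0, np.sqrt(true_var), n_teachers)
        raw = true_effect + rng.normal(0.0, np.sqrt(noise_var))

        # EB shrinkage: pull each raw estimate toward the grand mean by reliability.
        reliability = true_var / (true_var + noise_var)
        eb = raw.mean() + reliability * (raw - raw.mean())

        # Shrinkage reduces mean squared error relative to the raw estimates.
        print(np.mean((raw - true_effect) ** 2), np.mean((eb - true_effect) ** 2))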

  9. Exchangeability, extreme returns and Value-at-Risk forecasts

    NASA Astrophysics Data System (ADS)

    Huang, Chun-Kai; North, Delia; Zewotir, Temesgen

    2017-07-01

    In this paper, we propose a new approach to extreme value modelling for the forecasting of Value-at-Risk (VaR). In particular, the block maxima and the peaks-over-threshold methods are generalised to exchangeable random sequences. This caters for the dependencies, such as serial autocorrelation, of financial returns observed empirically. In addition, this approach allows for parameter variations within each VaR estimation window. Empirical prior distributions of the extreme value parameters are attained by using resampling procedures. We compare the results of our VaR forecasts to that of the unconditional extreme value theory (EVT) approach and the conditional GARCH-EVT model for robust conclusions.
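
    The block-maxima ingredient of such a VaR forecast can be sketched with a generalized extreme value (GEV) fit to maxima of negated returns; the returns below are simulated Student-t draws and the block and quantile choices are arbitrary, so this omits the exchangeability and resampling machinery proposed above.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(11)
        returns = 0.01 * rng.standard_t(df=4, size=2520)   # ~10 years of daily returns

        losses = -returns
        block_maxima = losses.reshape(-1, 21).max(axis=1)  # 21-day block maxima

        # Fit the GEV distribution to the block maxima and read off a high quantile.
        shape, loc, scale = genextreme.fit(block_maxima)
        var_99 = genextreme.ppf(0.99, shape, loc=loc, scale=scale)
        print(var_99)   # 99% VaR of the block-maximum loss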

  10. Modeling of Inverted Annular Film Boiling using an integral method

    NASA Astrophysics Data System (ADS)

    Sridharan, Arunkumar

    In modeling Inverted Annular Film Boiling (IAFB), several important phenomena, such as the interaction between the liquid and vapor phases and the irregular nature of the interface, greatly influence the momentum and heat transfer at the interface and need to be accounted for. However, due to the complexity of these phenomena, they were not modeled in previous studies. Since two-phase heat transfer equations and relationships rely heavily on experimental data, many closure relationships that were used in previous studies to solve the problem are empirical in nature. Also, in deriving the relationships, the experimental data were often extrapolated beyond the intended range of conditions, causing errors in predictions. In some cases, empirical correlations that were derived from situations other than IAFB, and whose applicability to IAFB was questionable, were used. Moreover, arbitrary constants were introduced in the models developed in previous studies to provide a good fit to the experimental data. These constants have no physical basis, leading to questionable accuracy in the model predictions. In the present work, Inverted Annular Film Boiling (IAFB) is modeled using an integral method. A two-dimensional formulation of IAFB is presented. Separate equations for the conservation of mass, momentum and energy are derived from first principles for the vapor film and the liquid core. Turbulence is incorporated in the formulation. The system of second-order partial differential equations is integrated over the radial direction to obtain a system of integral differential equations. In order to solve the system of equations, second-order polynomial profiles are used to describe the nondimensional velocity and temperature. The unknown coefficients in the profiles are functions of the axial direction alone. Using the boundary conditions that govern the physical problem, equations for the unknown coefficients are derived in terms of the primary dependent variables: wall shear stress, interfacial shear stress, film thickness, pressure, wall temperature and the mass transfer rate due to evaporation. A system of non-linear, first-order, coupled ordinary differential equations is obtained. Due to the inherent mathematical complexity of the system of equations, simplifying assumptions are made to obtain a numerical solution. The system of equations is solved numerically to obtain values of the unknown quantities at each subsequent axial location. Derived quantities such as void fraction and heat transfer coefficient are calculated at each axial location. The calculation is terminated when the void fraction reaches a value of 0.6, the upper limit of IAFB. The results obtained agree with the observed experimental trends.

  11. Empirical relations between large wood transport and catchment characteristics

    NASA Astrophysics Data System (ADS)

    Steeb, Nicolas; Rickenmann, Dieter; Rickli, Christian; Badoux, Alexandre

    2017-04-01

    The transport of vast amounts of large wood (LW) in water courses can considerably aggravate hazardous situations during flood events, and often strongly affects the resulting flood damage. Large wood recruitment and transport are controlled by various factors which are difficult to assess, and the prediction of transported LW volumes is difficult. Such information is, however, important for engineers and river managers to adequately dimension retention structures or to identify critical stream cross-sections. In this context, empirical formulas have been developed to estimate the volume of transported LW during a flood event (Rickenmann, 1997; Steeb et al., 2017). The database of existing empirical wood-load equations is nevertheless limited. The objective of the present study is to test and refine existing empirical equations, and to derive new relationships that reveal trends in wood loading. Data have been collected for flood events with LW occurrence in Swiss catchments of various sizes. This extended data set allows us to derive statistically more significant results. LW volumes were found to be related to catchment and transport characteristics, such as catchment size, forested area, forested stream length, water discharge, sediment load, or Melton ratio. Both the potential wood load and the fraction that is effectively mobilized during a flood event (effective wood load) are estimated. The difference between potential and effective wood load allows us to derive typical reduction coefficients that can be used to refine spatially explicit GIS models of potential LW recruitment.

  12. Strong Asymmetric Limit of the Quasi-Potential of the Boundary Driven Weakly Asymmetric Exclusion Process

    NASA Astrophysics Data System (ADS)

    Bertini, Lorenzo; Gabrielli, Davide; Landim, Claudio

    2009-07-01

    We consider the weakly asymmetric exclusion process on a bounded interval with particle reservoirs at the endpoints. The hydrodynamic limit for the empirical density, obtained in the diffusive scaling, is given by the viscous Burgers equation with Dirichlet boundary conditions. In the case in which the bulk asymmetry is in the same direction as the drift due to the boundary reservoirs, we prove that the quasi-potential can be expressed in terms of the solution to a one-dimensional boundary value problem which was introduced by Enaud and Derrida [16]. We consider the strong asymmetric limit of the quasi-potential and recover the functional derived by Derrida, Lebowitz, and Speer [15] for the asymmetric exclusion process.

  13. A new method to determine the interstellar reddening towards WN stars

    NASA Technical Reports Server (NTRS)

    Conti, Peter S.; Morris, Patrick W.

    1990-01-01

    An empirical approach to determine the reddening in WN stars is presented, in which the measured strengths of the emission lines of He II at 1640 and 4686 A are used to estimate the extinction. The He II emission lines at these wavelengths are compared for a number of WN stars in the Galaxy and the LMC. It is shown that the equivalent width ratios are single-valued and independent of spectral subtype. The reddening for stars in the Galaxy is derived using a Galactic extinction law and the observed line flux ratios, showing good agreement with previous determinations of the reddening. The possible application of the method to study the absorption properties of the interstellar medium in more distant galaxies is discussed.

  14. Effect of quantity and composition of waste on the prediction of annual methane potential from landfills.

    PubMed

    Cho, Han Sang; Moon, Hee Sun; Kim, Jae Young

    2012-04-01

    A study was conducted to investigate the effect of waste composition change on methane production in landfills. An empirical equation for the methane potential of the mixed waste is derived based on the methane potential values of individual waste components and the compositional ratio of the waste components. A correction factor was introduced in the equation and was determined from the BMP and lysimeter tests. The equation and LandGEM were applied to a full-size landfill and the annual methane potential was estimated. Results showed that changes in the quantity of waste affected the annual methane potential of the landfill more than changes in waste composition. Copyright © 2012 Elsevier Ltd. All rights reserved.
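
    The mixed-waste equation described here amounts to a corrected weighted sum of component methane potentials. A minimal sketch with hypothetical component values and a placeholder correction factor:

        def mixed_waste_methane_potential(fractions, potentials, correction=0.8):
            """Methane potential of mixed waste (e.g., m3 CH4 per Mg of waste).

            fractions  -- mass fraction of each waste component (must sum to 1)
            potentials -- methane potential of each individual component
            correction -- empirical factor from BMP/lysimeter tests (placeholder)
            """
            assert abs(sum(fractions) - 1.0) < 1e-9
            return correction * sum(f * p for f, p in zip(fractions, potentials))

        # Illustrative composition: food waste, paper, wood, inerts.
        print(mixed_waste_methane_potential([0.4, 0.3, 0.2, 0.1],
                                            [120.0, 200.0, 60.0, 0.0]))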

  15. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  16. Debt Illusion among Local Taxpayers: An Empirical Investigation.

    ERIC Educational Resources Information Center

    Landers, James R.; Byrnes, Patricia E.

    This paper reports on a multijurisdictional study of the influence of school district long-term guaranteed debt liabilities on housing values. The empirical setting for the study was the Columbus, Ohio, metropolitan area. The objective of the research was to empirically test the debt-illusion hypothesis by examining the extent to which long-term…

  17. 17 CFR 240.15c3-1f - Optional market and credit risk requirements for OTC derivatives dealers (Appendix F to 17 CFR...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... charges. An OTC derivatives dealer shall provide a description of all statistical models used for pricing... controls over those models, and a statement regarding whether the firm has developed its own internal VAR models. If the OTC derivatives dealer's VAR model incorporates empirical correlations across risk...

  18. Universal equation for estimating ideal body weight and body weight at any BMI

    PubMed Central

    Peterson, Courtney M; Thomas, Diana M; Blackburn, George L; Heymsfield, Steven B

    2016-01-01

    Background: Ideal body weight (IBW) equations and body mass index (BMI) ranges have both been used to delineate healthy or normal weight ranges, although these 2 different approaches are at odds with each other. In particular, past IBW equations are misaligned with BMI values, and unlike BMI, the equations have failed to recognize that there is a range of ideal or target body weights. Objective: For the first time, to our knowledge, we merged the concepts of a linear IBW equation and of defining target body weights in terms of BMI. Design: With the use of calculus and approximations, we derived an easy-to-use linear equation that clinicians can use to calculate both IBW and body weight at any target BMI value. We measured the empirical accuracy of the equation with the use of NHANES data and performed a comparative analysis with past IBW equations. Results: Our linear equation allowed us to calculate body weights for any BMI and height with a mean empirical accuracy of 0.5–0.7% on the basis of NHANES data. Moreover, we showed that our body weight equation directly aligns with BMI values for both men and women, which avoids the overestimation and underestimation problems at the upper and lower ends of the height spectrum that have plagued past IBW equations. Conclusions: Our linear equation increases the sophistication of IBW equations by replacing them with a single universal equation that calculates both IBW and body weight at any target BMI and height. Therefore, our equation is compatible with BMI and can be applied with the use of mental math or a calculator without the need for an app, which makes it a useful tool for both health practitioners and the general public. PMID:27030535
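
    The published linear form of this equation is, to our knowledge, weight (kg) = 2.2·BMI + 3.5·BMI·(height in m − 1.5); the sketch below implements that form, but the coefficients should be verified against the paper before any practical use.

        # Body weight (kg) at a target BMI for a given height (m); linear form
        # attributed to Peterson et al. (2016) -- verify before clinical use.
        def weight_for_bmi(target_bmi, height_m):
            return 2.2 * target_bmi + 3.5 * target_bmi * (height_m - 1.5)

        # IBW at BMI 22 for a 1.80 m adult: ~71.5 kg, close to the exact
        # BMI-derived weight 22 * 1.80**2 = 71.3 kg.
        print(round(weight_for_bmi(22.0, 1.80), 1))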

  19. Universal equation for estimating ideal body weight and body weight at any BMI.

    PubMed

    Peterson, Courtney M; Thomas, Diana M; Blackburn, George L; Heymsfield, Steven B

    2016-05-01

    Ideal body weight (IBW) equations and body mass index (BMI) ranges have both been used to delineate healthy or normal weight ranges, although these 2 different approaches are at odds with each other. In particular, past IBW equations are misaligned with BMI values, and unlike BMI, the equations have failed to recognize that there is a range of ideal or target body weights. For the first time, to our knowledge, we merged the concepts of a linear IBW equation and of defining target body weights in terms of BMI. With the use of calculus and approximations, we derived an easy-to-use linear equation that clinicians can use to calculate both IBW and body weight at any target BMI value. We measured the empirical accuracy of the equation with the use of NHANES data and performed a comparative analysis with past IBW equations. Our linear equation allowed us to calculate body weights for any BMI and height with a mean empirical accuracy of 0.5-0.7% on the basis of NHANES data. Moreover, we showed that our body weight equation directly aligns with BMI values for both men and women, which avoids the overestimation and underestimation problems at the upper and lower ends of the height spectrum that have plagued past IBW equations. Our linear equation increases the sophistication of IBW equations by replacing them with a single universal equation that calculates both IBW and body weight at any target BMI and height. Therefore, our equation is compatible with BMI and can be applied with the use of mental math or a calculator without the need for an app, which makes it a useful tool for both health practitioners and the general public. © 2016 American Society for Nutrition.

  20. Spectrophotometric Determination of Carbonate Ion Concentrations: Elimination of Instrument-Dependent Offsets and Calculation of In Situ Saturation States.

    PubMed

    Sharp, Jonathan D; Byrne, Robert H; Liu, Xuewu; Feely, Richard A; Cuyler, Erin E; Wanninkhof, Rik; Alin, Simone R

    2017-08-15

    This work describes an improved algorithm for spectrophotometric determinations of seawater carbonate ion concentrations ([CO₃²⁻]spec) derived from observations of ultraviolet absorbance spectra in lead-enriched seawater. Quality-control assessments of [CO₃²⁻]spec data obtained on two NOAA research cruises (2012 and 2016) revealed a substantial intercruise difference in average Δ[CO₃²⁻] (the difference between a sample's [CO₃²⁻]spec value and the corresponding [CO₃²⁻] value calculated from paired measurements of pH and dissolved inorganic carbon). Follow-up investigation determined that this discordance was due to the use of two different spectrophotometers, even though both had been properly calibrated. Here we present an essential methodological refinement to correct [CO₃²⁻]spec absorbance data for small but significant instrumental differences. After applying the correction (which, notably, is not necessary for pH determinations from sulfonephthalein dye absorbances) to the shipboard absorbance data, we fit the combined-cruise data set to produce empirically updated parameters for use in processing future (and historical) [CO₃²⁻]spec absorbance measurements. With the new procedure, the average Δ[CO₃²⁻] offset between the two aforementioned cruises was reduced from 3.7 μmol kg⁻¹ to 0.7 μmol kg⁻¹, which is well within the standard deviation of the measurements (1.9 μmol kg⁻¹). We also introduce an empirical model to calculate in situ carbonate ion concentrations from [CO₃²⁻]spec. We demonstrate that these in situ values can be used to determine calcium carbonate saturation states that are in good agreement with those determined by more laborious and expensive conventional methods.

  1. Empirical effective temperatures and bolometric corrections for early-type stars

    NASA Technical Reports Server (NTRS)

    Code, A. D.; Bless, R. C.; Davis, J.; Brown, R. H.

    1976-01-01

    An empirical effective temperature for a star can be found by measuring its apparent angular diameter and absolute flux distribution. The angular diameters of 32 bright stars in the spectral range O5f to F8 have recently been measured with the stellar interferometer at Narrabri Observatory, and their absolute flux distributions have been found by combining observations of ultraviolet flux from the Orbiting Astronomical Observatory (OAO-2) with ground-based photometry. In this paper, these data have been combined to derive empirical effective temperatures and bolometric corrections for these 32 stars.
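
    The underlying relation is simple enough to state directly: the bolometric flux received at Earth, f, and the angular diameter, θ, give f = (θ/2)²σT_eff⁴, so T_eff = (4f/σθ²)^(1/4). A sketch with illustrative input values:

        # Empirical effective temperature from absolute flux and angular diameter.
        SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
        MAS = 4.8481368e-9              # one milliarcsecond in radians

        def t_eff(f_bol, theta_rad):
            # f_bol in W/m^2; theta_rad = limb-darkened angular diameter, radians
            return (4.0 * f_bol / (SIGMA * theta_rad**2)) ** 0.25

        print(round(t_eff(2.5e-8, 5.0 * MAS)))   # hypothetical star: ~7400 K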

  2. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    PubMed

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that the rates of decline in economic depreciation may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  3. Effects of incubation time and filtration method on Kd of indigenous selenium and iodine in temperate soils.

    PubMed

    Almahayni, T; Bailey, E; Crout, N M J; Shaw, G

    2017-10-01

    In this study, the effects of incubation time and the method of soil solution extraction and filtration on the empirical distribution coefficient (K_d) obtained by desorbing indigenous selenium (Se) and iodine (I) from arable and woodland soils under temperate conditions were investigated. Incubation time had a significant soil- and element-dependent effect on the K_d values, which tended to decrease with the incubation time. Generally, a four-week period was sufficient for the desorption K_d value to stabilise. Concurrent solubilisation of soil organic matter (OM) and release of organically-bound Se and I was probably responsible for the observed decrease in K_d with time. This contrasts with the conventional view of OM as a sink for Se and I in soils. Selenium and I K_d values were not significantly affected by the method of soil solution extraction and filtration. The results suggest that incubation time is a key criterion when selecting Se and I K_d values from the literature for risk assessments. Values derived from desorption of indigenous soil Se and I might be most appropriate for long-term assessments since they reflect the quasi-equilibrium state of their partitioning in soils. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. The interprofessional socialization and valuing scale: a tool for evaluating the shift toward collaborative care approaches in health care settings.

    PubMed

    King, Gillian; Shaw, Lynn; Orchard, Carole A; Miller, Stacy

    2010-01-01

    There is a need for tools by which to evaluate the beliefs, behaviors, and attitudes that underlie interprofessional socialization and collaborative practice in health care settings. This paper introduces the Interprofessional Socialization and Valuing Scale (ISVS), a 24-item self-report measure based on concepts in the interprofessional literature concerning shifts in beliefs, behaviors, and attitudes that underlie interprofessional socialization. The ISVS was designed to measure the degree to which transformative learning takes place, as evidenced by changed assumptions and worldviews, enhanced knowledge and skills concerning interprofessional collaborative teamwork, and shifts in values and identities. The scales of the ISVS were determined using principal components analysis, which revealed three scales accounting for approximately 49% of the variance in responses: (a) Self-Perceived Ability to Work with Others, (b) Value in Working with Others, and (c) Comfort in Working with Others. These empirically derived scales showed good fit with the conceptual basis of the measure. The ISVS provides insight into the abilities, values, and beliefs underlying socio-cultural aspects of collaborative and authentic interprofessional care in the workplace, and can be used to evaluate the impact of interprofessional education efforts, in-house team training, and workshops.

  5. The generalized 20/80 law using probabilistic fractals applied to petroleum field size

    USGS Publications Warehouse

    Crovelli, R.A.

    1995-01-01

    Fractal properties of the Pareto probability distribution are used to generalize "the 20/80 law." The 20/80 law is a heuristic law that has evolved over the years into the following rule of thumb for many populations: 20 percent of the population accounts for 80 percent of the total value. The general p100/q100 law in probabilistic form is defined with q as a function of p, where p is the population proportion and q is the proportion of total value. Using the Pareto distribution, the p100/q100 law in fractal form is derived with the parameter q being a fractal, where q unexpectedly possesses the scale invariance property. The 20/80 law is a special case of the p100/q100 law in fractal form. The p100/q100 law in fractal form is applied to petroleum fieldsize data to obtain p and q such that p100% of the oil fields greater than any specified scale or size in a geologic play account for q100% of the total oil of the fields. The theoretical percentages of total resources of oil using the fractal q are extremely close to the empirical percentages from the data using the statistic q. Also, the empirical scale invariance property of the statistic q for the petroleum fieldsize data is in excellent agreement with the theoretical scale invariance property of the fractal q. ?? 1995 Oxford University Press.
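
    For a Pareto distribution with shape parameter α > 1, the standard share result gives the p100/q100 law in closed form: the top fraction p of the population holds q = p^(1−1/α) of the total value. The sketch below assumes this reading and reproduces the 20/80 special case and its scale invariance.

        # p100/q100 law for a Pareto distribution: q = p**(1 - 1/alpha).
        import math

        def q_of_p(p, alpha):
            return p ** (1.0 - 1.0 / alpha)

        # Shape parameter that reproduces the classic 20/80 rule:
        alpha = 1.0 / (1.0 - math.log(0.8) / math.log(0.2))
        print(round(alpha, 3))                 # ~1.161
        print(round(q_of_p(0.2, alpha), 3))    # 0.8
        # Scale invariance: the top 20% of the top 20% holds 80% of its value,
        # so the top 4% of the whole holds 0.8**2 = 64% of the total.
        print(round(q_of_p(0.04, alpha), 3))   # 0.64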

  6. Identifying Thresholds for Ecosystem-Based Management

    PubMed Central

    Samhouri, Jameal F.; Levin, Phillip S.; Ainsworth, Cameron H.

    2010-01-01

    Background One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. Methodology/Principal Findings To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. Conclusions/Significance For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management. PMID:20126647

  7. The Interface between Research on Individual Difference Variables and Teaching Practice: The Case of Cognitive Factors and Personality

    ERIC Educational Resources Information Center

    Biedron, Adriana; Pawlak, Miroslaw

    2016-01-01

    While a substantial body of empirical evidence has been accrued about the role of individual differences in second language acquisition, relatively little is still known about how factors of this kind can mediate the effects of instructional practices as well as how empirically-derived insights can inform foreign language pedagogy, both with…

  8. Effects of Active Learning Classrooms on Student Learning: A Two-Year Empirical Investigation on Student Perceptions and Academic Performance

    ERIC Educational Resources Information Center

    Chiu, Pit Ho Patrio; Cheng, Shuk Han

    2017-01-01

    Recent studies on active learning classrooms (ACLs) have demonstrated their positive influence on student learning. However, most of the research evidence is derived from a few subject-specific courses or limited student enrolment. Empirical studies on this topic involving large student populations are rare. The present work involved a large-scale…

  9. An Empirical Method for Deriving Grade Equivalence for University Entrance Qualifications: An Application to A Levels and the International Baccalaureate

    ERIC Educational Resources Information Center

    Green, Francis; Vignoles, Anna

    2012-01-01

    We present a method to compare different qualifications for entry to higher education by studying students' subsequent performance. Using this method for students holding either the International Baccalaureate (IB) or A-levels gaining their degrees in 2010, we estimate an "empirical" equivalence scale between IB grade points and UCAS…

  10. Empirical relationship between leaf wax n-alkane δD and altitude in the Wuyi, Shennongjia and Tianshan Mountains, China: Implications for paleoaltimetry

    NASA Astrophysics Data System (ADS)

    Luo, Pan; Peng, Ping'an; Gleixner, Gerd; Zheng, Zhuo; Pang, Zhonghe; Ding, Zhongli

    2011-01-01

    Estimating past elevation not only provides evidence for vertical movements of the Earth's lithosphere, but also increases our understanding of interactions between tectonics, relief and climate in geological history. Development of biomarker hydrogen isotope-based paleoaltimetry techniques that can be applied to a wide range of sample types is therefore of continuing importance. Here we present leaf wax-derived n-alkane δD (δDwax) values along three soil altitudinal transects, at different latitudes, in the Wuyi, Shennongjia and Tianshan Mountains in China, to investigate δDwax gradients and the apparent fractionation between leaf wax and precipitation (εwax-p). We find that soil δDwax tracks altitudinal variations of precipitation δD along the three transects, which span variable environmental conditions and vertical vegetation spectra. An empirical δDwax-altitude relation is therefore established in which the average δDwax lapse rate of −2.27 ± 0.38‰/100 m is suitable for predicting relative paleoelevation change (relative uplift). The application of this empirical gradient is restricted to phases in the mountain uplift stage when the atmospheric circulation had not distinctly changed and to when the climate was not arid. An empirical δDwax-latitude-altitude formula is also calculated: δDwax = 3.483·LAT − 0.0227·ALT − 261.5, which gives a preliminary spatial distribution pattern of δDwax in modern China. The mean value of εwax-p in the extremely humid Wuyi Mountains is quite negative (−154‰) compared to the humid Shennongjia (−129‰) and the arid (but with abundant summer precipitation) Tianshan Mountains (−130‰), which suggests that aridity or water availability in the growing season is the primary factor controlling soil/sediment εwax-p. Along the Tianshan transects, values of εwax-p are inferred to be constant with altitude; along the Wuyi and Shennongjia transects, εwax-p is also constant at low-mid altitudes, but becomes slightly more negative at high altitudes, which could be attributed to overestimates of precipitation δD or to the vegetation shift to grass/conifer. Additionally, a reversal of the altitude effect in the vertical variation of δDwax was found in the alpine zone of the Tianshan Mountains, which might be caused by atmospheric circulation change with altitude. This implies that the paleo-circulation pattern and its changes should also be evaluated when stable isotope-based paleoaltimetry is applied.
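
    Applied to paleoaltimetry, the lapse rate above converts a measured δDwax shift directly into a relative elevation change; a sketch with hypothetical inputs:

        # Relative uplift from a delta-D_wax shift, using the empirical lapse
        # rate of -2.27 permil per 100 m reported above. Inputs are hypothetical.
        LAPSE_PER_M = -2.27 / 100.0     # permil per metre

        def relative_uplift_m(ddwax_late, ddwax_early):
            return (ddwax_late - ddwax_early) / LAPSE_PER_M

        # A -25 permil shift between two dated horizons implies ~1100 m of uplift.
        print(round(relative_uplift_m(-190.0, -165.0)))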

  11. Empirical determination of collimator scatter data for use in Radcalc commercial monitor unit calculation software: Implication for prostate volumetric modulated-arc therapy calculations.

    PubMed

    Richmond, Neil; Tulip, Rachael; Walker, Chris

    2016-01-01

    The aim of this work was to determine, by measurement and independent monitor unit (MU) check, the optimum method for determining collimator scatter for an Elekta Synergy linac with an Agility multileaf collimator (MLC) within Radcalc, a commercial MU calculation software package. The collimator scatter factors were measured for 13 field shapes defined by an Elekta Agility MLC on a Synergy linac with 6 MV photons. The value of the collimator scatter associated with each field was also calculated according to the equation Sc = Sc(mlc) + Sc(corr)·(Sc(open) − Sc(mlc)), with Sc(corr) varied between 0 and 1, where Sc(open) is the value of collimator scatter calculated from the rectangular collimator-defined field and Sc(mlc) the value using only the MLC-defined field shape by applying sector integration. From this the optimum value of the correction was determined as that which gives the minimum difference between measured and calculated Sc. Single-arc (simple fluence modulation) and dual-arc (complex fluence modulation) treatment plans were generated on the Monaco system for prostate volumetric modulated-arc therapy (VMAT) delivery. The planned MUs were verified by absolute dose measurement in phantom and by an independent MU calculation. The MU calculations were repeated with values of Sc(corr) between 0 and 1. The values of the correction yielding the minimum MU difference between the treatment planning system (TPS) and check MU were established. The empirically derived value of Sc(corr) giving the best fit to the measured collimator scatter factors was 0.49. This figure, however, was not found to be optimal for either the single- or dual-arc prostate VMAT plans, which required 0.80 and 0.34, respectively, to minimize the differences between the TPS and independent-check MU. Point dose measurement of the VMAT plans demonstrated that the TPS MUs were appropriate for the delivered dose. Although the value of Sc(corr) may be obtained by direct comparison of calculation with measurement, the efficacy of the value determined for VMAT-MU calculations is very much dependent on the complexity of the MLC delivery. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
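
    The interpolation itself is a one-liner; the sketch below encodes it with the fitted value reported above as the default, purely for illustration.

        # Collimator scatter interpolated between MLC-defined and jaw-defined
        # field values: Sc = Sc_mlc + Sc_corr * (Sc_open - Sc_mlc).
        # Sc_corr = 0.49 best fit the static-field measurements reported above;
        # the VMAT plans needed plan-class-specific values (0.80 / 0.34).
        def collimator_scatter(sc_mlc, sc_open, sc_corr=0.49):
            return sc_mlc + sc_corr * (sc_open - sc_mlc)

        print(round(collimator_scatter(0.985, 1.002), 4))  # hypothetical inputs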

  12. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).

  13. Improved Correction of IR Loss in Diffuse Shortwave Measurements: An ARM Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younkin, K; Long, CN

    Simple single black detector pyranometers, such as the Eppley Precision Spectral Pyranometer (PSP) used by the Atmospheric Radiation Measurement (ARM) Program, are known to lose energy via infrared (IR) emission to the sky. This is especially a problem when making clear-sky diffuse shortwave (SW) measurements, which are inherently of low magnitude and suffer the greatest IR loss. Dutton et al. (2001) proposed a technique using information from collocated pyrgeometers to help compensate for this IR loss. The technique uses an empirically derived relationship between the pyrgeometer detector data (and alternatively the detector data plus the difference between the pyrgeometer case and dome temperatures) and the nighttime pyranometer IR loss data. This relationship is then used to apply a correction to the diffuse SW data during daylight hours. We developed an ARM value-added product (VAP) called the SW DIFF CORR 1DUTT VAP to apply the Dutton et al. correction technique to ARM PSP diffuse SW measurements.

  14. A practical formula for estimating potential evaporation, consistent with the new international recommendations

    NASA Astrophysics Data System (ADS)

    Lhomme, J. P.; Monteny, B.

    1982-03-01

    This paper begins by recalling new concepts concerning evapotranspiration as specified by the round-table conference of Budapest in May 1977. The potential evaporation (EP) is now defined as the evaporation of a crop all of whose exchange surfaces (leaves, stalks, ...) are saturated, i.e., covered with a thin film of water. It can be calculated by a theoretical formula of Penman type. We give the reasons why it is useful to take grass potential evaporation (EPg) as the reference. The empirical relationships used in this case to estimate the net radiation and the aerodynamic component of the formula have been derived from measurements made in Ivory Coast (West Africa). The resulting relationship (8) gives the daily value of EPg in millimeters of water per day (mm/d). The values calculated by this formula are compared to measurements of grass maximal evapotranspiration (ETMg).

  15. Quantum centipedes: collective dynamics of interacting quantum walkers

    NASA Astrophysics Data System (ADS)

    Krapivsky, P. L.; Luck, J. M.; Mallick, K.

    2016-08-01

    We consider the quantum centipede made of N fermionic quantum walkers on the one-dimensional lattice interacting by means of the simplest of all hard-bound constraints: the distance between two consecutive fermions is either one or two lattice spacings. This composite quantum walker spreads ballistically, just as the simple quantum walk. However, because of the interactions between the internal degrees of freedom, the distribution of its center-of-mass velocity displays numerous ballistic fronts in the long-time limit, corresponding to singularities in the empirical velocity distribution. The spectrum of the centipede and the corresponding group velocities are analyzed by direct means for the first few values of N. Some analytical results are obtained for arbitrary N by exploiting an exact mapping of the problem onto a free-fermion system. We thus derive the maximal velocity describing the ballistic spreading of the two extremal fronts of the centipede wavefunction, including its non-trivial value in the large-N limit.

  16. Southeastern United States wood pellets as a global energy resource: a cradle-to-gate life cycle assessment derived from empirical data

    NASA Astrophysics Data System (ADS)

    Morrison, Brandon; Golden, Jay S.

    2018-02-01

    Given increased policies driving renewable electricity generation and insufficient local production of woody biomass, many countries are reliant upon the importation of wood pellets. Of current wood pellet exports, the vast majority originates from the Southeastern United States (US). In this paper we present results from a cradle-to-gate, attributional process life cycle assessment in which two production scenarios of wood pellets were modelled for the Southeastern US: one utilising roundwood from a silviculture operation and the other utilising sawmill residues. The system boundary includes all steps from harvesting of the wood biomass, through delivery of the finished wood pellets to a US port facility. For each of the impact categories assessed, wood pellets from sawmill residues resulted in higher values, ranging from 5% to 31%. In relation to Global Warming Potential, roundwood pellets resulted in a 13-21% lower value than pellets produced from sawmill residues, depending upon the allocation method.

  17. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents, measured at a series of these electrodes and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation take only minutes to perform. Deviation of the calculated FDM concentrations from true values was reduced to less than 0.5% when empirically derived values of U were employed.

  18. A comparison of methods for determining hydraulic conductivity

    NASA Astrophysics Data System (ADS)

    Storz, Katharina; Steger, Hagen; Wagner, Valentin; Bayer, Peter; Blum, Philipp

    2017-06-01

    Knowing the hydraulic conductivity (K) is a precondition for understanding groundwater flow processes in the subsurface. Numerous laboratory and field methods for determining hydraulic conductivity exist, and they can lead to significantly different results. In order to quantify the variability across these methods, the hydraulic conductivity of an industrial silica sand (Dorsilit) was examined using four different methods: (1) grain-size analysis, (2) the Kozeny-Carman approach, (3) permeameter tests and (4) flow rate experiments in large-scale tank experiments. Owing to the large volume of the artificially built aquifer, the tank experiment results are assumed to be the most representative. Hydraulic conductivity values derived from permeameter tests show only minor deviation, while results of the empirically evaluated grain-size analyses are about one order of magnitude higher and show great variance. The latter was confirmed by analysing several methods for the determination of K-values found in the literature; we therefore generally question the suitability of grain-size analyses and strongly recommend the use of permeameter tests.
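
    As a concrete instance of method (2), the Kozeny-Carman estimate can be written in a few lines; the grain diameter and porosity below are illustrative, not the Dorsilit values.

        # Kozeny-Carman estimate of hydraulic conductivity K (m/s):
        # K = (rho*g/mu) * (d**2/180) * n**3/(1-n)**2, water at ~20 degC.
        def kozeny_carman_K(d_m, n, rho=998.0, g=9.81, mu=1.0e-3):
            permeability = (d_m**2 / 180.0) * n**3 / (1.0 - n)**2   # m^2
            return permeability * rho * g / mu

        # Effective grain diameter 0.2 mm, porosity 0.38 -> K ~ 3e-4 m/s
        print(f"{kozeny_carman_K(2.0e-4, 0.38):.2e}")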

  19. The relevance of data on physicians and disability on the right to assisted suicide: can empirical studies resolve the issue?

    PubMed

    Batavia, A I

    2000-06-01

    Opponents of a right to physician-assisted suicide rely heavily on the results of several empirical studies, particularly data concerning physicians and other health professionals. This commentary concludes that values, not empirical data, must ultimately determine the legality of assisted suicide. Studies cannot resolve the fundamental issue.

  20. Large-scale compensation of errors in pairwise-additive empirical force fields: comparison of AMBER intermolecular terms with rigorous DFT-SAPT calculations.

    PubMed

    Zgarbová, Marie; Otyepka, Michal; Sponer, Jirí; Hobza, Pavel; Jurecka, Petr

    2010-09-21

    The intermolecular interaction energy components for several molecular complexes were calculated using force fields available in the AMBER suite of programs and compared with Density Functional Theory-Symmetry Adapted Perturbation Theory (DFT-SAPT) values. The extent to which such comparison is meaningful is discussed. The comparability is shown to depend strongly on the intermolecular distance, which means that comparisons made at one distance only are of limited value. At large distances the coulombic and van der Waals 1/r⁶ empirical terms correspond fairly well with the DFT-SAPT electrostatics and dispersion terms, respectively. At the onset of electronic overlap the empirical values deviate from the reference values considerably. However, the errors in the force fields tend to cancel out in a systematic manner at equilibrium distances. Thus, the overall performance of the force fields displays errors an order of magnitude smaller than those of the individual interaction energy components. The repulsive 1/r¹² component of the van der Waals expression seems to be responsible for a significant part of the deviation of the force field results from the reference values. We suggest that further improvement of the force fields for intermolecular interactions would require replacement of the nonphysical 1/r¹² term by an exponential function. Dispersion anisotropy and its effects are discussed. Our analysis is intended to show that although comparing the empirical and non-empirical interaction energy components is in general problematic, it might bring insights useful for the construction of new force fields. Our results are relevant to often performed force-field-based interaction energy decompositions.
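
    To make the suggested replacement concrete, the sketch below contrasts the Lennard-Jones 1/r¹² repulsion with an exponential (Born-Mayer) form; the parameters are rough, argon-like placeholders rather than fitted values.

        # Compare LJ 1/r**12 repulsion with an exponential alternative.
        import math

        def lj_repulsion(r, epsilon=0.996, sigma=3.40):   # kJ/mol, angstrom
            return 4.0 * epsilon * (sigma / r) ** 12

        def exp_repulsion(r, A=4.0e5, b=3.6):             # kJ/mol, 1/angstrom
            return A * math.exp(-b * r)

        for r in (3.0, 3.4, 4.0):   # distances around the LJ minimum region
            print(r, round(lj_repulsion(r), 2), round(exp_repulsion(r), 2))

    The exponential form rises more gently at short range; the steeper 1/r¹² wall is the nonphysical behavior criticized above.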

  1. Modeling of pickup ion distributions in the Halley cometosheath: Empirical limits on rates of ionization, diffusion, loss and creation of fast neutral atoms

    NASA Technical Reports Server (NTRS)

    Huddleston, D. E.; Neugebauer, M.; Goldstein, B. E.

    1994-01-01

    The shape of the velocity distribution of water group ions observed by the Giotto ion mass spectrometer on its approach to comet Halley is modeled to derive empirical values for the rates of ionization, energy diffusion, and loss in the midcometosheath. The model includes the effect of rapid pitch angle scattering into a bispherical shell distribution as well as the effect of the magnetization of the plasma on the charge exchange loss rate. It is found that the average rate of ionization of cometary neutrals in this region of the cometosheath appears to be of the order of a factor 3 faster than the 'standard' rates of approximately 1 × 10⁻⁶ s⁻¹ that are generally assumed to model the observations in most regions of the comet environment. For the region of the coma studied in the present work (approximately 1-2 × 10⁵ km from the nucleus), the inferred energy diffusion coefficient is D₀ ≈ 0.0002 to 0.0005 km² s⁻³, which is generally lower than values used in other models. The empirically obtained loss rate appears to be about an order of magnitude greater than can be explained by charge exchange with the 'standard' cross section of approximately 2 × 10⁻¹⁵ cm². However, such cross sections are not well known and, for water group ion/water group neutral interactions, cross sections as high as 8 × 10⁻¹⁵ cm² have previously been suggested in the literature. Assuming the entire loss rate is due to charge exchange yields a rate of creation of fast neutral atoms of the order of 10⁻⁴ s⁻¹ or higher, depending on the level of velocity diffusion. The fast neutrals may, in turn, be partly responsible for the higher-than-expected ionization rate.

  2. Accurate ab initio dipole moment surfaces of ozone: First principle intensity predictions for rotationally resolved spectra in a large range of overtone and combination bands.

    PubMed

    Tyuterev, Vladimir G; Kochanov, Roman V; Tashkun, Sergey A

    2017-02-14

    Ab initio dipole moment surfaces (DMSs) of the ozone molecule are computed using the MRCI-SD method with AVQZ, AV5Z, and VQZ-F12 basis sets on a dense grid of about 1950 geometrical configurations. The analytical DMS representation used for the fit of ab initio points provides better behavior for large nuclear displacements than that of previous studies. Various DMS models were derived and tested. Vibration-rotation line intensities of ¹⁶O₃ were calculated from these ab initio surfaces by the variational method using two different potential functions determined in our previous works. For the first time, a very good agreement of first principle calculations with the experiment was obtained for the line-by-line intensities in rotationally resolved ozone spectra in a large far- and mid-infrared range. This includes high overtone and combination bands up to ΔV = 6. A particular challenge was a correct description of the B-type bands (even ΔV₃ values) that represented major difficulties for the previous ab initio investigations and for the empirical spectroscopic models. The major patterns of various B-type bands were correctly described without empirically adjusted dipole moment parameters. For the 10 μm range, which is of key importance for atmospheric ozone retrievals, our ab initio intensity results are within the experimental error margins. The theoretical values for the strongest lines of the ν₃ band lie in general between two successive versions of the HITRAN (HIgh-resolution molecular TRANsmission) empirical database that corresponded to the most extended available sets of observations. The overall qualitative agreement in a large wavenumber range for rotationally resolved cold and hot ozone bands up to about 6000 cm⁻¹ is achieved here for the first time. These calculations reveal that several weak bands are still missing from available spectroscopic databases.

  3. A steady state model of agricultural waste pyrolysis: A mini review.

    PubMed

    Trninić, M; Jovović, A; Stojiljković, D

    2016-09-01

    Agricultural waste is one of the main renewable energy resources available, especially in an agricultural country such as Serbia. Pyrolysis has already been considered as an attractive alternative for disposal of agricultural waste, since the technique can convert this special biomass resource into granular charcoal, non-condensable gases and pyrolysis oils, which could furnish profitable energy and chemical products owing to their high calorific value. In this regard, the development of thermochemical processes requires a good understanding of pyrolysis mechanisms. Experimental and some literature data on the pyrolysis characteristics of corn cob and several other agricultural residues under inert atmosphere were structured and analysed in order to obtain conversion behaviour patterns of agricultural residues during pyrolysis within the temperature range from 300 °C to 1000 °C. Based on experimental and literature data analysis, empirical relationships were derived, including relations between the temperature of the process and yields of charcoal, tar and gas (CO2, CO, H2 and CH4). An analytical semi-empirical model was then used as a tool to analyse the general trends of biomass pyrolysis. Although this semi-empirical model needs further refinement before application to all types of biomass, its prediction capability was in good agreement with results obtained by the literature review. The compact representation could be used in other applications, to conveniently extrapolate and interpolate these results to other temperatures and biomass types. © The Author(s) 2016.

  4. Leading change: a concept analysis.

    PubMed

    Nelson-Brantley, Heather V; Ford, Debra J

    2017-04-01

    To report an analysis of the concept of leading change. Nurses have been called to lead change to advance the health of individuals, populations, and systems. Conceptual clarity about leading change in the context of nursing and healthcare systems provides an empirical direction for future research and theory development that can advance the science of leadership studies in nursing. Concept analysis. CINAHL, PubMed, PsycINFO, Psychology and Behavioral Sciences Collection, Health Business Elite and Business Source Premier databases were searched using the terms: leading change, transformation, reform, leadership and change. Literature published in English from 2001 - 2015 in the fields of nursing, medicine, organizational studies, business, education, psychology or sociology were included. Walker and Avant's method was used to identify descriptions, antecedents, consequences and empirical referents of the concept. Model, related and contrary cases were developed. Five defining attributes of leading change were identified: (a) individual and collective leadership; (b) operational support; (c) fostering relationships; (d) organizational learning; and (e) balance. Antecedents were external or internal driving forces and organizational readiness. The consequences of leading change included improved organizational performance and outcomes and new organizational culture and values. A theoretical definition and conceptual model of leading change were developed. Future studies that use and test the model may contribute to the refinement of a middle-range theory to advance nursing leadership research and education. From this, empirically derived interventions that prepare and enable nurses to lead change to advance health may be realized. © 2016 John Wiley & Sons Ltd.

  5. Correction of Single Frequency Altimeter Measurements for Ionosphere Delay

    NASA Technical Reports Server (NTRS)

    Schreiner, William S.; Markin, Robert E.; Born, George H.

    1997-01-01

    This study is a preliminary analysis of the accuracy of various ionosphere models used to correct single frequency altimeter height measurements for ionospheric path delay. In particular, the research focused on adjusting empirical and parameterized ionosphere models in the parameterized real-time ionospheric specification model (PRISM) 1.2 using total electron content (TEC) data from the global positioning system (GPS). The types of GPS data used to adjust PRISM included GPS line-of-sight (LOS) TEC data mapped to the vertical, and a grid of GPS-derived TEC data in a sun-fixed longitude frame. The adjusted PRISM TEC values, as well as predictions by IRI-90, a climatological model, were compared to TOPEX/Poseidon (T/P) TEC measurements from the dual-frequency altimeter for a number of T/P tracks. When adjusted with GPS LOS data, the PRISM empirical model predicted TEC over twenty-four 1-h data sets for a given local time to within a global error of 8.60 TECU rms during a midnight-centered ionosphere and 9.74 TECU rms during a noon-centered ionosphere. Using GPS-derived sun-fixed TEC data, the PRISM parameterized model predicted TEC within an error of 8.47 TECU rms centered at midnight and 12.83 TECU rms centered at noon. From these best results, it is clear that the proposed requirement of 3-4 TECU global rms for TOPEX/Poseidon Follow-On will be very difficult to meet, even with a substantial increase in the number of GPS ground stations, with any realizable combination of the aforementioned models or data assimilation schemes.
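
    For scale, the first-order ionospheric range correction is delay (m) = 40.3·TEC/f², with TEC in electrons/m² and f in Hz, so the TECU errors above translate directly into altimeter height error; a sketch:

        # First-order ionospheric range delay for a single-frequency altimeter.
        # 1 TECU = 1e16 electrons/m^2; TOPEX Ku band is ~13.6 GHz.
        def iono_delay_m(tec_tecu, freq_hz=13.6e9):
            return 40.3 * (tec_tecu * 1e16) / freq_hz**2

        print(round(iono_delay_m(10.0) * 100, 2), "cm")  # ~2.2 cm per 10 TECU

    At Ku band, the ~8-13 TECU rms model errors quoted above therefore correspond to roughly 2-3 cm of height error, against a sub-centimeter (3-4 TECU) requirement.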

  6. Advances in the simulation and automated measurement of well-sorted granular material: 2. Direct measures of particle properties

    USGS Publications Warehouse

    Buscombe, Daniel D.; Rubin, David M.

    2012-01-01

    In this, the second of a pair of papers on the structure of well-sorted natural granular material (sediment), new methods are described for automated measurements from images of sediment, of: 1) particle-size standard deviation (arithmetic sorting) with and without apparent void fraction; and 2) mean particle size in material with void fraction. A variety of simulations of granular material are used for testing purposes, in addition to images of natural sediment. Simulations are also used to establish that the effects on automated particle sizing of grains visible through the interstices of the grains at the very surface of a granular material continue to a depth of approximately 4 grain diameters and that this is independent of mean particle size. Ensemble root-mean squared error between observed and estimated arithmetic sorting coefficients for 262 images of natural silts, sands and gravels (drawn from 8 populations) is 31%, which reduces to 27% if adjusted for bias (slope correction between observed and estimated values). These methods allow non-intrusive and fully automated measurements of surfaces of unconsolidated granular material. With no tunable parameters or empirically derived coefficients, they should be broadly universal in appropriate applications. However, empirical corrections may need to be applied for the most accurate results. Finally, analytical formulas are derived for the one-step pore-particle transition probability matrix, estimated from the image's autocorrelogram, from which void fraction of a section of granular material can be estimated directly. This model gives excellent predictions of bulk void fraction yet imperfect predictions of pore-particle transitions.

  7. Nucleon form factors in dispersively improved chiral effective field theory. II. Electromagnetic form factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alarcon, J. M.; Weiss, C.

    We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining Chiral Effective Field Theory (χEFT) and dispersion analysis. The spectral functions on the two-pion cut at t > 4M_π² are constructed using the elastic unitarity relation and an N/D representation. χEFT is used to calculate the real functions J±¹(t) = f±¹(t)/F_π(t) (ratios of the complex ππ → NN̄ partial-wave amplitudes and the timelike pion FF), which are free of ππ rescattering. Rescattering effects are included through the empirical timelike pion FF |F_π(t)|². The method allows us to compute the isovector EM spectral functions up to t ~ 1 GeV² with controlled accuracy (LO, NLO, and partial N2LO). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at t = 0 (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives with minimal uncertainties and explain their collective behavior. Finally, we estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-Q² FF data is achieved up to ~0.5 GeV² for G_E, and up to ~0.2 GeV² for G_M. Our results can be used to guide the analysis of low-Q² elastic scattering data and the extraction of the proton charge radius.

  8. Advances in the simulation and automated measurement of well-sorted granular material: 2. Direct measures of particle properties

    NASA Astrophysics Data System (ADS)

    Buscombe, D.; Rubin, D. M.

    2012-06-01

    In this, the second of a pair of papers on the structure of well-sorted natural granular material (sediment), new methods are described for automated measurements from images of sediment, of: 1) particle-size standard deviation (arithmetic sorting) with and without apparent void fraction; and 2) mean particle size in material with void fraction. A variety of simulations of granular material are used for testing purposes, in addition to images of natural sediment. Simulations are also used to establish that the effects on automated particle sizing of grains visible through the interstices of the grains at the very surface of a granular material continue to a depth of approximately 4 grain diameters and that this is independent of mean particle size. Ensemble root-mean squared error between observed and estimated arithmetic sorting coefficients for 262 images of natural silts, sands and gravels (drawn from 8 populations) is 31%, which reduces to 27% if adjusted for bias (slope correction between observed and estimated values). These methods allow non-intrusive and fully automated measurements of surfaces of unconsolidated granular material. With no tunable parameters or empirically derived coefficients, they should be broadly universal in appropriate applications. However, empirical corrections may need to be applied for the most accurate results. Finally, analytical formulas are derived for the one-step pore-particle transition probability matrix, estimated from the image's autocorrelogram, from which void fraction of a section of granular material can be estimated directly. This model gives excellent predictions of bulk void fraction yet imperfect predictions of pore-particle transitions.

  9. Nucleon form factors in dispersively improved chiral effective field theory. II. Electromagnetic form factors

    DOE PAGES

    Alarcon, J. M.; Weiss, C.

    2018-05-08

    We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining Chiral Effective Field Theory (χEFT) and dispersion analysis. The spectral functions on the two-pion cut at t > 4M_π² are constructed using the elastic unitarity relation and an N/D representation. χEFT is used to calculate the real functions J±¹(t) = f±¹(t)/F_π(t) (ratios of the complex ππ → NN̄ partial-wave amplitudes and the timelike pion FF), which are free of ππ rescattering. Rescattering effects are included through the empirical timelike pion FF |F_π(t)|². The method allows us to compute the isovector EM spectral functions up to t ~ 1 GeV² with controlled accuracy (LO, NLO, and partial N2LO). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at t = 0 (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives with minimal uncertainties and explain their collective behavior. Finally, we estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-Q² FF data is achieved up to ~0.5 GeV² for G_E, and up to ~0.2 GeV² for G_M. Our results can be used to guide the analysis of low-Q² elastic scattering data and the extraction of the proton charge radius.

  10. ON THE THREE-DIMENSIONAL STRUCTURE OF THE MASS, METALLICITY, AND STAR FORMATION RATE SPACE FOR STAR-FORMING GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lara-Lopez, Maritza A.; Lopez-Sanchez, Angel R.; Hopkins, Andrew M., E-mail: mlopez@aao.gov.au

    2013-02-20

    We demonstrate that the space formed by the star formation rate (SFR), gas-phase metallicity (Z), and stellar mass (M_*) can be reduced to a plane, as first proposed by Lara-Lopez et al. We study three different approaches to find the best representation of this 3D space, using a principal component analysis (PCA), a regression fit, and binning of the data. The PCA shows that this 3D space can be adequately represented in only two dimensions, i.e., a plane. We find that the plane that minimizes the χ² for all variables, and hence provides the best representation of the data, corresponds to a regression fit to the stellar mass as a function of SFR and Z, M_* = f(Z, SFR). We find that the distribution resulting from the median values in bins for our data gives the highest χ². We also show that the empirical calibrations to the oxygen abundance used to derive the Fundamental Metallicity Relation have important limitations, which contribute to the apparent inconsistencies. The main problem is that these empirical calibrations do not consider the ionization degree of the gas. Furthermore, the use of the N2 index to estimate oxygen abundances cannot be applied for 12 + log(O/H) ≳ 8.8 because of the saturation of the [N II] λ6584 line in the high-metallicity regime. Finally, we provide an update of the Fundamental Plane derived by Lara-Lopez et al.

  11. Deriving local demand for stumpage from estimates of regional supply and demand.

    Treesearch

    Kent P. Connaughton; Gerard A. Majerus; David H. Jackson

    1989-01-01

    The local (Forest-level or local-area) demand for stumpage can be derived from estimates of regional supply and demand. The derivation of local demand is justified when the local timber economy is similar to the regional timber economy; a simple regression of local on nonlocal prices can be used as an empirical test of similarity between local and regional economies....

  12. Empirical Characterization of Low-Altitude Ion Flux Derived from TWINS

    NASA Astrophysics Data System (ADS)

    Goldstein, J.; LLera, K.; McComas, D. J.; Redfern, J.; Valek, P. W.

    2018-05-01

    In this study we analyze ion differential flux from 10 events between 2008 and 2015. The ion fluxes are derived from low-altitude emissions (LAEs) in energetic neutral atom (ENA) images obtained by the Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS). The data set comprises 119.44 hr of observations, including 4,284 per-energy images with 128,277 values of differential ENA flux from pixels near Earth's limb. Limb pixel data are extracted and mapped to a common polar ionospheric grid and associated with values of the Dst index. Statistical analysis is restricted to pixels within 10% of the LAE emissivity peak. For weak Dst conditions we find a premidnight peak in the average ion precipitation, whose flux and location are relatively insensitive to energy. For moderate Dst, elevated flux levels appear over a wider magnetic local time (MLT) range, with a separation of peak locations by energy. Strong disturbances bring a dramatic flux increase across the entire nightside at all energies, strongest for low energies in the postmidnight sector. The arrival of low-energy ions can lower the average energy for strong Dst, even as it raises the total integral number flux. TWINS-derived ion fluxes provide a macroscale measurement of the average precipitating ion distribution and confirm that convection, either quasi-steady or bursty, is an important process controlling the spatial and spectral properties of precipitating ions. The premidnight peak (weak Dst), MLT widening and energy-versus-MLT dependence (moderate Dst), and postmidnight low-energy ion enhancement (strong Dst) are consistent with observations and models of steady or bursty convective transport.

  13. Moments of inertia for neutron and strange stars: Limits derived for the Crab pulsar

    NASA Astrophysics Data System (ADS)

    Bejger, M.; Haensel, P.

    2002-12-01

    Recent estimates of the properties of the Crab nebula are used to derive constraints on the moment of inertia, mass and radius of the pulsar. To this purpose, we employ an approximate formula combining these three parameters. Our "empirical formula" I ≈ a(x) M R², where x = (M/Msun)(km/R), is based on numerical results obtained for thirty theoretical equations of state of dense matter. The functions a(x) for neutron stars and strange stars are qualitatively different. For neutron stars, a_NS(x) = x/(0.1 + 2x) for x ≤ 0.1 (valid for M > 0.2 Msun) and a_NS(x) = (2/9)(1 + 5x) for x > 0.1. For strange stars, a_SS(x) = (2/5)(1 + x) (not valid for strange stars with crust and M < 0.1 Msun). We also obtain an approximate expression for the maximum moment of inertia, I_max,45 ≈ (-0.37 + 7.12 x_max)(M_max/Msun)(R_Mmax/10 km)², where I_45 = I/(10^45 g cm²), valid for both neutron stars and strange stars. Applying our formulae to the evaluated values of I_Crab, we derive constraints on the mass and radius of the pulsar. A very conservative evaluation of the expanding nebula mass, M_neb = 2 Msun, yields M_Crab > 1.2 Msun and R_Crab = 10-14 km. Setting the most recent evaluation ("central value") M_neb = 4.6 Msun rules out most of the existing equations of state, leaving only the stiffest ones: M_Crab > 1.9 Msun, R_Crab = 14-15 km.
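    A worked example of the empirical formula quoted above; the choice of a canonical 1.4 Msun, 10 km neutron star is an illustrative assumption, not a value from the paper.

        MSUN_G = 1.989e33  # solar mass in grams

        def a_ns(x):
            # Neutron-star coefficient; x = (M/Msun)(km/R).
            return x / (0.1 + 2.0 * x) if x <= 0.1 else (2.0 / 9.0) * (1.0 + 5.0 * x)

        def a_ss(x):
            # Bare strange-star coefficient (M > 0.1 Msun, no crust).
            return (2.0 / 5.0) * (1.0 + x)

        def moment_of_inertia(m_msun, r_km, kind="ns"):
            # I ~= a(x) M R^2, returned in g cm^2.
            x = m_msun / r_km
            a = a_ns(x) if kind == "ns" else a_ss(x)
            return a * (m_msun * MSUN_G) * (r_km * 1e5) ** 2

        # Canonical 1.4 Msun, 10 km neutron star: I ~= 1.05e45 g cm^2.
        print("I = %.3g g cm^2" % moment_of_inertia(1.4, 10.0))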

  14. An Empirical Human Controller Model for Preview Tracking Tasks.

    PubMed

    van der El, Kasper; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus Rene M; Mulder, Max

    2016-11-01

    Real-life tracking tasks often show preview information to the human controller about the future track to follow. The effect of preview on manual control behavior is still relatively unknown. This paper proposes a generic operator model for preview tracking, empirically derived from experimental measurements. Conditions included pursuit tracking, i.e., without preview information, and tracking with 1 s of preview. Controlled element dynamics varied between gain, single integrator, and double integrator. The model is derived in the frequency domain, after application of a black-box system identification method based on Fourier coefficients. Parameter estimates are obtained to assess the validity of the model in both the time domain and frequency domain. Measured behavior in all evaluated conditions can be captured with the commonly used quasi-linear operator model for compensatory tracking, extended with two viewpoints of the previewed target. The derived model provides new insights into how human operators use preview information in tracking tasks.

  15. Modeling the near-ultraviolet band of GK stars. III. Dependence on abundance pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Short, C. Ian; Campbell, Eamonn A., E-mail: ishort@ap.smu.ca

    2013-06-01

    We extend the grid of non-LTE (NLTE) models presented in Paper II to explore variations in abundance pattern in two ways: (1) the adoption of the Asplund et al. (GASS10) abundances, (2) for stars of metallicity, [M/H], of -0.5, the adoption of a non-solar enhancement of α-elements by +0.3 dex. Moreover, our grid of synthetic spectral energy distributions (SEDs) is interpolated to a finer numerical resolution in both T_eff (ΔT_eff = 25 K) and log g (Δlog g = 0.25). We compare the values of T_eff and log g inferred from fitting LTE and NLTE SEDs to observed SEDs throughout the entire visible band, and in an ad hoc 'blue' band. We compare our spectrophotometrically derived T_eff values to a variety of T_eff calibrations, including more empirical ones, drawn from the literature. For stars of solar metallicity, we find that the adoption of the GASS10 abundances lowers the inferred T_eff value by 25-50 K for late-type giants, and NLTE models computed with the GASS10 abundances give T_eff results that are marginally in better agreement with other T_eff calibrations. For stars of [M/H] = -0.5 there is marginal evidence that adoption of α-enhancement further lowers the derived T_eff value by 50 K. Stellar parameters inferred from fitting NLTE models to SEDs are more dependent than LTE models on the wavelength region being fitted, and we find that the effect depends on how heavily line blanketed the fitting region is, whether the fitting region is to the blue of the Wien peak of the star's SED, or both.

  16. Manipulating the Gradient

    ERIC Educational Resources Information Center

    Gaze, Eric C.

    2005-01-01

    We introduce a cooperative learning, group lab for a Calculus III course to facilitate comprehension of the gradient vector and directional derivative concepts. The lab is a hands-on experience allowing students to manipulate a tangent plane and empirically measure the effect of partial derivatives on the direction of optimal ascent. (Contains 7…

  17. Learners with Dyslexia: Exploring Their Experiences with Different Online Reading Affordances

    ERIC Educational Resources Information Center

    Chen, Chwen Jen; Keong, Melissa Wei Yin; Teh, Chee Siong; Chuah, Kee Man

    2015-01-01

    To date, empirically derived guidelines for designing accessible online learning environments for learners with dyslexia are still scarce. This study aims to explore the learning experience of learners with dyslexia when reading passages using different online reading affordances to derive some guidelines for dyslexia-friendly online text. The…

  18. The amount effect and marginal value.

    PubMed

    Rachlin, Howard; Arfer, Kodi B; Safin, Vasiliy; Yen, Ming

    2015-07-01

    The amount effect of delay discounting (by which the value of larger reward amounts is discounted by delay at a lower rate than that of smaller amounts) strictly implies that value functions (value as a function of amount) are steeper at greater delays than they are at lesser delays. That is, the amount effect and the difference in value functions at different delays are actually a single empirical finding. Amount effects of delay discounting are typically found with choice experiments. Value functions for immediate rewards have been empirically obtained by direct judgment. (Value functions for delayed rewards have not been previously obtained.) The present experiment obtained value functions for both immediate and delayed rewards by direct judgment and found them to be steeper when the rewards were delayed--hence, finding an amount effect with delay discounting. © Society for the Experimental Analysis of Behavior.

  19. An Evaluation of Empirical Bayes' Estimation of Value- Added Teacher Performance Measures. Working Paper #31. Revised

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul; Wooldridge, Jeffrey M.

    2014-01-01

    Empirical Bayes' (EB) estimation is a widely used procedure to calculate teacher value-added. It is primarily viewed as a way to make imprecise estimates more reliable. In this paper we review the theory of EB estimation and use simulated data to study its ability to properly rank teachers. We compare the performance of EB estimators with that of…

  20. Effects of polarons on static polarizabilities and second order hyperpolarizabilities of conjugated polymers

    NASA Astrophysics Data System (ADS)

    Wang, Ya-Dong; Meng, Yan; Di, Bing; Wang, Shu-Ling; An, Zhong

    2010-12-01

    According to the one-dimensional tight-binding Su-Schrieffer-Heeger model, we have investigated the effects of charged polarons on the static polarizability, αxx, and the second order hyperpolarizabilities, γxxxx, of conjugated polymers. Our results are qualitatively consistent with previous ab initio and semi-empirical calculations. The origin of the universal growth is discussed using a local-view formalism that is based on the local atomic charge derivatives. Furthermore, combining the Su-Schrieffer-Heeger model and the extended Hubbard model, we have investigated systematically the effects of electron-electron interactions on αxx and γxxxx of charged polymer chains. For a fixed value of the nearest-neighbour interaction V, the values of αxx and γxxxx increase as the on-site Coulomb interaction U increases for U < Uc and decrease with U for U > Uc, where Uc is a critical value of U at which the static polarizability or the second order hyperpolarizability reaches a maximal value of αmax or γmax. It is found that the effect of the e-e interaction on the value of αxx is dependent on the ratio between U and V for either a short or a long charged polymer, whereas the effect on the value of γxxxx is sensitive both to the ratio of U to V and to the size of the molecule.

  1. On the impact of helium abundance on the Cepheid period-luminosity and Wesenheit relations and the distance ladder

    NASA Astrophysics Data System (ADS)

    Carini, R.; Brocato, E.; Raimondo, G.; Marconi, M.

    2017-08-01

    This work analyses the effect of the helium content on synthetic period-luminosity relations (PLRs) and period-Wesenheit relations (PWRs) of Cepheids and the systematic uncertainties on the derived distances that a hidden population of He-enhanced Cepheids may generate. We use new stellar and pulsation models to build a homogeneous and consistent framework to derive the Cepheid features. The Cepheid populations expected in synthetic colour-magnitude diagrams of young stellar systems (from 20 to 250 Myr) are computed in several photometric bands for Y = 0.25 and 0.35, at a fixed metallicity (Z = 0.008). The PLRs appear to be very similar in the two cases, with negligible effects (a few per cent) on distances, while the PWRs differ somewhat, with systematic uncertainties in deriving distances as high as ~7 per cent at log P < 1.5. Statistical effects due to the number of variables used to determine the relations contribute a distance systematic error of the order of a few per cent, with values decreasing from optical to near-infrared bands. The empirical PWRs derived from multiwavelength data sets for the Large Magellanic Cloud (LMC) are in very good agreement with our theoretical PWRs obtained with a standard He content, supporting the evidence that LMC Cepheids do not show any He effect.

  2. Bias corrections of GOSAT SWIR XCO2 and XCH4 with TCCON data and their evaluation using aircraft measurement data

    DOE PAGES

    Inoue, Makoto; Morino, Isamu; Uchino, Osamu; ...

    2016-08-01

    We describe a method for removing systematic biases of column-averaged dry air mole fractions of CO2 (XCO2) and CH4 (XCH4) derived from short-wavelength infrared (SWIR) spectra of the Greenhouse gases Observing SATellite (GOSAT). We conduct correlation analyses between the GOSAT biases and simultaneously retrieved auxiliary parameters. We use these correlations to bias correct the GOSAT data, removing these spurious correlations. Data from the Total Carbon Column Observing Network (TCCON) were used as reference values for this regression analysis. To evaluate the effectiveness of this correction method, the uncorrected/corrected GOSAT data were compared to independent XCO2 and XCH4 data derived from aircraft measurements taken for the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the Japan Meteorological Agency (JMA), the HIAPER Pole-to-Pole observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. These comparisons demonstrate that the empirically derived bias correction improves the agreement between GOSAT XCO2/XCH4 and the aircraft data. Finally, we present spatial distributions and temporal variations of the derived GOSAT biases.
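    A hedged sketch of the kind of regression-based bias correction described above, using made-up stand-in data in place of the GOSAT/TCCON retrievals: regress the satellite-minus-reference differences on the retrieved auxiliary parameters, then subtract the fitted bias.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 400
        aux = rng.normal(size=(n, 3))                 # retrieved auxiliary parameters (assumed)
        xco2_true = 400 + rng.normal(0, 1, n)         # reference (TCCON-like) values [ppm]
        bias = 0.8 * aux[:, 0] - 0.5 * aux[:, 2]      # spurious correlation with aux params
        xco2_sat = xco2_true + bias + rng.normal(0, 0.3, n)

        # Fit the differences against the auxiliary parameters...
        A = np.column_stack([np.ones(n), aux])
        beta, *_ = np.linalg.lstsq(A, xco2_sat - xco2_true, rcond=None)

        # ...and subtract the fitted bias from the retrievals.
        xco2_corrected = xco2_sat - A @ beta
        print("rms before: %.2f ppm, after: %.2f ppm"
              % (np.std(xco2_sat - xco2_true), np.std(xco2_corrected - xco2_true)))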

  4. Estimating the effects of 17α-ethinylestradiol on stochastic population growth rate of fathead minnows: a population synthesis of empirically derived vital rates

    USGS Publications Warehouse

    Schwindt, Adam R.; Winkelman, Dana L.

    2016-01-01

    Urban freshwater streams in arid climates are wastewater effluent dominated ecosystems particularly impacted by bioactive chemicals including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L⁻¹ and produced stochastic population growth rates (λS) below 1 at the lowest concentration, indicating potential for population decline. Declines in λS compared to controls were evident in treatments that were lethal to adult males despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λS was most sensitive to the survival of juveniles and female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
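    The stochastic growth rate λS in such stage-structured models is typically computed as the long-run average one-step log growth of the stage vector under randomly drawn projection matrices. A minimal sketch, with a hypothetical three-stage fish life cycle and invented vital-rate distributions (not the paper's mesocosm-derived estimates):

        import numpy as np

        rng = np.random.default_rng(2)

        def random_projection_matrix():
            # Hypothetical egg/juvenile/adult matrix with randomly perturbed vital rates.
            fecundity = rng.lognormal(np.log(20.0), 0.3)   # eggs per adult female
            s_egg = rng.beta(20, 80)                       # egg-to-juvenile survival
            s_juv = rng.beta(30, 70)                       # juvenile-to-adult survival
            s_adult = rng.beta(50, 50)                     # adult survival
            return np.array([[0.0,   0.0,   fecundity],
                             [s_egg, 0.0,   0.0],
                             [0.0,   s_juv, s_adult]])

        # Average one-step log growth of the stage vector over a long simulation.
        n_steps = 20000
        v = np.ones(3) / 3.0
        log_growth = 0.0
        for _ in range(n_steps):
            v = random_projection_matrix() @ v
            size = v.sum()
            log_growth += np.log(size)
            v /= size                                      # renormalize to avoid overflow
        print("stochastic growth rate lambda_S ~= %.3f" % np.exp(log_growth / n_steps))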

  5. Mathematical detection of aortic valve opening (B point) in impedance cardiography: A comparison of three popular algorithms.

    PubMed

    Árbol, Javier Rodríguez; Perakakis, Pandelis; Garrido, Alba; Mata, José Luis; Fernández-Santaella, M Carmen; Vila, Jaime

    2017-03-01

    The preejection period (PEP) is an index of left ventricle contractility widely used in psychophysiological research. Its computation requires detecting the moment when the aortic valve opens, which coincides with the B point in the first derivative of the impedance cardiogram (ICG). Although this operation has traditionally been made via visual inspection, several algorithms based on derivative calculations have been developed to enable automatic performance of the task. However, despite their popularity, data about their empirical validation are not always available. The present study analyzes the performance of three popular algorithms in estimating the aortic valve opening, comparing them with the visual detection of the B point made by two independent scorers. Algorithm 1 is based on the first derivative of the ICG, Algorithm 2 on the second derivative, and Algorithm 3 on the third derivative. Algorithm 3 showed the highest accuracy rate (78.77%), followed by Algorithm 1 (24.57%) and Algorithm 2 (13.82%). In the automatic computation of PEP, Algorithm 2 resulted in significantly more missed cycles (48.57%) than Algorithm 1 (6.3%) and Algorithm 3 (3.5%). Algorithm 2 also estimated a significantly lower average PEP (70 ms), compared with the values obtained by Algorithm 1 (119 ms) and Algorithm 3 (113 ms). Our findings indicate that the algorithm based on the third derivative of the ICG performs significantly better. Nevertheless, a visual inspection of the signal proves indispensable, and this article provides a novel visual guide to facilitate the manual detection of the B point. © 2016 Society for Psychophysiological Research.
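    A sketch of a third-derivative-style detector in the spirit of Algorithm 3, applied to a synthetic dZ/dt beat; the 150 ms window and the peak criterion are illustrative assumptions, not the exact published algorithm.

        import numpy as np

        def detect_b_point(dzdt, fs):
            # Take the largest peak of d3Z/dt3 in a window preceding the
            # dZ/dt maximum (the C point). Window length is an assumption.
            c_idx = int(np.argmax(dzdt))
            d3 = np.gradient(np.gradient(np.gradient(dzdt)))
            start = max(0, c_idx - int(0.150 * fs))
            return start + int(np.argmax(d3[start:c_idx]))

        # Synthetic one-beat dZ/dt: main peak plus a shallow inflection before it.
        fs = 1000                                  # sampling rate [Hz]
        t = np.arange(0, 0.6, 1 / fs)
        dzdt = (np.exp(-((t - 0.30) / 0.04) ** 2)
                + 0.1 * np.exp(-((t - 0.22) / 0.01) ** 2))
        b = detect_b_point(dzdt, fs)
        print("B point at %.0f ms, C point at %.0f ms"
              % (1000 * t[b], 1000 * t[np.argmax(dzdt)]))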

  6. A Bayesian Analysis of Scale-Invariant Processes

    DTIC Science & Technology

    2012-01-01

    Earth Grid (EASE-Grid). The NED raster elevation data of one arc-second resolution (30 m) over the continental US are derived from multiple satellites. ... empirical and ME distributions, yet ensuring computational efficiency. Instead of computing empirical histograms from large amounts of data, only some ...

  7. Nonlinear bulging factor based on R-curve data

    NASA Technical Reports Server (NTRS)

    Jeong, David Y.; Tong, Pin

    1994-01-01

    In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.

  8. Regionally Adaptable Ground Motion Prediction Equation (GMPE) from Empirical Models of Fourier and Duration of Ground Motion

    NASA Astrophysics Data System (ADS)

    Bora, Sanjay; Scherbaum, Frank; Kuehn, Nicolas; Stafford, Peter; Edwards, Benjamin

    2016-04-01

    The current practice of deriving empirical ground motion prediction equations (GMPEs) involves using ground motions recorded at multiple sites. However, in applications such as site-specific (e.g., critical facility) hazard analysis, ground motions obtained from GMPEs need to be adjusted/corrected to the particular site or site condition under investigation. This study presents a complete framework for developing a response spectral GMPE within which the issue of adjusting ground motions is addressed in a manner consistent with the linear system framework. The present approach is a two-step process: the first step consists of deriving two separate empirical models, one for Fourier amplitude spectra (FAS) and the other for a random vibration theory (RVT) optimized duration (Drvto) of ground motion. In the second step the two models are combined within the RVT framework to obtain full response spectral amplitudes. Additionally, the framework involves a stochastic-model-based extrapolation of individual Fourier spectra to extend the usable frequency limit of the empirically derived FAS model. The stochastic model parameters were determined by inverting the Fourier spectral data using an approach similar to that described in Edwards and Faeh (2013). Comparison of median predicted response spectra from the present approach with those from other regional GMPEs indicates that the present approach can also be used as a stand-alone model. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012, covering Europe, the Middle East and the Mediterranean region.

  9. A method to integrate descriptive and experimental field studies at the level of data and empirical concepts

    PubMed Central

    Bijou, Sidney W.; Peterson, Robert F.; Ault, Marion H.

    1968-01-01

    It is the thesis of this paper that data from descriptive and experimental field studies can be interrelated at the level of data and empirical concepts if both sets are derived from frequency-of-occurrence measures. The methodology proposed for a descriptive field study is predicated on three assumptions: (1) The primary data of psychology are the observable interactions of a biological organism and environmental events, past and present. (2) Theoretical concepts and laws are derived from empirical concepts and laws, which in turn are derived from the raw data. (3) Descriptive field studies describe interactions between behavioral and environmental events; experimental field studies provide information on their functional relationships. The ingredients of a descriptive field investigation using frequency measures consist of: (1) specifying in objective terms the situation in which the study is conducted, (2) defining and recording behavioral and environmental events in observable terms, and (3) measuring observer reliability. Field descriptive studies following the procedures suggested here would reveal interesting new relationships in the usual ecological settings and would also provide provocative cues for experimental studies. On the other hand, field-experimental studies using frequency measures would probably yield findings that would suggest the need for describing new interactions in specific natural situations. PMID:16795175

  10. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
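    Method 1 above, Monte Carlo p-value evaluation, can be illustrated compactly. The sketch below uses a generic symmetry statistic as a stand-in for the density-based empirical likelihood ratio, and random sign flips to sample the null distribution of symmetry about zero.

        import numpy as np

        rng = np.random.default_rng(3)

        def test_statistic(x):
            # Placeholder statistic for symmetry about 0: |mean| / standard error.
            return abs(x.mean()) / (x.std(ddof=1) / np.sqrt(len(x)))

        def mc_p_value(x, n_sim=10000):
            # Under H0 (symmetry about 0), sign flips leave the distribution invariant.
            t_obs = test_statistic(x)
            t_null = np.array([test_statistic(x * rng.choice([-1, 1], size=len(x)))
                               for _ in range(n_sim)])
            return (1 + np.sum(t_null >= t_obs)) / (n_sim + 1)

        x = rng.normal(0.4, 1.0, 50)    # shifted sample: symmetry about 0 is false
        print("Monte Carlo p-value: %.4f" % mc_p_value(x))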

  11. Effect of Box-Cox transformation on power of Haseman-Elston and maximum-likelihood variance components tests to detect quantitative trait Loci.

    PubMed

    Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I

    2003-01-01

    Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (H-E) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi-square distribution with 2 degrees of freedom), the rates of empirical type I error with respect to set alpha level = 0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level = 0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi-square), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
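    The Box-Cox step itself is compact to reproduce. The sketch below (using SciPy, with a chi-square phenotype as in the simulations above) estimates the transformation parameter by maximum likelihood and shows the reduction in skewness; it is an illustration, not the paper's full power study.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        pheno = rng.chisquare(df=2, size=500)   # skewed phenotype, as in the simulations

        # Box-Cox requires positive data; lambda is estimated by maximum likelihood.
        transformed, lam = stats.boxcox(pheno)
        print("estimated lambda: %.2f" % lam)
        print("skewness before: %.2f, after: %.2f"
              % (stats.skew(pheno), stats.skew(transformed)))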

  12. Social phobia: further evidence of dimensional structure.

    PubMed

    Crome, Erica; Baillie, Andrew; Slade, Tim; Ruscio, Ayelet Meron

    2010-11-01

    Social phobia is a common mental disorder associated with significant impairment. Current research and treatment models of social phobia rely on categorical diagnostic conceptualizations lacking empirical support. This study aims to further research exploring whether social phobia is best conceptualized as a dimension or a discrete categorical disorder. This study used three distinct taxometric techniques (mean above minus below a cut, maximum eigenvalue and latent mode) to explore the latent structure of social phobia in two large epidemiological samples, using indicators derived from diagnostic criteria and associated avoidant personality traits. Overall, outcomes from multiple taxometric analyses supported dimensional structure. This is consistent with conceptualizations of social phobia as lying on a continuum with avoidant personality traits. Support for the dimensionality of social phobia has important implications for future research, assessment, treatment, and public policy.

  13. The temperatures, abundances and gravities of F dwarf stars.

    NASA Technical Reports Server (NTRS)

    Bell, R. A.

    1971-01-01

    Theoretical colors computed from laboratory line data and from model stellar atmospheres have been used to interpret the colors of about 150 F and early G dwarfs. Effective temperatures have been derived from the H-beta index and from R-I, abundances have been obtained from m1 and from b-y, and gravities have been obtained from c1 and from b-y. The effective temperatures and gravities are in good agreement with values obtained from spectral scans. Absolute magnitudes have been obtained from the effective temperatures and gravities, the latter being used with assumed stellar masses to yield radii. The present results provide theoretical justification of the empirical formulas given by Crawford and by Stroemgren for the determination of absolute magnitudes and abundances from uvby photometry.

  14. An image based method for crop yield prediction using remotely sensed and crop canopy data: the case of Paphos district, western Cyprus

    NASA Astrophysics Data System (ADS)

    Papadavid, G.; Hadjimitsis, D.

    2014-08-01

    The development of remote sensing techniques has provided the opportunity to optimize yields in the agricultural procedure and, moreover, to predict the forthcoming yield. Yield prediction plays a vital role in agricultural policy and provides useful data to policy makers. In this context, crop and soil parameters, along with the NDVI index, which are valuable sources of information, have been elaborated statistically to test (a) whether durum wheat yield can be predicted and (b) what the actual time window is for predicting the yield in the district of Paphos, where durum wheat is the basic cultivation and supports the rural economy of the area. Fifteen plots cultivated with durum wheat by the Agricultural Research Institute of Cyprus for research purposes, in the area of interest, have been under observation for three years to derive the necessary data. Statistical and remote sensing techniques were then applied to derive and map a model that can predict the yield of durum wheat in this area. Indeed, the semi-empirical model developed for this purpose, with a very high correlation coefficient R² = 0.886, has shown in practice that it can predict yields very well. Student's t-test has revealed that the predicted and real yield values have no statistically significant difference. The developed model can and will be further elaborated with more parameters and applied to other crops in the near future.
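    A minimal sketch of the kind of semi-empirical linear yield model described above, with hypothetical plot-level NDVI and yield values (the actual Paphos data are not reproduced here):

        import numpy as np

        # Hypothetical plot-level data: peak-season NDVI and measured yield [t/ha].
        ndvi = np.array([0.42, 0.51, 0.55, 0.60, 0.63, 0.67, 0.70, 0.73, 0.78, 0.81])
        yield_tha = np.array([1.9, 2.4, 2.6, 3.0, 3.1, 3.5, 3.6, 3.9, 4.3, 4.5])

        # Simple linear model yield = a + b * NDVI.
        b, a = np.polyfit(ndvi, yield_tha, 1)
        pred = a + b * ndvi
        ss_res = np.sum((yield_tha - pred) ** 2)
        ss_tot = np.sum((yield_tha - yield_tha.mean()) ** 2)
        print("yield = %.2f + %.2f * NDVI, R^2 = %.3f" % (a, b, 1 - ss_res / ss_tot))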

  15. Quantum Monte Carlo calculations of electromagnetic transitions in ^8Be with meson-exchange currents derived from chiral effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastore, S.; Wiringa, Robert B.; Pieper, Steven C.

    2014-08-01

    We report quantum Monte Carlo calculations of electromagnetic transitions in ^8Be. The realistic Argonne v18 two-nucleon and Illinois-7 three-nucleon potentials are used to generate the ground state and nine excited states, with energies that are in excellent agreement with experiment. A dozen M1 and eight E2 transition matrix elements between these states are then evaluated. The E2 matrix elements are computed only in impulse approximation, with those transitions from broad resonant states requiring special treatment. The M1 matrix elements include two-body meson-exchange currents derived from chiral effective field theory, which typically contribute 20-30% of the total expectation value. Many of the transitions are between isospin-mixed states; the calculations are performed for isospin-pure states and then combined with the empirical mixing coefficients to compare to experiment. In general, we find that transitions between states that have the same dominant spatial symmetry are in decent agreement with experiment, but those transitions between different spatial symmetries are often significantly underpredicted.

  16. Surface density of quasars in two high-latitude fields

    NASA Technical Reports Server (NTRS)

    Usher, P. D.; Green, R. F.; Huang, K. L.; Warnock, A., III

    1983-01-01

    Forty-four objects selected for ultraviolet excess have been identified spectroscopically. The objects lie in two Palomar 1.2 m Schmidt fields in the north galactic polar cap, one of 7.7 sq deg centered on Kapteyn Selected Area 29, the other of 36 sq deg centered on SA 55. The objects are characterized by Color Classes (CC) 1A, 1, 1B, 1C, 2, and 3. Quasars comprise 75 percent of the CC 1A objects and 44 percent of the objects in the SA 29 field. Twelve quasars in the SA 29 field comprise a complete sample to B = 18.5 mag, and give an uncorrected surface density of 1.6 quasars/sq deg. This value is essentially that derived by Sandage (1969). Corrections are applied to account for the lack of high-redshift quasars. An empirical correction is derived to account for lack of simultaneity in selection and photometry. A corrected lower limit to the surface density is estimated to be 1.85 quasars/sq deg to B = 18.5 mag.

  17. The use of index tests to determine the mechanical properties of crushed aggregates from Precambrian basement complex rocks, Ado-Ekiti, SW Nigeria

    NASA Astrophysics Data System (ADS)

    Afolagboye, Lekan Olatayo; Talabi, Abel Ojo; Oyelami, Charles Adebayo

    2017-05-01

    This study assessed the possibility of using index tests to determine the mechanical properties of crushed aggregates. The aggregates used in this study were derived from major Precambrian basement rocks in Ado-Ekiti, Nigeria. Regression analyses were performed to determine the empirical relations that the mechanical properties of the aggregates may have with the point load strength (IS(50)), Schmidt rebound hammer value (SHR) and unconfined compressive strength (UCS) of the rocks. For all the data, strong correlation coefficients were found between IS(50), SHR, UCS, and the mechanical properties of the aggregates. The regression analysis conducted on the different rocks separately showed that the correlation coefficients obtained between IS(50), SHR, UCS and the mechanical properties of the aggregates were stronger than those of the grouped rocks. The t-test and F-test showed that the derived models were valid. This study has shown that the mechanical properties of the aggregates can be estimated from IS(50), SHR and UCS, but the influence of rock type on the relationships should be taken into consideration.

  18. Organizational values in the provision of access to care for the uninsured

    PubMed Central

    Harrison, Krista Lyn; Taylor, Holly A.

    2017-01-01

    Background For the last 20 years, health provider organizations have made efforts to align mission, values, and everyday practices to ensure high-quality, high-value, and ethical care. However, little attention has been paid to the organizational values and practices of community-based programs that organize and facilitate access to care for uninsured populations. This study aimed to identify and describe organizational values relevant to resource allocation and policy decisions that affect the services offered to members, using the case of community access programs: county-based programs that provide access to care for the uninsured working poor. Methods Comparative and qualitative case study methodology was used, including document review, observations, and key informant interviews, at two geographically diverse programs. Results Nine values were identified as relevant to decision making: stewardship, quality care, access to care, service to others, community well-being, member independence, organizational excellence, decency, and fairness. The way these values were deployed in resource allocation decisions that affected services offered to the uninsured are illustrated in one example per site. Conclusions This study addresses the previous dearth in the literature regarding an empirical description of organizational values employed in decision making of community organizations. To assess the transferability of the values identified, we compared our empirical results to prior empirical and conceptual work in the United States and internationally and found substantial alignment. Future studies can examine whether the identified organizational values are reflective of those at other health care organizations. PMID:28781981

  19. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    ERIC Educational Resources Information Center

    Gonzales, Michael G.

    1984-01-01

    Suggests a moving pictorial tool to help teach principles in the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
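    The empirical approach the abstract describes can be reproduced in a few lines: time the sort for doubling input sizes and observe the roughly fourfold growth characteristic of an O(n²) run time. A sketch:

        import random
        import time

        def bubble_sort(a):
            # Plain bubble sort: repeatedly swap adjacent out-of-order pairs.
            n = len(a)
            for i in range(n - 1):
                for j in range(n - 1 - i):
                    if a[j] > a[j + 1]:
                        a[j], a[j + 1] = a[j + 1], a[j]

        # Doubling n should roughly quadruple the measured time, t(2n)/t(n) ~ 4.
        for n in (500, 1000, 2000, 4000):
            data = [random.random() for _ in range(n)]
            t0 = time.perf_counter()
            bubble_sort(data)
            print("n=%5d  t=%.4f s" % (n, time.perf_counter() - t0))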

  20. The Empirical Derivation of Equations for Predicting Subjective Textual Information. Final Report.

    ERIC Educational Resources Information Center

    Kauffman, Dan; And Others

    A study was made to derive an equation for predicting the "subjective" textual information contained in a text of material written in the English language. Specifically, this investigation describes, by a mathematical equation, the relationship between the "subjective" information content of written textual material and the relative number of…

  1. Correlation of second virial coefficient with solubility for proteins in salt solutions.

    PubMed

    Mehta, Chirag M; White, Edward T; Litster, James D

    2012-01-01

    In this work, osmotic second virial coefficients (B(22)) were determined and correlated with the measured solubilities for the proteins α-amylase, ovalbumin, and lysozyme. The B(22) values and solubilities were determined in similar solution conditions using two salts, sodium chloride and ammonium sulfate, in an acidic pH range. An overall decrease in the solubility of the proteins (salting out) was observed at high concentrations of ammonium sulfate and sodium chloride solutions. However, for α-amylase, salting-in behavior was also observed in low concentration sodium chloride solutions. In ammonium sulfate solutions, the B(22) are small and close to zero below 2.4 M. As the ammonium sulfate concentrations were further increased, B(22) values decreased for all systems studied. The effect of sodium chloride on B(22) varies with concentration, solution pH, and the type of protein studied. Theoretical models show a reasonable fit to the experimentally derived data of B(22) and solubility. B(22) is also directly proportional to the logarithm of the solubility values for individual proteins in salt solutions, so the log-linear empirical models developed in this work can also be used to rapidly predict solubility and B(22) values for given protein-salt systems. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
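    The log-linear empirical model mentioned above is straightforward to fit and invert for rapid solubility prediction. A sketch with hypothetical (B22, solubility) pairs, not the measured protein data:

        import numpy as np

        # Hypothetical (B22, solubility) pairs for one protein-salt system.
        b22 = np.array([-1.0e-4, -2.5e-4, -4.0e-4, -5.5e-4, -7.0e-4])  # mol mL / g^2
        solubility = np.array([42.0, 18.0, 8.5, 3.9, 1.7])             # mg/mL

        # Log-linear empirical model: log10(S) = a + b * B22.
        b, a = np.polyfit(b22, np.log10(solubility), 1)
        print("log10(S) = %.2f + %.3g * B22" % (a, b))

        # Rapid prediction of solubility at an unmeasured B22 value.
        print("predicted S at B22 = -3e-4: %.1f mg/mL" % 10 ** (a + b * -3.0e-4))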

  2. A study of the relationship between the chemical structures and the fluorescence quantum yields of coumarins, quinoxalinones and benzoxazinones for the development of sensitive fluorescent derivatization reagents.

    PubMed

    Azuma, Kentaro; Suzuki, Sachiko; Uchiyama, Seiichi; Kajiro, Toshi; Santa, Tomofumi; Imai, Kazuhiro

    2003-04-01

    To develop new fluorescent derivatization reagents, we investigated the relationship between the chemical structures and the fluorescence quantum yields (φf) of coumarins, quinoxalinones and benzoxazinones. Forty-six compounds were synthesized and their fluorescence spectra were measured in n-hexane, ethyl acetate, methanol and water. The energy levels of these compounds were calculated by a combination of the semi-empirical AM1 and INDO/S (CI = all) methods. The ΔE(Tn(n,π*), S1(π,π*)) values (the energy gap between the Tn(n,π*) and S1(π,π*) states) were well correlated with the φf values, which enables us to predict the φf values from the chemical structures. Based on this relationship, 3-phenyl-7-N-piperazinoquinoxalin-2(1H)-one (PQ-Pz) and 7-(3-(S)-aminopyrrolidin-1-yl)-3-phenylquinoxalin-2-(1H)-one (PQ-APy) were developed as fluorescent derivatization reagents for carboxylic acids. The derivatives of the carboxylic acids with PQ-Pz and PQ-APy showed large φf values even in polar solvents, suggesting that these reagents are suitable for the microanalysis of biologically important carboxylic acids by reversed-phase HPLC.

  3. Are cross-cultural comparisons of norms on death anxiety valid?

    PubMed

    Beshai, James A

    2008-01-01

    Cross-cultural comparisons of norms derived from research on Death Anxiety are valid as long as they provide existential validity. Existential validity is not empirically derived like construct validity. It is an understanding of being human unto death. It is the realization that death is imminent. It is the inner sense that provides a responder to death anxiety scales with a valid expression of his or her sense about the prospect of dying. It can be articulated in a life review by a disclosure of one's ontology. This article calls upon psychologists who develop death anxiety scales to disclose their presuppositions about death before administering a questionnaire. By disclosing his or her ontology a psychologist provides a means of disclosing his or her intentionality in responding to the items. This humanistic paradigm allows for an interactive participation between investigator and subject. Lester, Templer, and Abdel-Khalek (2006-2007) enriched psychology with significant empirical data on several correlates of death anxiety. But all scientists, especially psychologists, will always have alternative interpretations of the same empirical fact pattern. Empirical data is limited by the affirmation of the consequent limitation. A phenomenology of language and communication makes existential validity a necessary step for a broader understanding of the meaning of death anxiety.

  4. Prediction of maximum earthquake intensities for the San Francisco Bay region

    USGS Publications Warehouse

    Borcherdt, Roger D.; Gibbs, James F.

    1975-01-01

    The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 - 1.90 log (Distance) (km). For sites on other geologic units, intensity increments, derived with respect to this empirical relation, correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan Formation, 0.64 for the Great Valley Sequence, 0.82 for Santa Clara Formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
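    Taking the two empirical relations at face value (and assuming base-10 logarithms, as is standard for such attenuation relations), predicted intensities can be computed directly; the site distance and amplification below are illustrative values only.

        import math

        def intensity_1906(distance_km, ahsa=None):
            # Distance attenuation for Franciscan Formation sites, plus an
            # optional geologic increment driven by the spectral amplification.
            intensity = 2.69 - 1.90 * math.log10(distance_km)
            if ahsa is not None:
                intensity += 0.27 + 2.70 * math.log10(ahsa)
            return intensity

        # Example: a site 5 km from the fault with an amplification factor of 3.
        print("predicted intensity: %.2f" % intensity_1906(5.0, ahsa=3.0))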

  5. Atmospheric structure and helium abundance on Saturn from Cassini/UVIS and CIRS observations

    NASA Astrophysics Data System (ADS)

    Koskinen, T. T.; Guerlet, S.

    2018-06-01

    We combine measurements from stellar occultations observed by the Cassini Ultraviolet Imaging Spectrograph (UVIS) and limb scans observed by the Composite Infrared Spectrometer (CIRS) to create empirical atmospheric structure models for Saturn corresponding to the locations probed by the occultations. The results cover multiple locations at low to mid-latitudes between the spring of 2005 and the fall of 2015. We connect the temperature-pressure (T-P) profiles retrieved from the CIRS limb scans in the stratosphere to the T-P profiles in the thermosphere retrieved from the UVIS occultations. We calculate the altitudes corresponding to the pressure levels in each case based on our best fit composition model that includes H2, He, CH4 and upper limits on H. We match the altitude structure to the density profile in the thermosphere that is retrieved from the occultations. Our models depend on the abundance of helium and we derive a volume mixing ratio of 11 ± 2% for helium in the lower atmosphere based on a statistical analysis of the values derived for 32 different occultation locations. We also derive the mean temperature and methane profiles in the upper atmosphere and constrain their variability. Our results are consistent with enhanced heating at the polar auroral region and a dynamically active upper atmosphere.

  6. Variability of Marine Aerosol Fine-Mode Fraction and Estimates of Anthropogenic Aerosol Component Over Cloud-Free Oceans from the Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Yu, Hongbin; Chin, Mian; Remer, Lorraine A.; Kleidman, Richard G.; Bellouin, Nicolas; Bian, Huisheng; Diehl, Thomas

    2009-01-01

    In this study, we examine seasonal and geographical variability of marine aerosol fine-mode fraction (fm) and its impacts on deriving the anthropogenic component of aerosol optical depth (τa) and direct radiative forcing from multispectral satellite measurements. A proxy of fm, empirically derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5 data, shows large seasonal and geographical variations that are consistent with the Goddard Chemistry Aerosol Radiation Transport (GOCART) and Global Modeling Initiative (GMI) model simulations. The so-derived seasonally and spatially varying fm is then implemented into a method of estimating τa and direct radiative forcing from the MODIS measurements. It is found that the use of a constant value for fm, as in previous studies, would have overestimated τa by about 20% over the global ocean, with the overestimation up to 45% in some regions and seasons. The 7-year (2001-2007) global ocean average τa is 0.035, with yearly averages ranging from 0.031 to 0.039. Future improvement in measurements is needed to better separate anthropogenic aerosols from natural ones and to narrow down the wide range of aerosol direct radiative forcing.

  7. Decoding of the light changes in eclipsing Wolf-Rayet binaries. I. A non-classical approach to the solution of light curves

    NASA Astrophysics Data System (ADS)

    Perrier, C.; Breysacher, J.; Rauw, G.

    2009-09-01

    Aims: We present a technique to determine the orbital and physical parameters of eclipsing eccentric Wolf-Rayet + O-star binaries, where one eclipse is produced by the absorption of the O-star light by the stellar wind of the W-R star. Methods: Our method is based on the use of the empirical moments of the light curve that are integral transforms evaluated from the observed light curves. The optical depth along the line of sight and the limb darkening of the W-R star are modelled by simple mathematical functions, and we derive analytical expressions for the moments of the light curve as a function of the orbital parameters and the key parameters of the transparency and limb-darkening functions. These analytical expressions are then inverted in order to derive the values of the orbital inclination, the stellar radii, the fractional luminosities, and the parameters of the wind transparency and limb-darkening laws. Results: The method is applied to the SMC W-R eclipsing binary HD 5980, a remarkable object that underwent an LBV-like event in August 1994. The analysis refers to the pre-outburst observational data. A synthetic light curve based on the elements derived for the system allows a quality assessment of the results obtained.

  8. Thermodynamics of the Trp-cage Miniprotein Unfolding in Urea

    PubMed Central

    Wafer, Lucas N. R.; Streicher, Werner W.; Makhatadze, George I.

    2010-01-01

    The thermodynamic properties of unfolding of the Trp-cage miniprotein in the presence of various concentrations of urea have been characterized using temperature-induced unfolding monitored by far-UV circular dichroism spectroscopy. Analysis of the data using a two-state model allowed the calculation of the Gibbs energy of unfolding at 25°C as a function of urea concentration. This in turn was analyzed by the linear extrapolation model, which yielded the dependence of the Gibbs energy on urea concentration, i.e., the m-value for Trp-cage unfolding. The m-value obtained from the experimental data, as well as the experimental heat capacity change upon unfolding, were correlated with the structural parameters derived from the three-dimensional structure of Trp-cage. It is shown that the m-value can be predicted well using a transfer model, while the heat capacity changes are in very good agreement with the empirical models based on model-compound studies. These results provide direct evidence that Trp-cage, despite its small size, is an excellent model for studies of protein unfolding and provide thermodynamic data that can be used for comparison with atomistic computer simulations. PMID:20112418

  9. Pauling's electronegativity equation and a new corollary accurately predict bond dissociation enthalpies and enhance current understanding of the nature of the chemical bond.

    PubMed

    Matsunaga, Nikita; Rogers, Donald W; Zavitsas, Andreas A

    2003-04-18

    Contrary to other recent reports, Pauling's original electronegativity equation, applied as Pauling specified, describes quite accurately homolytic bond dissociation enthalpies of common covalent bonds, including highly polar ones, with an average deviation of ±1.5 kcal mol⁻¹ from literature values for 117 such bonds. Dissociation enthalpies are presented for more than 250 bonds, including 79 for which experimental values are not available. Some previous evaluations of accuracy gave misleadingly poor results by applying the equation to cases for which it was not derived and for which it should not reproduce experimental values. Properly interpreted, the results of the equation provide new and quantitative insights into many facets of chemistry such as radical stabilities, factors influencing reactivity in electrophilic aromatic substitutions, the magnitude of steric effects, conjugative stabilization in unsaturated systems, rotational barriers, molecular and electronic structure, and aspects of autoxidation. A new corollary of the original equation expands its applicability and provides a rationale for previously observed empirical correlations. The equation raises doubts about a new bonding theory. Hydrogen is unique in that its electronegativity is not constant.
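    For reference, the arithmetic-mean form of Pauling's equation can be evaluated in a few lines; the sketch below uses the conventional 23 kcal/mol electronegativity coefficient and textbook H-H and Cl-Cl bond enthalpies, which is a simplification of the paper's full prescription.

        def pauling_bde(d_aa, d_bb, chi_a, chi_b):
            # Pauling's electronegativity equation, arithmetic-mean form:
            # D(A-B) = [D(A-A) + D(B-B)]/2 + 23 * (chi_A - chi_B)^2, in kcal/mol.
            return 0.5 * (d_aa + d_bb) + 23.0 * (chi_a - chi_b) ** 2

        # Example: H-Cl from D(H-H) = 104.2, D(Cl-Cl) = 58.0 kcal/mol,
        # chi_H = 2.20, chi_Cl = 3.16 (Pauling scale); yields ~102 kcal/mol,
        # close to the experimental ~103 kcal/mol.
        print("D(H-Cl) ~= %.1f kcal/mol" % pauling_bde(104.2, 58.0, 2.20, 3.16))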

  10. Deriving Two-Dimensional Ocean Wave Spectra and Surface Height Maps from the Shuttle Imaging Radar (SIR-B)

    NASA Technical Reports Server (NTRS)

    Tilley, D. G.

    1986-01-01

    Directional ocean wave spectra were derived from Shuttle Imaging Radar (SIR-B) imagery in regions where nearly simultaneous aircraft-based measurements of the wave spectra were also available as part of the NASA Shuttle Mission 41G experiments. The SIR-B response to a coherently speckled scene is used to estimate the stationary system transfer function in the 15 even terms of an eighth-order two-dimensional polynomial. Surface elevation contours are assigned to SIR-B ocean scenes Fourier filtered using an empirical model of the modulation transfer function calibrated with independent measurements of wave height. The empirical measurements of the wave height distribution are illustrated for a variety of sea states.

  11. Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions

    NASA Astrophysics Data System (ADS)

    Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.

    2002-02-01

    Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the Ca II triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated Ca II strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted Ca II strengths are compared with those of previous works in the field.

  12. High latitude meteoric δ18O compositions: Paleosol siderite in the Middle Cretaceous Nanushuk Formation, North Slope, Alaska

    USGS Publications Warehouse

    Ufnar, David F.; Ludvigson, Greg A.; Gonzalez, Luis A.; Brenner, Richard L.; Witzke, Brian J.

    2004-01-01

    Siderite-bearing pedogenic horizons of the Nanushuk Formation of the North Slope, Alaska, provide a critical high-paleolatitude oxygen isotopic proxy record of paleoprecipitation, supplying important empirical data needed for paleoclimatic reconstructions and models of "greenhouse-world" precipitation rates. Siderite δ18O values were determined from four paleosol horizons in the National Petroleum Reserve Alaska (NPR-A) Grandstand #1 core, and the values range between -17.6‰ and -14.3‰ Peedee belemnite (PDB) with standard deviations generally less than 0.6‰ within individual horizons. The δ13C values are much more variable, ranging from -4.6‰ to +10.8‰ PDB. A covariant δ18O versus δ13C trend in one horizon probably resulted from mixing between modified marine and meteoric phreatic fluids during siderite precipitation. Groundwater values calculated from siderite oxygen isotopic values and paleobotanical temperature estimates range from -23.0‰ to -19.5‰ standard mean ocean water (SMOW). Minor element analyses show that the siderites are impure, having enrichments in Ca, Mg, Mn, and Sr. Minor element substitutions and Mg/Fe and Mg/(Ca + Mg) ratios also suggest the influence of marine fluids upon siderite precipitation. The pedogenic horizons are characterized by gleyed colors, rare root traces, abundant siderite, abundant organic matter, rare clay and silty clay coatings and infillings, some preservation of primary sedimentary stratification, and a lack of ferruginous oxides and mottles. The pedogenic features suggest that these were poorly drained, reducing, hydromorphic soils that developed in coal-bearing delta plain facies and are similar to modern Inceptisols. Model-derived estimates of precipitation rates for the Late Albian of the North Slope, Alaska (485-626 mm/yr), are consistent with precipitation rates necessary to maintain modern peat-forming environments. This information reinforces the mutual consistency between empirical paleotemperature estimates and isotope mass balance models of the hydrologic cycle and can be used in future global circulation modeling (GCM) experiments of "greenhouse-world" climates to constrain high-latitude precipitation rates in simulations of ancient worlds with decreased equator-to-pole temperature gradients. © 2004 Geological Society of America.

  13. Trace Elements and Oxygen Isotope Zoning of the Sidewinder Skarn

    NASA Astrophysics Data System (ADS)

    Draper, C.; Gevedon, M. L.; Barnes, J.; Lackey, J. S.; Jiang, H.; Lee, C. T.

    2016-12-01

    Skarns of the Verde Antique Quarry and White Horse Mountain areas of the Sidewinder Range give insight into the paleohydrothermal systems operating in California's Jurassic arc in the southwestern Mojave Desert. Garnet from these skarns is iron rich: Xand = 55-100. Laser fluorination measurements show oxygen isotope (δ18O) compositions of garnet crystals and crystal domains have large ranges: -3.1‰ to +4.4‰ and -8.9‰ to +3.4‰, respectively. In general, the garnet cores have more negative δ18O values than rims, although oscillations are present. Negative values have been interpreted as influx of meteoric fluid and positive values as increased magmatic input. Here we report major and trace element concentrations for 17 core-to-rim Sidewinder garnet transects. REE concentrations are low in all crystals, with total REE concentrations ranging from 0.710 ppm to 33.7 ppm, values that are lower than Cretaceous skarn garnets in the Sierra Nevada in the White Chief and Empire Mt skarns. Such low concentrations are likely due to the higher fraction of meteoric fluids during formation of the Sidewinder skarns. REE concentrations decrease from core to rim (core average = 12.2 ppm, rim average = 7.21 ppm). This is slightly more pronounced in the LREEs than in the HREEs (LaN/YbN core average = 10.9, rim average = 9.73, normalized to chondrite). Xand tends to decrease from core to rim in the Verde Antique skarn, whereas Xand of the White Horse skarn does not correlate with distance from core. A large positive Eu anomaly (Eu/Eu* = 3-30) in garnet from both skarns suggests oxidizing fluid conditions. Oxygen isotope data from garnet in these same skarns show periods of time with an increased proportion of magmatically derived fluids in the total fluid budget. However, there is no corresponding widespread increase in total REE concentrations. Other studies of skarns from the western Sierra Nevadan arc (White Chief and Empire Mountain) observe complete decoupling of δ18O values and trace element compositions. Future modeling should consider the modal abundance of fluid-soluble minerals in cooling and altering plutons to probe the REE budget.

  14. Assessing the value of transgenic crops.

    PubMed

    Lacey, Hugh

    2002-10-01

    In the current controversy about the value of transgenic crops, matters open to empirical inquiry are centrally at issue. One such matter is a key premise in a common argument (that I summarize) that transgenic crops should be considered to have universal value. The premise is that there are no alternative forms of agriculture available to enable the production of sufficient food to feed the world. The proponents of agroecology challenge it, claiming that agroecology provides an alternative, and they deny that the premise is well founded on empirical evidence. It is, therefore, a matter of both social and scientific importance that this premise and the criticisms of it be investigated rigorously and empirically, so that the benefits and disadvantages of transgenic-intensive agriculture and agroecology can be compared in a reliable way. Conducting adequate investigation of the potential contribution of agroecology requires that the cultural conditions of its practice (and, thus, of the practices and movements of small-scale farmers in the "third world") be strengthened--and this puts the interests of investigation into tension with the socio-economic interests driving the development of transgenics. General issues about the relationship between ethical argument and empirical (scientific) investigation are raised throughout the article.

  15. Nonlinear experimental dye-doped nematic liquid crystal optical transmission spectra estimated by neural network empirical physical formulas

    NASA Astrophysics Data System (ADS)

    Yildiz, Nihat; San, Sait Eren; Köysal, Oğuz

    2010-09-01

    In this paper, two complementary objectives related to optical transmission spectra of nematic liquid crystals (NLCs) were achieved. First, at room temperature, for both pure and dye (DR9) doped E7 NLCs, the 10-250 W halogen lamp transmission spectra (wavelength 400-1200 nm) were measured at various bias voltages. Second, because the measured spectra were inherently highly nonlinear, it was difficult to construct explicit empirical physical formulas (EPFs) to employ as transmittance functions. To avoid this difficulty, layered feedforward neural networks (LFNNs) were used to construct explicit EPFs for these theoretically unknown nonlinear NLC transmittance functions. As we theoretically showed in a previous work, a LFNN, as an excellent nonlinear function approximator, is highly relevant to EPF construction. The LFNN-EPFs efficiently and consistently estimated both the measured and yet-to-be-measured nonlinear transmittance response values. The experimentally obtained doping ratio dependencies and applied bias voltage responses of transmittance were also confirmed by the LFNN-EPFs. This clearly indicates that physical laws embedded in the physical data can be faithfully extracted by suitable LFNNs. The extraordinary success achieved with LFNNs here suggests two potential applications. First, although not attempted here, these LFNN-EPFs can be used, through such mathematical operations as differentiation, integration, minimization, etc., to obtain further transmittance-related functions of NLCs. Second, for a given NLC response function whose theoretical nonlinear functional form is yet unknown, a suitable experimental-data-based LFNN-EPF can be constructed to predict the yet-to-be-measured values.
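
    The LFNN-EPF idea - fit a layered feedforward network to measured spectra so that the trained network itself serves as an explicit, smooth transmittance function - can be sketched briefly. The library (scikit-learn), network size, and the synthetic data below are assumptions for illustration; the paper's own architecture and training details are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for measured transmittance spectra T(wavelength, bias voltage);
# real inputs would be the 400-1200 nm spectra recorded at each bias voltage.
rng = np.random.default_rng(0)
wl = rng.uniform(400, 1200, 2000)            # wavelength, nm
v = rng.uniform(0, 10, 2000)                 # bias voltage, V (assumed range)
t = 0.5 + 0.3 * np.sin(wl / 150) * np.exp(-v / 8) + 0.01 * rng.normal(size=2000)

X = np.column_stack([wl, v])
# A layered feedforward network used as a nonlinear function approximator (the "EPF").
epf = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
epf.fit(X, t)

# The fitted network now acts as an explicit transmittance function: it can
# interpolate yet-to-be-measured (wavelength, voltage) combinations.
print(epf.predict([[800.0, 5.0]]))
```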

  16. Extreme ultraviolet index due to broken clouds at a midlatitude site, Granada (southeastern Spain)

    NASA Astrophysics Data System (ADS)

    Antón, M.; Piedehierro, A. A.; Alados-Arboledas, L.; Wolfran, E.; Olmo, F. J.

    2012-11-01

    Cloud cover usually attenuates the ultraviolet (UV) solar radiation but, under certain sky conditions, clouds may produce an enhancement effect, increasing the UV levels at the surface. The main objective of this paper is to analyze an extreme UV enhancement episode recorded on 16 June 2009 at Granada (southeastern Spain). This phenomenon was characterized by a quick and intense increase in surface UV radiation under broken cloud fields (5-7 oktas) in which the Sun was surrounded by cumulus clouds (confirmed with sky images). Thus, the UV index (UVI) showed an enhancement by a factor of 4 in the course of only 30 min around midday, varying from 2.6 to 10.4 (higher than the corresponding clear-sky UVI value). Additionally, the UVI presented values higher than 10 (extreme erythemal risk) for about 20 min running, with a maximum value around 11.5. The use of an empirical model and the total ozone column (TOC) derived from the Global Ozone Monitoring Experiment (GOME) for the period 1995-2011 showed that the value of UVI ~ 11.5 is substantially larger than the highest index that natural TOC variations over Granada could produce. Finally, the UV erythemal dose accumulated during the 20-min period with extreme UVI values under broken cloud fields was 350 J/m2, which surpasses the energy required to produce sunburn in most human skin types.
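
    The reported dose is easy to sanity-check from the standard UVI definition (the UVI is the erythemally weighted irradiance divided by 25 mW/m2):

```latex
% Standard definition: UVI = E_ery / (0.025 W m^-2), so E_ery = UVI x 0.025 W m^-2.
\[
E_{\mathrm{ery}} \approx 11.5 \times 0.025\ \mathrm{W\,m^{-2}} \approx 0.29\ \mathrm{W\,m^{-2}},
\qquad
H \approx 0.29\ \mathrm{W\,m^{-2}} \times 1200\ \mathrm{s} \approx 345\ \mathrm{J\,m^{-2}},
\]
% consistent with the ~350 J/m^2 dose reported for the 20-min extreme-UVI period.
```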

  17. Chronic Fatigue Syndrome and Myalgic Encephalomyelitis: Toward An Empirical Case Definition

    PubMed Central

    Jason, Leonard A.; Kot, Bobby; Sunnquist, Madison; Brown, Abigail; Evans, Meredyth; Jantke, Rachel; Williams, Yolonda; Furst, Jacob; Vernon, Suzanne D.

    2015-01-01

    Current case definitions of Myalgic Encephalomyelitis (ME) and chronic fatigue syndrome (CFS) have been based on consensus methods, but empirical methods could be used to identify core symptoms and thereby improve the reliability of the definitions. In the present study, several methods (i.e., continuous symptom scores, and theoretically and empirically derived symptom cutoff scores) were used to identify the core symptoms best differentiating patients from controls. In addition, data mining with decision trees was conducted. Our study found a small number of core symptoms that have good sensitivity and specificity, and these included fatigue, post-exertional malaise, a neurocognitive symptom, and unrefreshing sleep. Outcomes from these analyses suggest that using empirically selected symptoms can help guide the creation of a more reliable case definition. PMID:26029488

  18. Development of traffic data input resources for the mechanistic empirical pavement design process.

    DOT National Transportation Integrated Search

    2011-12-12

    The Mechanistic-Empirical Pavement Design Guide (MEPDG) for New and Rehabilitated Pavement Structures uses : nationally based data traffic inputs and recommends that state DOTs develop their own site-specific and regional : values. To support the MEP...

  19. Development of local calibration factors and design criteria values for mechanistic-empirical pavement design.

    DOT National Transportation Integrated Search

    2015-08-01

    A mechanistic-empirical (ME) pavement design procedure allows for analyzing and selecting pavement structures based : on predicted distress progression resulting from stresses and strains within the pavement over its design life. The Virginia : Depar...

  20. An empirical spectroscopic database for acetylene in the regions of 5850-6341 cm-1 and 7000-9415 cm-1

    NASA Astrophysics Data System (ADS)

    Lyulin, O. M.; Campargue, A.

    2017-12-01

    Six studies have recently been devoted to a systematic analysis of the high-resolution near infrared absorption spectrum of acetylene recorded by Cavity Ring Down spectroscopy (CRDS) in Grenoble and by Fourier-transform spectroscopy (FTS) in Brussels and Hefei. On the basis of these works, in the present contribution, we construct an empirical database for acetylene in the 5850-9415 cm-1 region excluding the 6341-7000 cm-1 interval corresponding to the very strong ν1+ν3 manifold. Our database gathers and extends information included in our CRDS and FTS studies. In particular, the intensities of about 1700 lines measured by CRDS in the 7244-7920 cm-1 region are reported for the first time, together with those of several bands of 12C13CH2 present in natural isotopic abundance in the acetylene sample. The Herman-Wallis coefficients of most of the bands are derived from a fit of the measured intensity values. A recommended line list is provided, with positions calculated using empirical spectroscopic parameters of the lower and upper vibrational energy levels and intensities calculated using the derived Herman-Wallis coefficients. This approach makes it possible to complete the experimental list by adding missing lines and to improve poorly determined positions and intensities. As a result, the constructed line list includes a total of 11113 transitions belonging to 150 bands of 12C2H2 and 29 bands of 12C13CH2. For comparison, the HITRAN database in the same region includes 869 transitions of 14 bands, all belonging to 12C2H2. Our weakest lines have an intensity on the order of 10-29 cm/molecule, about three orders of magnitude smaller than the HITRAN intensity cutoff. Line profile parameters are added to the line list, which is provided in HITRAN format. The comparison of the acetylene database to the HITRAN2012 line list or to results obtained using the global effective operator approach is discussed in terms of completeness and accuracy.
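
    As an illustration of the intensity-fitting step, the sketch below fits Herman-Wallis coefficients to synthetic line intensities of a single band. The quadratic (1 + A1*m + A2*m^2)^2 form is one common convention for linear molecules and is an assumption here; the exact parameterization used in these studies may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Herman-Wallis correction to rotational line intensities within a band.
# Common convention for linear molecules: F(m) = (1 + A1*m + A2*m**2)**2,
# with m = -J for P-branch lines and m = J + 1 for R-branch lines (assumed form).
def intensity_model(m, s_band, a1, a2):
    # s_band absorbs the vibrational band strength (units of 1e-24 cm/molecule here)
    return s_band * (1.0 + a1 * m + a2 * m**2) ** 2

# Stand-in for measured line intensities of one band, after dividing out the
# Hoenl-London and Boltzmann rotational factors.
rng = np.random.default_rng(1)
m = np.concatenate([np.arange(-25, 0), np.arange(1, 26)]).astype(float)
meas = intensity_model(m, 1.0, 0.004, -1e-5) * (1 + 0.03 * rng.normal(size=m.size))

popt, _ = curve_fit(intensity_model, m, meas, p0=(1.0, 0.0, 0.0))
print(f"A1 = {popt[1]:.2e}, A2 = {popt[2]:.2e}")  # fitted Herman-Wallis coefficients
```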

  1. Soil radium, soil gas radon and indoor radon empirical relationships to assist in post-closure impact assessment related to near-surface radioactive waste disposal.

    PubMed

    Appleton, J D; Cave, M R; Miles, J C H; Sumerling, T J

    2011-03-01

    Least squares (LS), Theil's (TS) and weighted total least squares (WTLS) regression analysis methods are used to develop empirical relationships between radium in the ground, radon in soil and radon in dwellings to assist in the post-closure assessment of indoor radon related to near-surface radioactive waste disposal at the Low Level Waste Repository in England. The data sets used are (i) estimated ²²⁶Ra in the < 2 mm fraction of topsoils (eRa226) derived from equivalent uranium (eU) from airborne gamma spectrometry data, (ii) eRa226 derived from measurements of uranium in soil geochemical samples, (iii) soil gas radon and (iv) indoor radon data. For models comparing indoor radon and (i) eRa226 derived from airborne eU data and (ii) soil gas radon data, some of the geological groupings have significant slopes. For these groupings there is reasonable agreement in slope and intercept between the three regression analysis methods (LS, TS and WTLS). Relationships between radon in dwellings and radium in the ground or radon in soil differ depending on the characteristics of the underlying geological units, with more permeable units having steeper slopes and higher indoor radon concentrations for a given radium or soil gas radon concentration in the ground. The regression models comparing indoor radon with soil gas radon have intercepts close to 5 Bq m⁻³, whilst the intercepts for those comparing indoor radon with eRa226 from airborne eU vary from about 20 Bq m⁻³ for a moderately permeable geological unit to about 40 Bq m⁻³ for highly permeable limestone, implying unrealistically high contributions to indoor radon from sources other than the ground. An intercept value of 5 Bq m⁻³ is assumed as an appropriate mean value for the UK for sources of indoor radon other than radon from the ground, based on examination of UK data. Comparison with published data used to derive an average indoor radon:soil ²²⁶Ra ratio shows that whereas the published data are generally clustered with no obvious correlation, the data from this study have substantially different relationships depending largely on the permeability of the underlying geology. Models for the relatively impermeable geological units plot parallel to the average indoor radon:soil ²²⁶Ra model but with lower indoor radon:soil ²²⁶Ra ratios, whilst the models for the permeable geological units plot parallel to the average indoor radon:soil ²²⁶Ra model but with higher than average indoor radon:soil ²²⁶Ra ratios. Copyright © 2010 Natural Environment Research Council. Published by Elsevier Ltd. All rights reserved.
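
    Two of the three regression methods named above are available directly in NumPy/SciPy; a minimal sketch on synthetic data follows (WTLS requires an errors-in-variables implementation and is omitted). All numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy.stats import theilslopes

# Synthetic stand-in for (soil-gas radon, indoor radon) pairs of one geological
# grouping; the intercept near 5 Bq/m^3 mimics the non-ground radon contribution.
rng = np.random.default_rng(42)
soil_radon = rng.uniform(5, 100, 80)                     # kBq/m^3 (assumed units)
indoor = 5.0 + 0.8 * soil_radon + rng.normal(0, 8, 80)   # Bq/m^3

# Ordinary least squares (LS).
ls_slope, ls_intercept = np.polyfit(soil_radon, indoor, 1)

# Theil's method (TS): median of pairwise slopes, robust to outliers.
ts_slope, ts_intercept, lo, hi = theilslopes(indoor, soil_radon)

print(f"LS: indoor = {ls_intercept:.1f} + {ls_slope:.2f} * soil")
print(f"TS: indoor = {ts_intercept:.1f} + {ts_slope:.2f} * soil "
      f"(95% CI on slope: {lo:.2f}-{hi:.2f})")
```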

  2. Empirical STORM-E Model. [I. Theoretical and Observational Basis

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 μm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 μm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 μm VER is fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 μm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.

  3. Healthy-years equivalent: wounded but not yet dead.

    PubMed

    Hauber, A Brett

    2009-06-01

    The quality-adjusted life-year (QALY) has become the dominant measure of health value in health technology assessment in recent decades, despite some well-known and fundamental flaws in the preference-elicitation methods used to construct health-state utility weights and the strong assumptions required to construct QALYs as a measure of health value from these utility weights. The healthy-years equivalent (HYE) was proposed as an alternative measure of health value that was purported to overcome many of the limitations of the QALY. The primary argument against the HYE is that it is difficult to estimate and, therefore, impractical. After much debate in the literature, the QALY appears to have won the battle; however, the HYE is not yet dead. Empirical research and recent advances in methods continue to offer evidence of the feasibility of using the HYE as a measure of health value, and also address some of the criticisms surrounding the preference-elicitation methods used to estimate the HYE. This article provides a brief review of empirical applications of the HYE and identifies recent advances in empirical estimation that may breathe new life into a valiant, but wounded, measure.

  4. Nuclear binding energy using semi empirical mass formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ankita, E-mail: ankitagoyal@gmail.com; Suthar, B.

    2016-05-06

    In the present communication, a semi-empirical mass formula based on the liquid drop model is presented. Nuclear binding energies are calculated using the semi-empirical mass formula with the various sets of constants given by different researchers. We also compare these calculated values with experimental data, and a comparative study of the error plots is used to identify the set of constants that best reduces the error.
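
    For concreteness, a minimal implementation of the liquid-drop binding energy follows. The coefficient set is one commonly quoted choice (in MeV) and stands in for the various sets the abstract compares; treat the exact values as assumptions.

```python
import numpy as np

# Semi-empirical mass formula (liquid drop model). One commonly quoted
# coefficient set, in MeV; other authors' sets differ slightly.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z: int, A: int) -> float:
    """Total nuclear binding energy in MeV for Z protons and A nucleons."""
    N = A - Z
    pairing = 0.0
    if Z % 2 == 0 and N % 2 == 0:
        pairing = +A_P / np.sqrt(A)    # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -A_P / np.sqrt(A)    # odd-odd nuclei are less bound
    return (A_V * A
            - A_S * A ** (2 / 3)                 # surface term
            - A_C * Z * (Z - 1) / A ** (1 / 3)   # Coulomb repulsion
            - A_A * (A - 2 * Z) ** 2 / A         # asymmetry term
            + pairing)

# Example: iron-56; the experimental value is about 492.3 MeV (~8.79 MeV/nucleon),
# so the model error here is below 1%.
b = binding_energy(26, 56)
print(f"B(56Fe) = {b:.1f} MeV, B/A = {b / 56:.2f} MeV")
```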

  5. Science, policy, and the transparency of values.

    PubMed

    Elliott, Kevin C; Resnik, David B

    2014-07-01

    Opposing groups of scientists have recently engaged in a heated dispute over a preliminary European Commission (EC) report on its regulatory policy for endocrine-disrupting chemicals. In addition to the scientific issues at stake, a central question has been how scientists can maintain their objectivity when informing policy makers. Drawing from current ethical, conceptual, and empirical studies of objectivity and conflicts of interest in scientific research, we propose guiding principles for communicating scientific findings in a manner that promotes objectivity, public trust, and policy relevance. Both conceptual and empirical studies of scientific reasoning have shown that it is unrealistic to prevent policy-relevant scientific research from being influenced by value judgments. Conceptually, the current dispute over the EC report illustrates how scientists are forced to make value judgments about appropriate standards of evidence when informing public policy. Empirical studies provide further evidence that scientists are unavoidably influenced by a variety of potentially subconscious financial, social, political, and personal interests and values. When scientific evidence is inconclusive and major regulatory decisions are at stake, it is unrealistic to think that values can be excluded from scientific reasoning. Thus, efforts to suppress or hide interests or values may actually damage scientific objectivity and public trust, whereas a willingness to bring implicit interests and values into the open may be the best path to promoting good science and policy.

  6. The derivation of scenic utility functions and surfaces and their role in landscape management

    Treesearch

    John W. Hamilton; Gregory J. Buhyoff; J. Douglas Wellman

    1979-01-01

    This paper outlines a methodological approach for determining relevant physical landscape features which people use in formulating judgments about scenic utility. This information, coupled with either empirically derived or rationally stipulated regression techniques, may be used to produce scenic utility functions and surfaces. These functions can provide a means for...

  7. TEACHING PHYSICS: Biking around a hollow sphere

    NASA Astrophysics Data System (ADS)

    Mak, Se-yuen; Yip, Din-yan

    1999-11-01

    The conditions required for a rider on a motorbike travelling in a horizontal circle on or above the equator of a hollow sphere are derived using concepts of equilibrium and the condition for uniform circular motion. The result is compared with an empirical analysis based on video footage. Some special cases of interest derived from the general solution are elaborated.
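
    For orientation, the equator case reduces to the classic "wall of death" condition: the normal force is horizontal and supplies the centripetal force, while friction must support the weight. A minimal version of that derivation (not quoted from the article) is:

```latex
% Riding on the inside equator of a hollow sphere of radius r, friction
% coefficient mu: N is horizontal and central, friction f is vertical.
\[
N = \frac{m v^{2}}{r}, \qquad f = m g, \qquad f \le \mu N
\;\Longrightarrow\;
v \ge \sqrt{\frac{g r}{\mu}} .
\]
% Example (assumed numbers): r = 5 m, mu = 0.6 gives
% v >= sqrt(9.8 * 5 / 0.6) ~ 9.0 m/s.
```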

  8. Towards a universal model for carbon dioxide uptake by plants

    DOE PAGES

    Wang, Han; Prentice, I. Colin; Keenan, Trevor F.; ...

    2017-09-04

    Gross primary production (GPP) - the uptake of carbon dioxide (CO2) by leaves, and its conversion to sugars by photosynthesis - is the basis for life on land. Earth System Models (ESMs) incorporating the interactions of land ecosystems and climate are used to predict the future of the terrestrial sink for anthropogenic CO2. ESMs require accurate representation of GPP. However, current ESMs disagree on how GPP responds to environmental variations, suggesting a need for a more robust theoretical framework for modelling. Here we focus on a key quantity for GPP, the ratio of leaf internal to external CO2 (χ). χ is tightly regulated and depends on environmental conditions, but is represented empirically and incompletely in today's models. We show that a simple evolutionary optimality hypothesis predicts specific quantitative dependencies of χ on temperature, vapour pressure deficit and elevation; and that these same dependencies emerge from an independent analysis of empirical χ values, derived from a worldwide dataset of >3,500 leaf stable carbon isotope measurements. A single global equation embodying these relationships then unifies the empirical light-use efficiency model with the standard model of C3 photosynthesis, and successfully predicts GPP measured at eddy-covariance flux sites. This success is notable given the equation's simplicity and broad applicability across biomes and plant functional types. Finally, it provides a theoretical underpinning for the analysis of plant functional coordination across species and emergent properties of ecosystems, and a potential basis for the reformulation of the controls of GPP in next-generation ESMs.
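
    For readers who want the shape of the optimality prediction, a simplified form of the least-cost result is sketched below; the published model also carries a CO2 compensation-point term, so treat this reduced expression as an orientation aid rather than the paper's exact equation.

```latex
% Least-cost optimal ratio of leaf-internal to ambient CO2 (simplified; the
% compensation-point contribution is dropped for brevity):
\[
\chi \approx \frac{\xi}{\xi + \sqrt{D}},
\qquad
\xi = \sqrt{\frac{\beta\,K}{1.6\,\eta^{*}}},
\]
% where D is the vapour pressure deficit, K the effective Michaelis-Menten
% coefficient of Rubisco (temperature dependent), eta* the viscosity of water
% relative to 25 C (temperature and elevation dependent), and beta a constant
% cost ratio. The predicted dependencies of chi on temperature, D and elevation
% enter through K, sqrt(D) and eta*, matching the empirical analysis above.
```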

  9. Towards a universal model for carbon dioxide uptake by plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Han; Prentice, I. Colin; Keenan, Trevor F.

    Gross primary production (GPP) - the uptake of carbon dioxide (CO2) by leaves, and its conversion to sugars by photosynthesis - is the basis for life on land. Earth System Models (ESMs) incorporating the interactions of land ecosystems and climate are used to predict the future of the terrestrial sink for anthropogenic CO2. ESMs require accurate representation of GPP. However, current ESMs disagree on how GPP responds to environmental variations, suggesting a need for a more robust theoretical framework for modelling. Here we focus on a key quantity for GPP, the ratio of leaf internal to external CO2 (χ). χ is tightly regulated and depends on environmental conditions, but is represented empirically and incompletely in today's models. We show that a simple evolutionary optimality hypothesis predicts specific quantitative dependencies of χ on temperature, vapour pressure deficit and elevation; and that these same dependencies emerge from an independent analysis of empirical χ values, derived from a worldwide dataset of >3,500 leaf stable carbon isotope measurements. A single global equation embodying these relationships then unifies the empirical light-use efficiency model with the standard model of C3 photosynthesis, and successfully predicts GPP measured at eddy-covariance flux sites. This success is notable given the equation's simplicity and broad applicability across biomes and plant functional types. Finally, it provides a theoretical underpinning for the analysis of plant functional coordination across species and emergent properties of ecosystems, and a potential basis for the reformulation of the controls of GPP in next-generation ESMs.

  10. Surface Snow Density of East Antarctica Derived from In-Situ Observations

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Zhang, S.; Du, W.; Chen, J.; Xie, H.; Tong, X.; Li, R.

    2018-04-01

    Models based on physical principles or semi-empirical parameterizations have been used to compute firn density, which is essential for the study of surface processes on the Antarctic ice sheet. However, parameterizations of surface snow density often struggle to capture detailed local characteristics. In this study we generate a surface density map for East Antarctica from all available field observations. Because the observations are non-uniformly distributed around East Antarctica, were obtained by different methods, and are temporally inhomogeneous, they are first used to establish an initial density map with a grid size of 30 × 30 km2, averaging the observations over five-year intervals. We then construct an observation matrix whose columns correspond to the map grid cells and whose rows correspond to the time intervals; if a site has no density value for a period, the corresponding entry is set to 0. To extract the main spatial and temporal information of the surface snow density, we adopt the Empirical Orthogonal Function (EOF) method to decompose the observation matrix, retaining only the first few lower-order modes, since these already contain most of the information in the matrix. The many missing entries in the matrix are handled with a matrix completion algorithm, from which we derive the time series of surface snow density at each observation site. Finally, we obtain the surface snow density by multiplying the modes, interpolated by kriging, with the corresponding mode amplitudes. A comparative analysis between our surface snow density map and model results is also presented; details are given in the paper.
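
    The EOF-plus-completion step can be sketched compactly: decompose a (time x site) matrix with gaps, keep a few leading modes, and iterate. The simple truncated-SVD imputation below stands in for the completion algorithm, which the abstract does not name; all data are synthetic.

```python
import numpy as np

# Sketch: iteratively fill gaps in a (time x site) density matrix with a
# low-rank (few-EOF-mode) reconstruction, keeping observed entries fixed.
rng = np.random.default_rng(0)
n_epochs, n_sites, n_modes = 12, 200, 3

# Synthetic density field: a few spatial modes with temporal amplitudes (kg/m^3).
truth = (rng.normal(size=(n_epochs, n_modes)) @ rng.normal(size=(n_modes, n_sites))) * 5 + 350
mask = rng.random(truth.shape) < 0.6            # ~60% of entries observed
obs = np.where(mask, truth, np.nan)

filled = np.where(mask, obs, np.nanmean(obs))   # initialize gaps with the mean
for _ in range(50):
    u, s, vt = np.linalg.svd(filled, full_matrices=False)
    low_rank = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]  # truncated EOF reconstruction
    filled = np.where(mask, obs, low_rank)      # keep observations, update gaps

rmse = np.sqrt(np.mean((filled[~mask] - truth[~mask]) ** 2))
print(f"imputation RMSE on held-out entries: {rmse:.2f} kg/m^3 (synthetic)")
```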

  11. Empirical Soil Moisture Estimation with Spaceborne L-band Polarimetric Radars: Aquarius, SMAP, and PALSAR-2

    NASA Astrophysics Data System (ADS)

    Burgin, M. S.; van Zyl, J. J.

    2017-12-01

    Traditionally, substantial ancillary data is needed to parametrize complex electromagnetic models to estimate soil moisture from polarimetric radar data. The Soil Moisture Active Passive (SMAP) baseline radar soil moisture retrieval algorithm uses a data cube approach, where a cube of radar backscatter values is calculated using sophisticated models. In this work, we utilize the empirical approach by Kim and van Zyl (2009), which is an optional SMAP radar soil moisture retrieval algorithm; it expresses radar backscatter of a vegetated scene as a linear function of soil moisture, hence eliminating the need for ancillary data. We use 2.5 years of L-band Aquarius radar and radiometer-derived soil moisture data to determine the two coefficients of the linear model function on a global scale. These coefficients are used to estimate soil moisture with 2.5 months of L-band SMAP and L-band PALSAR-2 data. The estimated soil moisture is compared with the SMAP Level 2 radiometer-only soil moisture product; the global unbiased RMSE of the SMAP-derived soil moisture is 0.06-0.07 cm3/cm3. In this study, we leverage the three diverse L-band radar data sets to investigate the impact of pixel size and pixel heterogeneity on soil moisture estimation performance. Pixel sizes range from 100 km for Aquarius, through 3, 9, and 36 km for SMAP, to 10 m for PALSAR-2. Furthermore, we observe seasonal variation in the radar sensitivity to soil moisture, which allows the identification and quantification of seasonally changing vegetation. Utilizing this information, we further improve the estimation performance. The research described in this paper is supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Copyright 2017. All rights reserved.
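
    The essence of the linear approach is easy to show: fit two coefficients per grid cell from a radar time series paired with reference soil moisture, then invert the line for new acquisitions. The sketch below uses synthetic numbers and linear-power units as assumptions; it is not the SMAP algorithm code.

```python
import numpy as np

# Linear model of the Kim & van Zyl style: sigma0 = a + b * mv, fit per cell
# from co-located radar backscatter and reference (e.g. radiometer) soil moisture.
rng = np.random.default_rng(7)
mv_ref = rng.uniform(0.05, 0.40, 120)                       # mv, cm^3/cm^3
sigma0 = 0.02 + 0.25 * mv_ref + rng.normal(0, 0.005, 120)   # backscatter, linear units

b, a = np.polyfit(mv_ref, sigma0, 1)     # the two cell coefficients (slope, intercept)

def estimate_mv(sig: float) -> float:
    """Invert the linear model to estimate soil moisture from new backscatter."""
    return (sig - a) / b

est = estimate_mv(0.09)
print(f"a = {a:.3f}, b = {b:.3f}, mv(sigma0=0.09) = {est:.3f} cm^3/cm^3")
```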

  12. Nucleon form factors in dispersively improved chiral effective field theory. II. Electromagnetic form factors

    NASA Astrophysics Data System (ADS)

    Alarcón, J. M.; Weiss, C.

    2018-05-01

    We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining chiral effective field theory (χEFT) and dispersion analysis. The spectral functions on the two-pion cut at t > 4M_π² are constructed using the elastic unitarity relation and an N/D representation. χEFT is used to calculate the real functions J_±1(t) = f_±1(t)/F_π(t) (ratios of the complex ππ → N̄N partial-wave amplitudes and the timelike pion FF), which are free of ππ rescattering. Rescattering effects are included through the empirical timelike pion FF |F_π(t)|². The method allows us to compute the isovector EM spectral functions up to t ≈ 1 GeV² with controlled accuracy (leading order, next-to-leading order, and partial next-to-next-to-leading order). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at t = 0 (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives, which are not affected by higher-order chiral corrections and are obtained almost parameter-free in our approach, and explain their collective behavior. We estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-Q² FF data is achieved up to ≈0.5 GeV² for G_E, and up to ≈0.2 GeV² for G_M. Our results can be used to guide the analysis of low-Q² elastic scattering data and the extraction of the proton charge radius.

  13. GPS-Derived Precipitable Water Compared with the Air Force Weather Agency’s MM5 Model Output

    DTIC Science & Technology

    2002-03-26

    and less than 100 sensors are available throughout Europe. While the receiver density is currently comparable to the upper-air sounding network... profiles from 38 upper air sites throughout Europe. Based on these empirical formulae and simplifications, Bevis (1992) has determined that the error... Alaska using Bevis' (1992) empirical correlation based on 8718 radiosonde calculations over 2 years. Other studies have been conducted in Europe and

  14. Relationship Between Surface Reflectance in the Visible and Mid-IR used in MODIS Aerosol Algorithm-Theory

    NASA Technical Reports Server (NTRS)

    Kaufman, Yoram J.; Gobron, Nadine; Pinty, Bernard; Widlowski, Jean-Luc; Verstraete, Michel M.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument, which flies in polar orbit on the Terra platform, are used to derive the aerosol optical thickness and properties over land and ocean. The relationships between visible reflectance (at blue, ρ_blue, and red, ρ_red) and mid-infrared reflectance (at 2.1 microns, ρ_2.1) are used in the MODIS aerosol retrieval algorithm to derive the global distribution of aerosols over land. These relations have been established from a series of measurements indicating that ρ_blue ≈ 0.5 ρ_red ≈ 0.25 ρ_2.1. Here we use a model describing the transfer of radiation through a vegetation canopy composed of randomly oriented leaves to assess the theoretical foundations for these relationships. Calculations for a wide range of leaf area indices and vegetation fractions show that ρ_blue is consistently about 1/4 of ρ_2.1, as used by MODIS, for the whole range of analyzed cases, except for very dark soils, such as those found in burn scars. For its part, the ratio ρ_red/ρ_2.1 varies from less than the empirically derived value of 1/2 for dense and dark vegetation, to more than 1/2 for bright mixtures of soil and vegetation. This is in agreement with measurements over uniform dense vegetation, but not with measurements over mixed dark scenes. In the latter case the discrepancy is probably explained by shadows due to uneven canopy and terrain on a large scale. It is concluded that the value of this ratio should ideally be made dependent on the land cover type in the operational processing of MODIS data, especially over dense forests.

  15. Cosmogenic 36Cl in karst waters from Bunker Cave North Western Germany - A tool to derive local evapotranspiration?

    NASA Astrophysics Data System (ADS)

    Münsterer, C.; Fohlmeister, J.; Christl, M.; Schröder-Ritzrau, A.; Alfimov, V.; Ivy-Ochs, S.; Wackerbarth, A.; Mangini, A.

    2012-06-01

    Monthly rain and drip waters were collected over a period of 10 months at Bunker Cave, Germany. The concentration of 36Cl and the 36Cl/Cl ratios were determined by accelerator mass spectrometry (AMS), while stable (35+37)Cl concentrations were measured with both ion chromatography (IC) and AMS. The measured 36Cl fluxes of (0.97 ± 0.57) × 10⁴ atoms cm⁻² month⁻¹ in precipitation were on average twice as high as the global mean atmospheric production rate. This observation is consistent with the local fallout pattern, which is characterized by a maximum at mid-latitudes. The stable chloride concentrations in drip waters (ranging from 13.2 to 20.9 mg/l) and the 36Cl concentrations (ranging from 16.9 × 10⁶ to 35.3 × 10⁶ atoms/l) are factors of 7 and 10, respectively, above the values expected from empirical evapotranspiration formulas and the rain water concentrations. Most likely the additional stable Cl is due to human impact from a nearby urban conglomeration. The large 36Cl enrichment is attributed to the local evapotranspiration effect, which appears to be higher than the calculated values, and to additional bomb-derived 36Cl from nuclear weapons tests in the 1950s and 60s stored in the soil above the cave. In the densely vegetated soil above Bunker Cave, 36Cl seems not to behave as a completely conservative tracer. The bomb-derived 36Cl might be retained in the soil due to uptake by minerals and organic material and is still being released now. Based on our data, the residence time of 36Cl in the soil is estimated to be about 75-85 years.

  16. Domain walls and ferroelectric reversal in corundum derivatives

    NASA Astrophysics Data System (ADS)

    Ye, Meng; Vanderbilt, David

    2017-01-01

    Domain walls are the topological defects that mediate polarization reversal in ferroelectrics, and they may exhibit quite different geometric and electronic structures compared to the bulk. Therefore, a detailed atomic-scale understanding of the static and dynamic properties of domain walls is of pressing interest. In this work, we use first-principles methods to study the structures of 180° domain walls, both in their relaxed state and along the ferroelectric reversal pathway, in ferroelectrics belonging to the family of corundum derivatives. Our calculations predict their orientation, formation energy, and migration energy and also identify important couplings between polarization, magnetization, and chirality at the domain walls. Finally, we point out a strong empirical correlation between the height of the domain-wall-mediated polarization reversal barrier and the local bonding environment of the mobile A cations as measured by bond-valence sums. Our results thus provide both theoretical and empirical guidance for future searches for ferroelectric candidates in materials of the corundum derivative family.

  17. Domain walls and ferroelectric reversal in corundum derivatives

    NASA Astrophysics Data System (ADS)

    Ye, Meng; Vanderbilt, David

    Domain walls are the topological defects that mediate polarization reversal in ferroelectrics, and they may exhibit quite different geometric and electronic structures compared to the bulk. Therefore, a detailed atomic-scale understanding of the static and dynamic properties of domain walls is of pressing interest. In this work, we use first-principles methods to study the structures of 180° domain walls, both in their relaxed state and along the ferroelectric reversal pathway, in ferroelectrics belonging to the family of corundum derivatives. Our calculations predict their orientation, formation energy, and migration energy, and also identify important couplings between polarization, magnetization, and chirality at the domain walls. Finally, we point out a strong empirical correlation between the height of the domain-wall mediated polarization reversal barrier and the local bonding environment of the mobile A cations as measured by bond valence sums. Our results thus provide both theoretical and empirical guidance to further search for ferroelectric candidates in materials of the corundum derivative family. The work is supported by ONR Grant N00014-12-1-1035.

  18. Modeling Major Adverse Outcomes of Pediatric and Adult Patients With Congenital Heart Disease Undergoing Cardiac Catheterization: Observations From the NCDR IMPACT Registry (National Cardiovascular Data Registry Improving Pediatric and Adult Congenital Treatment).

    PubMed

    Jayaram, Natalie; Spertus, John A; Kennedy, Kevin F; Vincent, Robert; Martin, Gerard R; Curtis, Jeptha P; Nykanen, David; Moore, Phillip M; Bergersen, Lisa

    2017-11-21

    Risk standardization for adverse events after congenital cardiac catheterization is needed to equitably compare patient outcomes among different hospitals as a foundation for quality improvement. The goal of this project was to develop a risk-standardization methodology to adjust for patient characteristics when comparing major adverse outcomes in the NCDR's (National Cardiovascular Data Registry) IMPACT Registry (Improving Pediatric and Adult Congenital Treatment). Between January 2011 and March 2014, 39,725 consecutive patients within IMPACT undergoing cardiac catheterization were identified. Given the heterogeneity of interventional procedures for congenital heart disease, new procedure-type risk categories were derived with empirical data and expert opinion, as were markers of hemodynamic vulnerability. A multivariable hierarchical logistic regression model to identify patient and procedural characteristics predictive of a major adverse event or death after cardiac catheterization was derived in 70% of the cohort and validated in the remaining 30%. The rate of major adverse event or death was 7.1% and 7.2% in the derivation and validation cohorts, respectively. Six procedure-type risk categories and 6 independent indicators of hemodynamic vulnerability were identified. The final risk adjustment model included procedure-type risk category, number of hemodynamic vulnerability indicators, renal insufficiency, single-ventricle physiology, and coagulation disorder. The model had good discrimination, with a C-statistic of 0.76 and 0.75 in the derivation and validation cohorts, respectively. Model calibration in the validation cohort was excellent, with a slope of 0.97 (standard error, 0.04; P value [for difference from 1] = 0.53) and an intercept of 0.007 (standard error, 0.12; P value [for difference from 0] = 0.95). The creation of a validated risk-standardization model for adverse outcomes after congenital cardiac catheterization can support reporting of risk-adjusted outcomes in the IMPACT Registry as a foundation for quality improvement. © 2017 American Heart Association, Inc.

  19. Value of a dual-polarized gap-filling radar in support of southern California post-fire debris-flow warnings

    USGS Publications Warehouse

    Jorgensen, David P.; Hanshaw, Maiana N.; Schmidt, Kevin M.; Laber, Jayme L; Staley, Dennis M.; Kean, Jason W.; Restrepo, Pedro J.

    2011-01-01

    A portable truck-mounted C-band Doppler weather radar was deployed to observe rainfall over the Station Fire burn area near Los Angeles, California, during the winter of 2009/10 to assist with debris-flow warning decisions. The deployments were a component of a joint NOAA–U.S. Geological Survey (USGS) research effort to improve definition of the rainfall conditions that trigger debris flows from steep topography within recent wildfire burn areas. A procedure was implemented to blend various dual-polarized estimators of precipitation (for radar observations taken below the freezing level) using threshold values for differential reflectivity and specific differential phase shift that improves the accuracy of the rainfall estimates over a specific burn area sited with terrestrial tipping-bucket rain gauges. The portable radar outperformed local Weather Surveillance Radar-1988 Doppler (WSR-88D) National Weather Service network radars in detecting rainfall capable of initiating post-fire runoff-generated debris flows. The network radars underestimated hourly precipitation totals by about 50%. Consistent with intensity–duration threshold curves determined from past debris-flow events in burned areas in Southern California, the portable radar-derived rainfall rates exceeded the empirical thresholds over a wider range of storm durations with a higher spatial resolution than local National Weather Service operational radars. Moreover, the truck-mounted C-band radar dual-polarimetric-derived estimates of rainfall intensity provided a better guide to the expected severity of debris-flow events, based on criteria derived from previous events using rain gauge data, than traditional radar-derived rainfall approaches using reflectivity–rainfall relationships for either the portable or operational network WSR-88D radars. Part of the improvement was due to siting the radar closer to the burn zone than the WSR-88Ds, but use of the dual-polarimetric variables improved the rainfall estimation by ~12% over the use of traditional Z–R relationships.

  20. Estimation of debris flow critical rainfall thresholds by a physically-based model

    NASA Astrophysics Data System (ADS)

    Papa, M. N.; Medina, V.; Ciervo, F.; Bateman, A.

    2012-11-01

    Real-time assessment of debris flow hazard is fundamental for setting up warning systems that can mitigate debris flow risk. A convenient method to assess the possible occurrence of a debris flow is the comparison of measured and forecasted rainfall with rainfall threshold curves (RTC). Empirical derivation of the RTC from the analysis of the rainfall characteristics of past events is not possible when the database of observed debris flows is poor or when the environment changes with time. For landslide-triggered debris flows, these limitations may be overcome through the methodology presented here, based on the derivation of the RTC from a physically based model. The critical RTC are derived from mathematical and numerical simulations based on the infinite-slope stability model, in which land instability is governed by the increase in groundwater pressure due to rainfall. The effect of rainfall infiltration on landslide occurrence is modelled through a reduced form of the Richards equation. The simulations are performed in a virtual basin, representative of the studied basin, taking into account the uncertainties linked with the definition of the soil characteristics. A large number of calculations are performed combining different values of the rainfall characteristics (intensity and duration of the event rainfall and intensity of the antecedent rainfall). For each combination of rainfall characteristics, the percentage of the basin that is unstable is computed. The resulting database is then processed to derive the RTC. The methodology is implemented and tested on a small basin of the Amalfi Coast (southern Italy).
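
    A minimal sketch of the stability kernel inside such a model - the infinite-slope factor of safety with slope-parallel seepage, scanned over rising pore pressure - follows. All parameter values are illustrative assumptions; the full methodology couples this to a Richards-equation infiltration model that maps rainfall intensity and duration to pore pressure.

```python
import numpy as np

# Infinite-slope factor of safety FS for a planar slide at depth z, with pore
# pressure parameterized by the water-table ratio m = h_w / z.
def factor_of_safety(m, c=5e3, phi=np.radians(32), beta=np.radians(35),
                     gamma=19e3, gamma_w=9.81e3, z=2.0):
    """FS = resisting / driving stress on the failure plane.
    c: effective cohesion (Pa); phi: friction angle; beta: slope angle;
    gamma, gamma_w: unit weights of soil and water (N/m^3); z: depth (m)."""
    normal = gamma * z * np.cos(beta) ** 2           # total normal stress
    pore = m * gamma_w * z * np.cos(beta) ** 2       # pore water pressure
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return (c + (normal - pore) * np.tan(phi)) / driving

# Scan the water-table ratio to find where the slope fails (FS < 1); with these
# assumed parameters the crossing occurs around m ~ 0.4-0.5.
for m in np.linspace(0, 1, 11):
    print(f"m = {m:.1f}  FS = {factor_of_safety(m):.2f}")
```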

  1. Investigating the Biosynthesis of Membrane-spanning Lipids Using Model Strains of Acidobacteria

    NASA Astrophysics Data System (ADS)

    Bradley, A. S.; Chubiz, L. M.

    2016-12-01

    Glycerol dialkyl glycerol tetraethers (GDGTs), derived from the membrane-spanning lipids of microbes, are detected in a wide range of environments including marine and lacustrine waters, sediments, and terrestrial soils. In sediments and soils, ratios of various GDGT structures form the basis of the TEX86 proxy, based on isoprenoidal GDGTs derived from archaea, and the MBT/CBT proxy, based on bacterially derived branched GDGTs (brGDGTs), which is influenced by both temperature and pH. While the relationships of the proxy values to environmental variables have been empirically calibrated, much uncertainty remains in understanding the genetic and physiological factors that affect the production of these lipid structures by microbes. In this study we compare two model bacterial strains - Edaphobacter aggregans WGB-1, which has previously been demonstrated to produce brGDGTs (Damsté et al. 2011), and Edaphobacter modestus JBG-1 (a non-brGDGT producer) - to gain traction in understanding brGDGT production. We have sequenced each genome, facilitating comparisons that can be used to computationally generate hypotheses for genes involved in brGDGT biosynthesis. We will also report the results of initial experiments conducted to understand how the lipid profiles of each strain vary as a function of growth phase. Through a combination of genetic approaches and physiological experiments, we aim to bring new understanding to brGDGTs and how proxies derived from these lipids relate to environmental variables. Reference: Damsté et al. (2011) AEM 77:4147.

  2. Establishing appropriate inputs when using the mechanistic-empirical pavement design guide to design rigid pavements in Pennsylvania.

    DOT National Transportation Integrated Search

    2011-03-01

    Each design input in the Mechanistic-Empirical Design Guide (MEPDG) required for the design of Jointed Plain Concrete : Pavements (JPCPs) is introduced and discussed in this report. Best values for Pennsylvania conditions were established and : recom...

  3. Establishing Appropriate Inputs When Using the Mechanistic-Empirical Pavement Design Guide To Design Rigid Pavements in Pennsylvania

    DOT National Transportation Integrated Search

    2011-03-01

    Each design input in the Mechanistic-Empirical Design Guide (MEPDG) required for the design of Jointed Plain Concrete Pavements (JPCPs) is introduced and discussed in this report. Best values for Pennsylvania conditions were established and recommend...

  4. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

    The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, in fact superseding most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.
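
    The least-squares structure of force matching can be shown with a toy example: fit pair-potential parameters so that model forces best reproduce reference forces. Real schemes fit far richer functional forms to actual AIMD frames; everything below (the Lennard-Jones form, parameter values, noise) is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy force matching: fit (eps, sigma) of a Lennard-Jones pair force to
# "reference" forces standing in for ab initio MD data.
rng = np.random.default_rng(3)
r = rng.uniform(2.8, 6.0, 500)                  # O-O pair distances, Angstrom

def lj_force(r, eps, sigma):
    """Magnitude of the LJ pair force, -dU/dr with U = 4 eps ((s/r)^12 - (s/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 24 * eps * (2 * sr6 ** 2 - sr6) / r

# Reference forces from an assumed "true" parameter set, plus noise standing in
# for many-body contributions a pair potential cannot capture.
f_ref = lj_force(r, eps=0.65, sigma=3.16) + rng.normal(0, 0.02, r.size)

res = least_squares(lambda p: lj_force(r, *p) - f_ref, x0=[0.3, 3.0])
eps_fit, sigma_fit = res.x
print(f"eps = {eps_fit:.3f} kJ/mol, sigma = {sigma_fit:.3f} A (fitted)")
```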

  5. All-Atom Four-Body Knowledge-Based Statistical Potentials to Distinguish Native Protein Structures from Nonnative Folds

    PubMed Central

    2017-01-01

    Recent advances in understanding protein folding have benefitted from coarse-grained representations of protein structures. Empirical energy functions derived from these techniques occasionally succeed in distinguishing native structures from their corresponding ensembles of nonnative folds or decoys which display varying degrees of structural dissimilarity to the native proteins. Here we utilized atomic coordinates of single protein chains, comprising a large diverse training set, to develop and evaluate twelve all-atom four-body statistical potentials obtained by exploring alternative values for a pair of inherent parameters. Delaunay tessellation was performed on the atomic coordinates of each protein to objectively identify all quadruplets of interacting atoms, and atomic potentials were generated via statistical analysis of the data and implementation of the inverted Boltzmann principle. Our potentials were evaluated using benchmarking datasets from Decoys-‘R'-Us, and comparisons were made with twelve other physics- and knowledge-based potentials. Ranking 3rd, our best potential tied CHARMM19 and surpassed AMBER force field potentials. We illustrate how a generalized version of our potential can be used to empirically calculate binding energies for target-ligand complexes, using HIV-1 protease-inhibitor complexes for a practical application. The combined results suggest an accurate and efficient atomic four-body statistical potential for protein structure prediction and assessment. PMID:29119109
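
    The core pipeline - Delaunay tessellation to enumerate atom quadruplets, then an inverted-Boltzmann score per quadruplet type - can be sketched as follows. The atom typing, the random-mixing reference, and the data are placeholders for illustration, not the paper's training procedure.

```python
import numpy as np
from math import factorial
from collections import Counter
from scipy.spatial import Delaunay

rng = np.random.default_rng(11)
coords = rng.uniform(0, 30, size=(400, 3))          # stand-in atomic coordinates (A)
atom_types = rng.choice(["C", "N", "O", "S"], size=400)

# Delaunay tessellation objectively identifies quadruplets of interacting atoms:
# each tetrahedron (simplex) contributes one unordered quadruplet type.
tess = Delaunay(coords)
observed = Counter()
for tet in tess.simplices:
    observed[tuple(sorted(atom_types[i] for i in tet))] += 1
total = sum(observed.values())

# Expected frequency of a quadruplet type under random mixing (multinomial).
p = {t: n / len(atom_types) for t, n in Counter(atom_types).items()}

def potential(key):
    """Inverted Boltzmann: E = -ln(f_observed / f_expected); favorable types score negative."""
    f_obs = observed[key] / total
    counts = Counter(key)
    mult = factorial(4)
    for c in counts.values():
        mult //= factorial(c)
    f_exp = mult * float(np.prod([p[t] ** c for t, c in counts.items()]))
    return -np.log(f_obs / f_exp) if f_obs > 0 else np.inf

print(f"E(C,C,N,O) = {potential(('C', 'C', 'N', 'O')):.3f}")
```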

  6. Effect of collision energy optimization on the measurement of peptides by selected reaction monitoring (SRM) mass spectrometry.

    PubMed

    Maclean, Brendan; Tomazela, Daniela M; Abbatiello, Susan E; Zhang, Shucha; Whiteaker, Jeffrey R; Paulovich, Amanda G; Carr, Steven A; Maccoss, Michael J

    2010-12-15

    Proteomics experiments based on Selected Reaction Monitoring (SRM, also referred to as Multiple Reaction Monitoring or MRM) are being used to target large numbers of protein candidates in complex mixtures. At present, instrument parameters are often optimized for each peptide, a time- and resource-intensive process. Large SRM experiments are greatly facilitated by having the ability to predict MS instrument parameters that work well with the broad diversity of peptides they target. For this reason, we investigated the impact of using simple linear equations to predict the collision energy (CE) on peptide signal intensity and compared it with the empirical optimization of the CE for each peptide and transition individually. Using optimized linear equations, the difference between predicted and empirically derived CE values was found to be an average gain of only 7.8% of total peak area. We also found that existing commonly used linear equations fall short of their potential, and should be recalculated for each charge state and when introducing new instrument platforms. We provide a fully automated pipeline for calculating these equations and individually optimizing the CE of each transition on SRM instruments from Agilent, Applied Biosystems, Thermo-Scientific and Waters in the open source Skyline software tool (http://proteome.gs.washington.edu/software/skyline).
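
    A linear CE predictor is a one-liner; the sketch below shows the form. The slope/intercept pairs are of the kind shipped as instrument defaults (e.g., in Skyline) but are quoted here as illustrative assumptions and, as the paper recommends, should be recalculated per charge state and instrument platform.

```python
# Predicting collision energy as a linear function of precursor m/z.
# These (slope, intercept) pairs are assumed illustrative values, not
# authoritative defaults for any particular instrument.
CE_COEFFS = {
    2: (0.034, 3.314),   # 2+ precursors (assumed)
    3: (0.044, 3.314),   # 3+ precursors (assumed)
}

def predict_ce(mz: float, charge: int) -> float:
    """Linear CE prediction: CE = slope * (m/z) + intercept."""
    slope, intercept = CE_COEFFS[charge]
    return slope * mz + intercept

# Example: a doubly charged precursor at m/z 785.8.
print(f"CE = {predict_ce(785.8, 2):.1f} eV")
```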

  7. Monastic incorporation of classical botanic medicines into the Renaissance pharmacopeia.

    PubMed

    Petrucelli, R J

    1994-01-01

    Ancient Greek physicians believed that health resulted from a balance of natural forces. Many, including Dioscorides, made compilations of plants and medicines derived from them, giving prominence to diuretics, cathartics and emetics. During the Roman Empire, although Greek physicians were highly valued, the Roman matron performed many medical functions and magic and astrology were increasingly used. In Judaic and later Christian societies disease was equated with divine disfavor. After the fall of Rome, the classical Greek medical texts were mainly preserved in Latin translation by the Benedictine monasteries, which were based around a patient infirmary, a herb garden and a library. Local plants were often substituted for the classical ones, however, and the compilations became confused and inaccurate. Greek medicine survived better in the remains of the Eastern Roman Empire, and benefitted from the influence of Arab medicine. Intellectual revival, when it came to Europe, did so on the fringes of the Moslem world, and Montpellier and Salerno were among the first of the new medical centers. Rather than relying on ancient experts, the new experimental method reported the tested effects of substances from identified plants. This advance was fostered by the foundation of universities and greatly aided by the later invention of the printing press, which also allowed wider dissemination of the classical texts.

  8. Challenging the classical notion of time in cognition: a quantum perspective

    PubMed Central

    Yearsley, James M.; Pothos, Emmanuel M.

    2014-01-01

    All mental representations change with time. A baseline intuition is that mental representations have specific values at different time points, which may be more or less accessible, depending on noise, forgetting processes, etc. We present a radical alternative, motivated by recent research using the mathematics from quantum theory for cognitive modelling. Such cognitive models raise the possibility that certain possibilities or events may be incompatible, so that perfect knowledge of one necessitates uncertainty for the others. In the context of time-dependence, in physics, this issue is explored with the so-called temporal Bell (TB) or Leggett–Garg inequalities. We consider in detail the theoretical and empirical challenges involved in exploring the TB inequalities in the context of cognitive systems. One interesting conclusion is that we believe the study of the TB inequalities to be empirically more constrained in psychology than in physics. Specifically, we show how the TB inequalities, as applied to cognitive systems, can be derived from two simple assumptions: cognitive realism and cognitive completeness. We discuss possible implications of putative violations of the TB inequalities for cognitive models and our understanding of time in cognition in general. Overall, this paper provides a surprising, novel direction in relation to how time should be conceptualized in cognition. PMID:24598421
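
    For orientation, the simplest three-time Leggett-Garg (temporal Bell) inequality - the constraint whose cognitive analogue is discussed here - takes the standard form below; it is supplied for context and is not quoted from the article:

```latex
% Leggett-Garg / temporal Bell inequality for a dichotomous quantity Q(t) = +/-1,
% measured pairwise at times t1 < t2 < t3, with C_ij = <Q(t_i) Q(t_j)>:
\[
K_{3} = C_{12} + C_{23} - C_{13} \le 1 .
\]
% Macrorealism (here: cognitive realism plus cognitive completeness) implies the
% bound; quantum-like models can violate it, up to K_3 = 3/2 for a two-level system.
```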

  9. The Ca II infrared triplet's performance as an activity indicator compared to Ca II H and K. Empirical relations to convert Ca II infrared triplet measurements to common activity indices

    NASA Astrophysics Data System (ADS)

    Martin, J.; Fuhrmeister, B.; Mittag, M.; Schmidt, T. O. B.; Hempelmann, A.; González-Pérez, J. N.; Schmitt, J. H. M. M.

    2017-09-01

    Aims: A large number of Calcium infrared triplet (IRT) spectra are expected from the Gaia and CARMENES missions. Conversion of these spectra into known activity indicators will allow analysis of their temporal evolution to a better degree. We set out to find such a conversion formula and to determine its robustness. Methods: We have compared 2274 Ca II IRT spectra of active main-sequence F to K stars taken by the TIGRE telescope with those of inactive stars of the same spectral type. After normalizing and applying rotational broadening, we subtracted the comparison spectra to find the chromospheric excess flux caused by activity. We obtained the total excess flux, and compared it to established activity indices derived from the Ca II H and K lines, the spectra of which were obtained simultaneously with the infrared spectra. Results: The excess flux in the Ca II IRT is found to correlate well with R'_HK and R+_HK, as well as S_MWO, if the B − V dependency is taken into account. We find an empirical conversion formula to calculate the corresponding value of one activity indicator from the measurement of another, by comparing groups of datapoints of stars with similar B − V.

  10. Challenging the classical notion of time in cognition: a quantum perspective.

    PubMed

    Yearsley, James M; Pothos, Emmanuel M

    2014-04-22

    All mental representations change with time. A baseline intuition is that mental representations have specific values at different time points, which may be more or less accessible, depending on noise, forgetting processes, etc. We present a radical alternative, motivated by recent research using the mathematics from quantum theory for cognitive modelling. Such cognitive models raise the possibility that certain possibilities or events may be incompatible, so that perfect knowledge of one necessitates uncertainty for the others. In the context of time-dependence, in physics, this issue is explored with the so-called temporal Bell (TB) or Leggett-Garg inequalities. We consider in detail the theoretical and empirical challenges involved in exploring the TB inequalities in the context of cognitive systems. One interesting conclusion is that we believe the study of the TB inequalities to be empirically more constrained in psychology than in physics. Specifically, we show how the TB inequalities, as applied to cognitive systems, can be derived from two simple assumptions: cognitive realism and cognitive completeness. We discuss possible implications of putative violations of the TB inequalities for cognitive models and our understanding of time in cognition in general. Overall, this paper provides a surprising, novel direction in relation to how time should be conceptualized in cognition.

  11. Human Resource Development and Organizational Values

    ERIC Educational Resources Information Center

    Hassan, Arif

    2007-01-01

    Purpose: Organizations create mission statements and emphasize core values. Inculcating those values depends on the way employees are treated and nurtured. Therefore, there seems to be a strong relationship between human resource development (HRD) practices and organizational values. The paper aims to empirically examine this relationship.…

  12. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
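
    The physical link being exploited is the drag equation: along-track drag deceleration is proportional to density, so inverting it for a known ballistic coefficient yields a local density estimate. The sketch below uses CHAMP-like illustrative numbers; the actual POE method estimates density and ballistic coefficients within an orbit-determination filter rather than pointwise.

```python
def density_from_drag(a_drag: float, v_rel: float, cd_a_over_m: float) -> float:
    """rho = 2 * a_drag / ((Cd*A/m) * v_rel^2), from a_drag = 0.5 rho (CdA/m) v^2."""
    return 2.0 * a_drag / (cd_a_over_m * v_rel ** 2)

a_drag = 2.0e-7        # m/s^2, along-track drag acceleration (illustrative)
v_rel = 7600.0         # m/s, speed relative to the co-rotating atmosphere
cd_a_over_m = 0.004    # m^2/kg, ballistic coefficient Cd*A/m (assumed)

rho = density_from_drag(a_drag, v_rel, cd_a_over_m)
print(f"rho = {rho:.2e} kg/m^3")   # of order 1e-12 kg/m^3 near ~400 km, as expected
```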

  13. Thermospheric winds and exospheric temperatures from incoherent scatter radar measurements in four seasons

    NASA Technical Reports Server (NTRS)

    Antoniadis, D. A.

    1976-01-01

    The time-dependent equations of neutral air motion are solved subject to three constraints: two of them are the usual upper and lower boundary conditions, and the third is the value of the wind-induced ion drift at any given height. Using incoherent radar data, this procedure leads to a fast, direct numerical integration of the two coupled differential equations describing the horizontal wind components and yields time-dependent wind profiles and meridional exospheric neutral temperature gradients. The diurnal behavior of the neutral wind system and of the exospheric temperature is presented for two solstice and two equinox days. The data used were obtained by the St. Santin and the Millstone Hill incoherent scatter radars. The derived geographic distributions of the exospheric temperatures are compared with those predicted by the OGO-6 empirical thermospheric model.

  14. Measuring meaning in life following cancer

    PubMed Central

    Jim, Heather S.; Purnell, Jason Q.; Richardson, Susan A.; Golden-Kreutz, Deanna; Andersen, Barbara L.

    2007-01-01

    Meaning in life is a multi-faceted construct that has been conceptualized in diverse ways. It refers broadly to the value and purpose of life, important life goals, and for some, spirituality. We developed a measure of meaning in life derived from this conceptualization and designed to be a synthesis of relevant theoretical and empirical traditions. Two samples, all cancer patients, provided data for scale development and psychometric study. From exploratory and confirmatory factor analyses the Meaning in Life Scale (MiLS) emerged, which includes four aspects: Harmony and Peace; Life Perspective, Purpose and Goals; Confusion and Lessened Meaning; and Benefits of Spirituality. Supporting data for reliability (internal consistency, test–retest) and construct validity (convergent, discriminant, individual differences) are provided. The MiLS offers a theoretically based and psychometrically sound assessment of meaning in life suitable for use with cancer patients. PMID:16838197

  15. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    PubMed

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is the development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on a signal-derived Empirical Mode Decomposition (EMD) dictionary is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that the coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus no hand-engineered features have to be extracted from the data as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using a support vector machine classifier and five-fold cross-validation. These results are compared with other approaches using the same database, and the suitability of the approach for seizure detection in long-term multi-channel EEG recordings is discussed. Significance: The proposed approach describes a computationally efficient method for automatic seizure detection in long-term multi-channel EEG recordings. The method does not rely on hand-engineered features, as are required in traditional approaches. Furthermore, the approach is suitable for scenarios where the dictionary, once formed and trained, can be used for automatic seizure detection of newly recorded data, making the approach suitable for long-term multi-channel EEG recordings. © 2018 IOP Publishing Ltd.
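    The projection-based classification step can be illustrated with a toy sketch (all shapes and data below are synthetic; in the paper the atoms are intrinsic mode functions learned from training EEG, not random vectors):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Stand-in trained dictionary: 8 atoms of 256 samples, orthonormalized
        # so that projection coefficients are plain dot products.
        D, _ = np.linalg.qr(rng.standard_normal((256, 8)))

        # Hypothetical EEG epochs (100 segments x 256 samples) with labels
        # 1 = seizure, 0 = non-seizure.
        X_epochs = rng.standard_normal((100, 256))
        y = rng.integers(0, 2, 100)

        # Features are the coefficients of the orthogonal projection of each
        # epoch onto the dictionary atoms -- no hand-engineered features.
        features = X_epochs @ D

        clf = SVC(kernel="rbf")
        print(cross_val_score(clf, features, y, cv=5).mean())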

  16. Nonparametric spirometry reference values for Hispanic Americans.

    PubMed

    Glenn, Nancy L; Brown, Vanessa M

    2011-02-01

    Recent literature cites ethnic origin as a major factor in developing pulmonary function reference values. Extensive studies established reference values for European and African Americans, but not for Hispanic Americans. The Third National Health and Nutrition Examination Survey defines Hispanic as individuals of Spanish-speaking cultures. While no group was excluded from the target population, sample size requirements only allowed inclusion of individuals who identified themselves as Mexican Americans. This research constructs nonparametric reference value confidence intervals for Hispanic American pulmonary function. The method is applicable to all ethnicities. We use empirical likelihood confidence intervals to establish normal ranges for reference values. Their major advantage: they are model-free, yet share the asymptotic properties of model-based methods. Statistical comparisons indicate that empirical likelihood interval lengths are comparable to those of normal theory intervals. Power and efficiency studies agree with previously published theoretical results.

  17. BODYFIT-1FE: a computer code for three-dimensional steady-state/transient single-phase rod-bundle thermal-hydraulic analysis. Draft report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, B.C.J.; Sha, W.T.; Doria, M.L.

    1980-11-01

    The governing equations, i.e., conservation equations for mass, momentum, and energy, are solved as a boundary-value problem in space and an initial-value problem in time. The BODYFIT-1FE code uses the technique of boundary-fitted coordinate systems, where all the physical boundaries are transformed to be coincident with constant coordinate lines in the transformed space. By using this technique, one can prescribe boundary conditions accurately without interpolation. The transformed governing equations in terms of the boundary-fitted coordinates are then solved by using an implicit cell-by-cell procedure with a choice of either central or upwind convective derivatives. It is a true benchmark rod-bundle code without invoking any assumptions in the case of laminar flow. However, for turbulent flow, some empiricism must be employed due to the closure problem of turbulence modeling. The detailed velocity and temperature distributions calculated from the code can be used to benchmark and calibrate empirical coefficients employed in subchannel codes and porous-medium analyses.

  18. Trends and variability of cloud fraction cover in the Arctic, 1982-2009

    NASA Astrophysics Data System (ADS)

    Boccolari, Mauro; Parmiggiani, Flavio

    2018-05-01

    Climatology, trends and variability of cloud fraction cover (CFC) data over the Arctic (north of 70°N) were analysed over the 1982-2009 period. Data, available from the Climate Monitoring Satellite Application Facility (CM SAF), are derived from satellite measurements by AVHRR. Climatological means confirm permanently high CFC values over the Atlantic sector throughout the year and, during summer, over the eastern Arctic Ocean. Lower values are found in the rest of the analysed area, especially over Greenland and the Canadian Archipelago, nearly continuously in all months. These results are confirmed by CFC trends and variability. Statistically significant trends were found in all months over the Greenland Sea, particularly during the winter season (negative, below -5% per decade), and over the Beaufort Sea in spring (positive, above +5% per decade). CFC variability, investigated by Empirical Orthogonal Functions, shows a substantial "non-variability" in the Northern Atlantic Ocean. Statistically significant correlations between the CFC principal components and both the Pacific Decadal Oscillation index and the Pacific North American pattern are found.
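    In practice, EOF analysis of a gridded anomaly field reduces to a singular value decomposition of the space-time data matrix; a minimal sketch with synthetic data (the dimensions are invented and no CM SAF specifics are assumed):

        import numpy as np

        rng = np.random.default_rng(1)
        cfc = rng.standard_normal((336, 4800))   # 336 monthly maps x 4800 grid cells
        anom = cfc - cfc.mean(axis=0)            # remove the climatological mean

        # Rows of Vt are the spatial EOF patterns; U*s are the principal
        # component (PC) time series used for the teleconnection correlations.
        U, s, Vt = np.linalg.svd(anom, full_matrices=False)
        explained = s**2 / np.sum(s**2)          # variance fraction per mode
        pcs = U * s
        print(explained[:3])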

  19. A Test of Maxwell's Z Model Using Inverse Modeling

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Schultz, P. H.; Heineck, T.

    2003-01-01

    In modeling impact craters a small region of energy and momentum deposition, commonly called a "point source", is often assumed. This assumption implies that an impact is the same as an explosion at some depth below the surface. Maxwell's Z Model, an empirical point-source model derived from explosion cratering, has previously been compared with numerical impact craters with vertical incidence angles, leading to two main inferences. First, the flow-field center of the Z Model must be placed below the target surface in order to replicate numerical impact craters. Second, for vertical impacts, the flow-field center cannot be stationary if the value of Z is held constant; rather, the flow-field center migrates downward as the crater grows. The work presented here evaluates the utility of the Z Model for reproducing both vertical and oblique experimental impact data obtained at the NASA Ames Vertical Gun Range (AVGR). Specifically, ejection angle data obtained through Three-Dimensional Particle Image Velocimetry (3D PIV) are used to constrain the parameters of Maxwell's Z Model, including the value of Z and the depth and position of the flow-field center via inverse modeling.
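    For reference, the Z Model's subsurface flow field is usually written (general literature form, not taken from this abstract) as

        v_r = \frac{\alpha(t)}{r^{Z}},

    where r is the distance from the flow-field center and \alpha(t) sets the flow strength; incompressibility then fixes the streamlines, and for a flow-field center at the target surface the ejection angle \theta above the horizontal satisfies \tan\theta = Z - 2, which is why measured ejection angles such as the 3D PIV data can constrain Z directly.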

  20. Competency-Based Curriculum Development: A Pragmatic Approach

    ERIC Educational Resources Information Center

    Broski, David; And Others

    1977-01-01

    Examines the concept of competency-based education, describes an experience-based model for its development, and discusses some empirically derived rules-of-thumb for its application in allied health. (HD)

  1. Statistics of Static Stress Earthquake Triggering

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Ouillon, G.; Woessner, J.; Sornette, D.; Wiemer, S.

    2014-12-01

    A likely source of earthquake clustering is static and/or dynamic stresses transferred by individual events. Previous attempts to quantify the role of static stress generally considered only the stress changes caused by large events, and often discarded data uncertainties. We test the static stress change hypothesis empirically by considering all events of magnitude M ≥ 2.1, and the uncertainties in location and focal mechanism, in the focal mechanism catalog for Southern California between 1981 and 2010 (Yang et al., 2011). We quantify: how the waiting time Δt between earthquakes relates to the Coulomb stress change ΔCFS induced by event Ei at the location of Ej; and how significant the Coulomb Index (CI), the fraction of source-receiver pairs with positive ΔCFS interactions, conditioned on time and amplitude of ΔCFS, is compared to a mean-field CI derived from the time-independent structure of the fault network. We approximate the waiting-time distributions empirically by a rate of the form f(t) = [K(t + c)^-p + B]e^(-t/τ), which consists of triggering and background components, tapered by an exponential term to model the finiteness of the catalog. We observe that K/(Bc^p) (the ratio of the triggering to the background rates at t = 0), the exponent p, and the Maxwell time τ all increase with |ΔCFS| and are significantly larger for positive than for negative ΔCFS. τ varies between ~90 days and ~150 days (approximately 0.3 decades over 6 decades of variation in stress). It defines the time beyond which the memory of stress is overprinted by the occurrence of other events. The CI values become significant above a threshold |ΔCFS|. The mean-field CI is 52%, while the maximum observed CI value is ~60%. Correcting for the focal plane ambiguity, those values become respectively ~55% and ~72%. Lastly, the CI values decrease with the waiting time and converge to the mean-field CI value. The increase of the p-value and of K/(Bc^p) with |ΔCFS| contradicts the prediction of stress shadow regions where seismicity is suppressed if ΔCFS < 0. Our results rather suggest a spatially ubiquitous triggering process compatible with dynamic triggering, modulated by the sign and amplitude of the static stress field. We also conclude that static stress-based forecasts should not be performed over time scales much larger than τ, which is of the order of a few hundred days.

  2. Potential relative increment (PRI): a new method to empirically derive optimal tree diameter growth

    Treesearch

    Don C Bragg

    2001-01-01

    Potential relative increment (PRI) is a new method to derive optimal diameter growth equations using inventory information from a large public database. Optimal growth equations for 24 species were developed using plot and tree records from several states (Michigan, Minnesota, and Wisconsin) of the North Central US. Most species were represented by thousands of...

  3. Predictive and mechanistic multivariate linear regression models for reaction development

    PubMed Central

    Santiago, Celine B.; Guo, Jing-Yao

    2018-01-01

    Multivariate Linear Regression (MLR) models utilizing computationally-derived and empirically-derived physical organic molecular descriptors are described in this review. Several reports demonstrating the effectiveness of this methodological approach towards reaction optimization and mechanistic interrogation are discussed. A detailed protocol to access quantitative and predictive MLR models is provided as a guide for model development and parameter analysis. PMID:29719711
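    A minimal sketch of the MLR workflow such reviews describe: regress a reaction outcome on a few physical organic descriptors and check predictive ability by cross-validation (all numbers are synthetic and the descriptor choices purely illustrative):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)

        # 40 hypothetical substrates x 3 descriptors (e.g. a Sterimol parameter,
        # an IR stretching frequency, an NBO charge).
        X = rng.standard_normal((40, 3))
        # Response such as a relative activation energy, linear in descriptors plus noise.
        ddG = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.1, 40)

        model = LinearRegression().fit(X, ddG)
        print(model.coef_)                                   # descriptor weights
        print(cross_val_score(model, X, ddG, cv=5).mean())   # predictive check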

  4. An Empirically-Derived Index of High School Academic Rigor. ACT Working Paper 2017-5

    ERIC Educational Resources Information Center

    Allen, Jeff; Ndum, Edwin; Mattern, Krista

    2017-01-01

    We derived an index of high school academic rigor by optimizing the prediction of first-year college GPA based on high school courses taken, grades, and indicators of advanced coursework. Using a large data set (n~108,000) and nominal parameterization of high school course outcomes, the high school academic rigor (HSAR) index capitalizes on…

  5. Task 4 : testing Iowa Portland cement concrete mixtures for the AASHTO mechanistic-empirical pavement design procedure.

    DOT National Transportation Integrated Search

    2008-05-01

    The present research project was designed to identify the typical Iowa material input values that are required by the Mechanistic- : Empirical Pavement Design Guide (MEPDG) for the Level 3 concrete pavement design. It was also designed to investigate...

  6. The Empirically Supported Status of Acceptance and Commitment Therapy: An Update

    ERIC Educational Resources Information Center

    Smout, Matthew F.; Hayes, Louise; Atkins, Paul W. B.; Klausen, Jessica; Duguid, James E.

    2012-01-01

    Acceptance and commitment therapy (ACT) is a transdiagnostic cognitive behavioural therapy that predominantly teaches clients acceptance and mindfulness skills, as well as values clarification and enactment skills. Australian treatment guideline providers have been cautious in recognising ACT as empirically supported. This article reviews evidence…

  7. Development of a Reparametrized Semi-Empirical Force Field to Compute the Rovibrational Structure of Large PAHs

    NASA Astrophysics Data System (ADS)

    Fortenberry, Ryan

    The Spitzer Space Telescope observation of spectra most likely attributable to diverse and abundant populations of polycyclic aromatic hydrocarbons (PAHs) in space has led to tremendous interest in these molecules as tracers of the physical conditions in different astrophysical regions. A major challenge in using PAHs as molecular tracers is the complexity of the spectral features in the 3-20 μm region. The large number and vibrational similarity of the putative PAHs responsible for these spectra necessitate determination of the most accurate basis spectra possible for comparison. It is essential that these spectra be established in order for the regions explored with the newest generation of observatories such as SOFIA and JWST to be understood. Current strategies to develop these spectra for individual PAHs involve either matrix-isolation IR measurements or quantum chemical calculations of harmonic vibrational frequencies. These strategies have been employed to develop the successful PAH IR spectral database as a repository of basis functions used to fit astronomically observed spectra, but they are limited in important ways. Both techniques provide an adequate description of the molecules in their electronic, vibrational, and rotational ground state, but these conditions do not represent energetically hot regions for PAHs near strong radiation fields of stars and are not direct representations of the gas phase. Some non-negligible matrix effects are known in condensed-phase studies, and the inclusion of anharmonicity in quantum chemical calculations is essential to generate physically relevant results, especially for hot bands. While scaling factors in either case can be useful, they are agnostic to the system studied and are not robustly predictive. One strategy that has emerged to calculate the molecular vibrational structure uses vibrational perturbation theory along with a quartic force field (QFF) to account for higher-order derivatives of the potential energy surface. QFFs can regularly predict the fundamental vibrational frequencies to within 5 cm-1 of experimentally measured values. This level of accuracy represents a reduction in discrepancies by an order of magnitude compared with harmonic frequencies calculated with density functional theory (DFT). The major limitation of the QFF strategy is that the level of electronic-structure theory required to develop a predictive force field is prohibitively time-consuming for molecular systems larger than 5 atoms. Recent advances in QFF techniques utilizing informed DFT approaches have pushed the size of the systems studied up to 24 heavy atoms, but relevant PAHs can have up to hundreds of atoms. We have developed alternative electronic-structure methods that maintain the accuracy of the coupled-cluster calculations extrapolated to the complete basis set limit with relativistic and core correlation corrections applied: the CcCR QFF. These alternative methods are based on simplifications of Hartree-Fock theory in which the computationally intensive two-electron integrals are approximated using empirical parameters. These methods reduce computational time to orders of magnitude less than the CcCR calculations. We have derived a set of optimized empirical parameters that minimize the difference from the reference CcCR energies for molecular ions of astrochemical significance. We have shown that it is possible to derive a set of empirical parameters that will produce RMS energy differences of less than 2 cm-1 for our test systems.
We are proposing to adopt this reparameterization strategy and some of the lessons learned from the informed DFT studies to create a semi-empirical method whose tremendous speed will allow us to study the rovibrational structure of large PAHs with up to hundreds of carbon atoms.
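    For reference, a quartic force field is the fourth-order Taylor expansion of the potential about equilibrium (the standard definition, not something specific to this proposal):

        V = \frac{1}{2}\sum_{ij} F_{ij}\,\Delta_i\Delta_j
          + \frac{1}{6}\sum_{ijk} F_{ijk}\,\Delta_i\Delta_j\Delta_k
          + \frac{1}{24}\sum_{ijkl} F_{ijkl}\,\Delta_i\Delta_j\Delta_k\Delta_l,

    where the \Delta_i are displacements from the equilibrium geometry and the F coefficients are second, third, and fourth energy derivatives; vibrational perturbation theory applied to this expansion yields the anharmonic fundamentals discussed above.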

  8. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
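    Schematically, writing \theta_d for the parameter vector fitted to dataset d, the three formulations differ only in the assumed prior structure (notation ours, for illustration):

        1) global:        \theta_d = \theta  for all d
        2) separate:      \theta_d \sim p(\theta)  independently for each d
        3) hierarchical:  \theta_d \mid \varphi \sim p(\theta \mid \varphi), \quad \varphi \sim p(\varphi)

    In the hierarchical case, information is shared across datasets only through the hyperparameters \varphi, which is also where the prior sensitivity noted above enters.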

  9. Empirical modeling ENSO dynamics with complex-valued artificial neural networks

    NASA Astrophysics Data System (ADS)

    Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry

    2016-04-01

    The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Niño-Southern Oscillation - ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, for efficient reduction of observational data sets we use complex-valued (Hilbert) empirical orthogonal functions which, unlike traditional empirical orthogonal functions, are by their nature appropriate for describing propagating structures. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the Jin-Neelin-Ghil ENSO model [1] behavior and real ENSO variability from sea surface temperature anomalies data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
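    Complex (Hilbert) EOFs are obtained by taking the analytic signal of each grid-point time series before the decomposition, so that propagating structures appear as phase gradients; a minimal sketch with synthetic data (no claim about the authors' exact preprocessing):

        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(3)
        ssta = rng.standard_normal((480, 500))   # 480 months x 500 grid points

        # Analytic signal along the time axis: real part is the data,
        # imaginary part is its Hilbert transform.
        analytic = hilbert(ssta, axis=0)
        analytic = analytic - analytic.mean(axis=0)

        U, s, Vt = np.linalg.svd(analytic, full_matrices=False)
        pcs = U * s        # complex PC time series (amplitude and phase)
        eofs = Vt          # complex spatial patterns
        print(np.abs(eofs[0]).max())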

  10. Assessing the importance of self-regulating mechanisms in diamondback moth population dynamics: application of discrete mathematical models.

    PubMed

    Nedorezov, Lev V; Löhr, Bernhard L; Sadykova, Dinara L

    2008-10-07

    The applicability of discrete mathematical models for the description of diamondback moth (DBM) (Plutella xylostella L.) population dynamics was investigated. The parameter values for several well-known discrete time models (Skellam, Moran-Ricker, Hassell, Maynard Smith-Slatkin, and discrete logistic models) were estimated for an experimental time series from a highland cabbage-growing area in eastern Kenya. For all sets of parameters, boundaries of confidence domains were determined. Maximum calculated birth rates varied between 1.086 and 1.359 when empirical values were used for parameter estimation. After fitting of the models to the empirical trajectory, all birth rate values were considerably higher (1.742-3.526). The carrying capacity was determined to lie between 13.0 and 39.9 DBM/plant; after fitting of the models, these values declined to 6.48-9.3, all well within the range encountered empirically. The application of the Durbin-Watson criterion for comparison of theoretical and experimental population trajectories produced negative correlations with all models. A test of residual value groupings for randomness showed that their distribution is non-stochastic. In consequence, we conclude that DBM dynamics cannot be explained as a result of intra-population self-regulative mechanisms only (=by any of the models tested) and that more comprehensive models are required for the explanation of DBM population dynamics.
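    For one of the models tested, the Moran-Ricker map, parameter estimation from a time series can be sketched as follows (the counts below are invented, not the Kenyan data):

        import numpy as np
        from scipy.optimize import curve_fit

        def ricker(n, r, k):
            # Moran-Ricker map: expected next-generation density.
            return n * np.exp(r * (1.0 - n / k))

        counts = np.array([2.1, 4.0, 6.5, 8.2, 7.1, 7.9, 7.4, 7.8, 7.5])
        n_t, n_next = counts[:-1], counts[1:]

        (r_hat, k_hat), cov = curve_fit(ricker, n_t, n_next, p0=(1.0, 8.0))
        # exp(r) is the maximum per-capita birth rate as density -> 0.
        print(np.exp(r_hat), k_hat)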

  11. Derivative financial instruments and nonprofit health care providers.

    PubMed

    Stewart, Louis J; Owhoso, Vincent

    2004-01-01

    This article examines the extent of derivative financial instrument use among US nonprofit health systems and the impact of these financial instruments on their cash flows, reported operating results, and financial risks. Our examination is conducted through a case study of New Jersey hospitals and health systems. We review the existing literature on interest rate derivative instruments and US hospitals and health systems. This literature describes the design of these derivative financial instruments and the theoretical benefits of their use by large health care provider organizations. Our contribution to the literature is to provide an empirical evaluation of derivative financial instrument usage among a geographically limited sample of US nonprofit health systems. We reviewed the audited financial statements of the 49 community hospitals and multi-hospital health systems operating in the state of New Jersey. We found that 8 percent of New Jersey's nonprofit health providers utilized interest rate derivatives with an aggregate principal value of $229 million. These derivative users combine interest rate swaps and caps to lower the effective interest costs of their long-term debt while limiting their exposure to future interest rate increases. In addition, while derivative assets and liabilities have an immaterial balance sheet impact, derivative-related gains and losses are a material component of their reported operating results. We also found that derivative usage among these four health systems was responsible for generating positive cash flows in the range of 1 percent to 2 percent of their total 2001 cash flows from operations. Given our admittedly limited sample, we conclude that interest rate swaps and caps are effective risk management tools. However, we also found that while these derivative financial instruments are useful hedges against the risks of issuing long-term financing instruments, they also expose derivative users to credit, contract termination, and interest rate volatility risks. In conclusion, we find that these financial instruments can generate negative as well as positive cash flows and can have both positive and negative impacts on reported operating results.

  12. Changing National Forest Values: a content analysis.

    Treesearch

    David N. Bengston; Zhi Xu

    1995-01-01

    Empirically analyzes the evolution of national forest values in recent years. A computerized content analysis procedure was developed and used to analyze the forest value systems of forestry professionals, mainstream environmentalists, and the public. National forest values were found to have shifted significantly over the study period.

  13. Analytic, empirical and delta method temperature derivatives of D-D and D-T fusion reactivity formulations, as a means of verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbrunner, James R.; Booker, Jane M.

    We examine the derivatives with respect to temperature for various deuterium-tritium (D-T) and deuterium-deuterium (D-D) fusion-reactivity formulations. Langenbrunner and Makaruk [1] studied this as a means of understanding the time and temperature domain of reaction history measured in dynamic fusion experiments. Here, we consider the temperature-derivative dependence of fusion reactivity as a means of exercising and verifying the consistency of the various reactivity formulations.
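    The consistency check itself is simple to sketch: compare an analytic temperature derivative against a central finite difference (the power-law reactivity below is a placeholder, not one of the formulations the authors examine):

        import numpy as np

        def reactivity(T):
            # Placeholder <sigma*v>(T) fit; purely illustrative, not physical.
            return 1.1e-18 * T**2

        def d_reactivity_analytic(T):
            # Analytic d<sigma*v>/dT of the placeholder above.
            return 2.2e-18 * T

        T = np.linspace(2.0, 20.0, 50)   # temperature grid, keV
        h = 1e-4
        d_numeric = (reactivity(T + h) - reactivity(T - h)) / (2.0 * h)
        print(np.max(np.abs(d_numeric - d_reactivity_analytic(T))))  # ~0 if consistent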

  14. The galactic reddening law - The evidence from uvby-beta photometry of B stars

    NASA Astrophysics Data System (ADS)

    Tobin, W.

    1985-01-01

    Values of interstellar reddening derived from uvby photometry of intermediate- and high-latitude B stars are used to test between the conflicting ideas of total galactic reddening expounded by Burstein and Heiles (1982) and de Vaucouleurs and Buta (1983). B stars are useful tracers of the galactic reddening because of their empirically and theoretically well-defined colours and their large distances, but peculiar colours can result in an overestimate of the interstellar reddening, and Nicolet's (1982) B-star estimates of the polar reddening are too high because of this. Selection criteria are developed to exclude B stars with peculiar colours, and 72 selected B stars more than 250 pc from the galactic plane support the Burstein and Heiles zero-point of galactic reddening. The evidence of a few stars supports Burstein and Heiles' use of deep galaxy counts to provide a first-order correction for variations in the dust-to-gas ratio, but for corrections E(b - y) > 0.03 the accuracy may be less than their claimed 10%. However, the comparison of photometrically derived values of interstellar reddening with values predicted by some model is inevitably partly subjective unless an extensive study is made of every individual star: otherwise, any insufficiently red star can always plausibly be discounted as not lying outside all of the galactic dust, and any star that is too red can always plausibly be discounted as, for example, an undetected binary or emission-line star. The Burstein and Heiles maps are used to determine the intrinsic colours of some slightly-reddened B stars. B stars with projected rotational velocities of 250-300 km s-1 do not appear to be significantly redder than the Crawford (1978) standard relation.

  15. Empirical Bayes estimation of proportions with application to cowbird parasitism rates

    USGS Publications Warehouse

    Link, W.A.; Hahn, D.C.

    1996-01-01

    Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
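    The shrinkage computation at the heart of this approach can be sketched with a beta-binomial empirical Bayes estimator (a crude method-of-moments version; the counts are invented, and the moment fit ignores the extra binomial sampling variance a careful analysis would account for):

        import numpy as np

        # Hypothetical data: parasitized nests (x) out of nests found (n) per species.
        x = np.array([1, 9, 0, 14, 3, 2, 8, 5])
        n = np.array([12, 15, 4, 20, 25, 5, 10, 30])
        p = x / n

        # Method-of-moments fit of a beta(a, b) prior to the observed proportions.
        m, v = p.mean(), p.var(ddof=1)
        common = m * (1.0 - m) / v - 1.0
        a, b = m * common, (1.0 - m) * common

        # Posterior means: estimates from small samples shrink hardest toward m,
        # so extreme empirical Bayes estimates flag genuinely extreme rates.
        p_eb = (x + a) / (n + a + b)
        print(np.round(p_eb, 3))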

  16. Physical properties of CO-dark molecular gas traced by C+

    NASA Astrophysics Data System (ADS)

    Tang, Ningyu; Li, Di; Heiles, Carl; Wang, Shen; Pan, Zhichen; Wang, Jun-Jie

    2016-09-01

    Context. Neither H I nor CO emission can reveal a significant quantity of the so-called dark gas in the interstellar medium (ISM). It is considered that CO-dark molecular gas (DMG), molecular gas with no or weak CO emission, dominates the dark gas. Determination of the physical properties of DMG is critical for understanding ISM evolution. Previous studies of DMG in the Galactic plane are based on assumptions of excitation temperature and volume density. Independent measurements of temperature and volume density are necessary. Aims: We intend to characterize the physical properties of DMG in the Galactic plane based on C+ data from the Herschel open time key program Galactic Observations of Terahertz C+ (GOT C+) and H I narrow self-absorption (HINSA) data from international H I 21 cm Galactic plane surveys. Methods: We identified DMG clouds with HINSA features by comparing H I, C+, and CO spectra. We derived the H I excitation temperature and H I column density through spectral analysis of HINSA features. The H I volume density was determined by utilizing the on-the-sky dimension of the cold foreground H I cloud under the assumption of axial symmetry. The column and volume density of H2 were derived through excitation analysis of C+ emission. The derived parameters were then compared with a chemical evolutionary model. Results: We identified 36 DMG clouds with HINSA features. Based on uncertainty analysis, an H I optical depth τ(H I) of 1 is a reasonable value for most clouds. With the assumption of τ(H I) = 1, these clouds were characterized by excitation temperatures in the range of 20 K to 92 K with a median value of 55 K, and volume densities in the range of 6.2 × 10^1 cm^-3 to 1.2 × 10^3 cm^-3 with a median value of 2.3 × 10^2 cm^-3. The fraction of DMG column density in a cloud (fDMG) decreases with increasing excitation temperature following an empirical relation fDMG = -2.1 × 10^-3 Tex(τ(H I) = 1) + 1.0. The relation between fDMG and total hydrogen column density NH is given by fDMG = 1.0 - 3.7 × 10^20/NH. We divided the clouds into a high extinction group and a low extinction group, with the dividing threshold being a total hydrogen column density NH of 5.0 × 10^21 cm^-2 (AV = 2.7 mag). The values of fDMG in the low extinction group (AV ≤ 2.7 mag) are consistent with the results of the time-dependent chemical evolutionary model at an age of ~10 Myr. Our empirical relation cannot be explained by the chemical evolutionary model for clouds in the high extinction group (AV > 2.7 mag). Compared to clouds in the low extinction group (AV ≤ 2.7 mag), clouds in the high extinction group (AV > 2.7 mag) have comparable volume densities but excitation temperatures that are 1.5 times lower. Moreover, CO abundances in clouds of the high extinction group (AV > 2.7 mag) are 6.6 × 10^2 times smaller than the canonical value in the Milky Way. Conclusions: The molecular gas seems to be the dominant component in these clouds. The high percentage of DMG in clouds of the high extinction group (AV > 2.7 mag) may support the idea that molecular clouds form from pre-existing molecular gas, i.e., a cold gas with a high H2 content but little or no CO.

  17. Computation of bedrock-aquifer recharge in northern Westchester County, New York, and chemical quality of water from selected bedrock wells

    USGS Publications Warehouse

    Wolcott, Stephen W.; Snow, Robert F.

    1995-01-01

    An empirical technique was used to calculate the recharge to bedrock aquifers in northern Westchester County. This method requires delineation of ground-water divides within the aquifer area and values for (1) the extent of till and exposed bedrock within the aquifer area, and (2) mean annual runoff. This report contains maps and data needed for calculation of recharge in any given area within the 165-square-mile study area. Recharge was computed by this technique for a 93-square-mile part of the study area, and a ground-water-flow model was used to evaluate the reliability of the method. A two-layer, steady-state model of the selected area was calibrated. The area consists predominantly of bedrock overlain by small localized deposits of till and stratified drift. Ground-water-level and streamflow data collected in mid-November 1987 were used for model calibration. The data set approximates average annual conditions. The model was calibrated from (1) estimates of recharge as computed through the empirical technique, and (2) a range of values for hydrologic properties derived from aquifer tests and published literature. Recharge values used for model simulation appear to be reasonable for average steady-state conditions. Water-quality data were collected from 53 selected bedrock wells throughout northern Westchester County to define the background ground-water quality. The constituents and properties for which samples were analyzed included major cations and anions, temperature, pH, specific conductance, and hardness. Results indicate little difference in water quality among the bedrock aquifers within the study area. Ground water is mainly of the calcium-bicarbonate type and is moderately hard. Average concentrations of sodium, sulfate, chloride, nitrate, iron, and manganese were within acceptable limits established by the U.S. Environmental Protection Agency for domestic water supply.

  18. What is geological entropy and why measure it? A parsimonious approach for predicting transport behaviour in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Bianchi, Marco; Pedretti, Daniele

    2017-04-01

    We present an approach to predict non-Fickian transport behaviour in alluvial aquifers from knowledge of physical heterogeneity. This parsimonious approach is based on only two measurable parameters describing the global variability and the structure of the hydraulic conductivity (K) field: the variance of the ln(K) values (σ_Y²), and a newly developed index of geological entropy (HR), based on the concept of Shannon information entropy. Both σ_Y² and HR can be obtained from data collected during conventional hydrogeological investigations and from the analysis of a representative model of the spatial distribution of K classes (e.g. hydrofacies) over the domain of interest. The new index HR integrates multiple characteristics of the K field, including the presence of well-connected features, into a unique metric that quantifies the degree of spatial disorder in the K field structure. Stochastic simulations of tracer tests in synthetic K fields based on realistic distributions of hydrofacies in alluvial aquifers are conducted to identify empirical relations between HR, σ_Y², and the first three central temporal moments of the resulting breakthrough curves (BTCs). Results indicate that the first and second moments tend to increase with spatial disorder (i.e., HR increasing). Conversely, high values of the third moment (i.e. skewness), which indicate significant post-peak tailing in the BTCs and non-Fickian transport behaviour, are observed in more orderly structures (i.e., HR decreasing), or for very high σ_Y² values. We show that simple closed-form empirical expressions can be derived to describe the bivariate dependency between the skewness of the BTC and corresponding pairs of HR and σ_Y². This dependency shows clear correlation for a broad range of structures and K-variability levels. Therefore, it provides an effective and broadly applicable approach to explain and predict non-Fickian transport in real aquifers, such as those at the well-known MADE site and at the Lawrence Livermore National Laboratory.
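    As a crude stand-in for HR, the normalized Shannon entropy of the facies proportions gives the flavor of an entropy-based disorder metric (the actual HR index also encodes spatial structure, which this sketch deliberately omits):

        import numpy as np

        def shannon_entropy(labels, n_classes):
            # Normalized Shannon entropy of category proportions:
            # 0 = a single facies (perfect order), 1 = uniform mix (max disorder).
            counts = np.bincount(labels.ravel(), minlength=n_classes)
            p = counts / counts.sum()
            p = p[p > 0]
            return float(-(p * np.log(p)).sum() / np.log(n_classes))

        rng = np.random.default_rng(4)
        field = rng.integers(0, 3, size=(50, 50))   # hypothetical 3-facies K field
        print(shannon_entropy(field, 3))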

  19. Observability, Visualizability and the Question of Metaphysical Neutrality

    NASA Astrophysics Data System (ADS)

    Wolff, Johanna

    2015-09-01

    Theories in fundamental physics are unlikely to be ontologically neutral, yet they may nonetheless fail to offer decisive empirical support for or against particular metaphysical positions. I illustrate this point by close examination of a particular objection raised by Wolfgang Pauli against Hermann Weyl. The exchange reveals that both parties to the dispute appeal to broader epistemological principles to defend their preferred metaphysical starting points. I suggest that this should make us hesitant to assume that in deriving metaphysical conclusions from physical theories we place our metaphysical theories on a purely empirical foundation. The metaphysics within a particular physical theory may well be the result of a priori assumptions in the background, not particular empirical findings.

  20. Path integral for equities: Dynamic correlation and empirical analysis

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Cao, Yang; Lau, Ada; Tang, Pan

    2012-02-01

    This paper develops a model to describe the unequal-time correlation between the rates of return of different stocks. A non-trivial fourth-order derivative Lagrangian is defined to provide an unequal-time propagator, which can be fitted to the market data. A calibration algorithm is designed to find the empirical parameters for this model, and different de-noising methods are used to capture the signals concealed in the rates of return. The detailed results of this Gaussian model show that different stocks can have strong correlation and that the empirical unequal-time correlator can be described by the model's propagator. This preliminary study provides a novel model for the correlator of different instruments at different times.

  1. On Detecting Influential Data and Selecting Regression Variables

    DTIC Science & Technology

    1989-10-01

    subset of the data. The empirical influence function for β̂, IF_A, is defined to be IF_A = β̂_A - β̂ (2). For a given positive definite matrix M and a nonzero...interest. Cook and Weisberg (1980) treated the measurement of the influence on the fitted values Xβ̂. They used the empirical influence function for...Characterizations of an empirical influence function for detecting influential cases in regression. Technometrics 22, 495-508. [3] Gray, J. B. and Ling, R. F

  2. The amorphous state: first-principles derivation of the Gordon-Taylor equation for direct prediction of the glass transition temperature of mixtures; estimation of the crossover temperature of fragile glass formers; physical basis of the "Rule of 2/3".

    PubMed

    Skrdla, Peter J; Floyd, Philip D; Dell'Orco, Philip C

    2017-08-09

    Predicting the glass transition temperature (T_g) of mixtures has applications that span industries and scientific disciplines. By plotting experimentally determined T_g values as a function of the glass composition, one can usually apply the Gordon-Taylor (G-T) equation to determine the slope, k, which subsequently can be used in T_g predictions. Although the G-T equation has traditionally been viewed as a phenomenological/empirical model, this work proposes a physical basis for it. The proposed equations allow for the calculation of k directly and, hence, they determine/predict the T_g values of mixtures algebraically. Two derivations for k are provided, one for strong glass-formers and the other for fragile mixtures, with the modeled trehalose-water and naproxen-indomethacin systems serving as examples of each. Separately, a new equation is described for the first time that allows for the direct determination of the crossover temperature, T_x, for fragile glass-formers. Lastly, the so-called "Rule of 2/3", which is commonly used to estimate the T_g of a pure amorphous phase based solely on the fusion/melting temperature, T_f, of the corresponding crystalline phase, is shown to be underpinned by the heat capacity ratio of the two phases referenced to a common temperature, as evidenced by the calculations put forth for indomethacin and felodipine.
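    The G-T equation itself is compact enough to state as code (standard form; the k value below is illustrative, not the one derived in the paper):

        def gordon_taylor(w1, tg1, tg2, k):
            # Gordon-Taylor mixture Tg, temperatures in kelvin, w1 = mass
            # fraction of component 1: Tg = (w1*Tg1 + k*w2*Tg2)/(w1 + k*w2).
            w2 = 1.0 - w1
            return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

        # Example: a trehalose-water glass at 80/20 by mass, with rough
        # literature-style inputs (Tg ~ 388 K for trehalose, ~136 K for water).
        print(gordon_taylor(w1=0.8, tg1=388.0, tg2=136.0, k=5.2))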

  3. Systematic Site Characterization at Seismic Stations combined with Empirical Spectral Modeling: critical data for local hazard analysis

    NASA Astrophysics Data System (ADS)

    Michel, Clotaire; Hobiger, Manuel; Edwards, Benjamin; Poggi, Valerio; Burjanek, Jan; Cauzzi, Carlo; Kästli, Philipp; Fäh, Donat

    2016-04-01

    The Swiss Seismological Service operates one of the densest national seismic networks in the world, and it is still rapidly expanding (see http://www.seismo.ethz.ch/monitor/index_EN). Since 2009, every newly instrumented site has been characterized following an established procedure to derive realistic 1D VS velocity profiles. In addition, empirical Fourier spectral modeling is performed on the whole network for each recorded event with sufficient signal-to-noise ratio. Besides the source characteristics of the earthquakes, statistical real-time analyses of the residuals of the spectral modeling provide a seamlessly updated amplification function with respect to Swiss rock conditions at every station. Our site characterization procedure is mainly based on the analysis of surface waves from passive experiments and includes cross-checks of the derived amplification functions with those obtained through spectral modeling. The systematic use of three-component surface-wave analysis, allowing the derivation of both Rayleigh and Love wave dispersion curves, also contributes to the improved quality of the retrieved profiles. The results of site characterisation activities at recently installed strong-motion stations depict the large variety of possible effects of surface geology on ground motion in the Alpine context. Such effects range from de-amplification at hard-rock sites to amplification up to a factor of 15 in lacustrine sediments with respect to the Swiss reference rock velocity model. The derived velocity profiles are shown to reproduce the amplification functions observed from empirical spectral modeling. Although many sites are found to exhibit 1D behavior, our procedure allows the detection and qualification of 2D and 3D effects. All data collected during the site characterization procedures in the last 20 years are gathered in a database, implementing a data model proposed for community use at the European scale through NERA and EPOS (www.epos-eu.org). A web station book derived from it can be accessed through the interface www.stations.seismo.ethz.ch.

  4. A rights-based proposal for managing faith-based values and expectations of migrants at end-of-life illustrated by an empirical study involving South Asians in the UK.

    PubMed

    Samanta, Jo; Samanta, Ash; Madhloom, Omar

    2018-06-08

    International migration is an important issue for many high-income countries and is accompanied by opportunities as well as challenges. South Asians are the largest minority ethnic group in the United Kingdom, and this diaspora is reflective of the growing diversity of British society. An empirical study was performed to ascertain the faith-based values, beliefs, views and attitudes of participants in relation to their perception of issues pertaining to end-of-life care. Empirical observations from this study, as well as the extant knowledge-base from the literature, are used to support and contextualise our reflections against a socio-legal backdrop. We argue for accommodation of faith-based values of migrants at end-of-life within normative structures of receiving countries. We posit the ethically relevant principles of inclusiveness, integration and embedment, for an innovative bioethical framework as a vehicle for accommodating faith-based values and needs of migrants at end-of-life. These tenets work conjunctively, as well as individually, in respect of individual care, enabling processes and procedures, and ultimately for formulating policy and strategy. © 2018 John Wiley & Sons Ltd.

  5. Empirically Exploring Higher Education Cultures of Assessment

    ERIC Educational Resources Information Center

    Fuller, Matthew B.; Skidmore, Susan T.; Bustamante, Rebecca M.; Holzweiss, Peggy C.

    2016-01-01

    Although touted as beneficial to student learning, cultures of assessment have not been examined adequately using validated instruments. Using data collected from a stratified, random sample (N = 370) of U.S. institutional research and assessment directors, the models tested in this study provide empirical support for the value of using the…

  6. Performance-based quality assurance/quality control (QA/QC) acceptance procedures for in-place soil testing phase 3.

    DOT National Transportation Integrated Search

    2015-01-01

    One of the objectives of this study was to evaluate soil testing equipment based on its capability of measuring in-place stiffness or modulus values. : As design criteria transition from empirical to mechanistic-empirical, soil test methods and equip...

  7. Constraining Δ33S signatures of Archean seawater sulfate with carbonate-associated sulfate

    NASA Astrophysics Data System (ADS)

    Peng, Y.; Bao, H.; Bekker, A.; Hofmann, A.

    2017-12-01

    Non-mass-dependent sulfur isotope deviation of S-bearing phases in Archean sedimentary strata, expressed as Δ33S, has a consistent pattern: sulfide (pyrite) predominantly bears positive Δ33S values, while Paleoarchean sulfate (barite) has negative Δ33S values. This pattern was later corroborated by observations of negative Δ33S values in Archean volcanogenic massive sulfide deposits and negative Δ33S values in early diagenetic nodular pyrite with a wide range of δ34S values, which is thought to be due to microbial sulfate reduction. These signatures have provided a set of initial conditions for a mechanistic interpretation at the physical chemistry level. Because large bodies of seawater evaporite deposits, common in younger geological times, are lacking for the Archean, carbonate-associated sulfate (CAS) was utilized as a proxy for ancient seawater sulfate to expand the seawater sulfate record. CAS extracted from Archean carbonates carries positive Δ33S values. However, CAS could be derived from pyrite oxidation following exposure to modern oxidizing conditions and/or during laboratory extraction procedures. It is therefore important, for understanding the overall early Earth atmospheric conditions, to empirically confirm whether Archean seawater sulfate was generally characterized by negative Δ33S signatures. Combined δ18O, Δ17O, δ34S, and Δ33S analyses of sequentially extracted water-leachable sulfate (WLS) and acid-leachable sulfate (ALS = CAS), together with δ34S and Δ33S analyses of pyrite, can help to identify the source of extracted sulfate. We studied drill-core samples of Archean carbonates from the 2.55 Ga Malmani and Campbellrand subgroups, South Africa. Our preliminary results show that 1) neither WLS nor ALS was extracted from samples with extremely low pyrite contents (less than 0.05 wt.%); 2) extractable WLS and ALS are present in samples with relatively high pyrite contents (more than 1 wt.%), and the δ34S and Δ33S values of WLS, ALS, and pyrite are similar; 3) the δ18O and Δ17O values of WLS and ALS are negative and close to 0‰ V-SMOW, respectively. Our study indicates that ALS (= CAS) extractable from Archean carbonates is mostly derived from pyrite oxidation. Therefore, to date, whether Archean seawater sulfate carried positive Δ33S values remains conjectural.

  8. Values Education as Good Practice Pedagogy: Evidence from Australian Empirical Research

    ERIC Educational Resources Information Center

    Lovat, Terence

    2017-01-01

    This article focuses on the Australian Government's Values Education Program and, within its context, the "Values Education Good Practice Schools Project" (VEGPSP) Reports and the "Project to Test and Measure the Impact of Values Education on Student Effects and School Ambience," funded federally from 2003 to 2010. Findings…

  9. Types of Faculty Scholars in Community Colleges

    ERIC Educational Resources Information Center

    Park, Toby J.; Braxton, John M.; Lyken-Segosebe, Dawn

    2015-01-01

    This chapter describes three empirically derived types of faculty scholars in community colleges: Immersed Scholars, Scholars of Dissemination, and Scholars of Pedagogical Knowledge. This chapter discusses these types and offers a recommendation.

  10. Comparison of measured efficiencies of nine turbine designs with efficiencies predicted by two empirical methods

    NASA Technical Reports Server (NTRS)

    English, Robert E; Cavicchi, Richard H

    1951-01-01

    The empirical methods of Ainley and of Kochendorfer and Nettles were used to predict the performances of nine turbine designs. Measured and predicted performances were compared. Appropriate values of the blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley) and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of the blade-loss parameter ranged from 0.63 to 0.87, and the off-design performance was accurately predicted.

  11. Lab and Pore-Scale Study of Low Permeable Soils Diffusional Tortuosity

    NASA Astrophysics Data System (ADS)

    Lekhov, V.; Pozdniakov, S. P.; Denisova, L.

    2016-12-01

    Diffusion plays an important role in contaminant spreading in low-permeability units. The effective diffusion coefficient of a saturated porous medium depends on the diffusion coefficient in water, the porosity, and a structural parameter of the pore space: tortuosity. Theoretical models of the relationship between porosity and diffusional tortuosity are usually derived for conceptual granular models of media filled by solid particles of simple geometry. These models usually do not represent soils with complex microstructure. Empirical models, such as Archie's law, based on experimental electrical conductivity data, are mostly useful for practical applications. Such models contain empirical parameters that should be defined experimentally for a given soil type. In this work, we compared tortuosity values obtained in lab-scale diffusion experiments and pore-scale diffusion simulation for the studied soil microstructure, and examined the relationship between tortuosity and porosity. Samples for the study were taken from borehole cores of a low-permeability silt-clay formation. Using 50 cm3 samples, we performed lab-scale diffusion experiments and estimated the lab-scale tortuosity. Next, using these samples, we studied the microstructure with an X-ray microtomograph. Imaging was performed on undisturbed microsamples of size 1.53 mm at ×300 resolution (1024^3 voxels). After binarization of each obtained 3-D structure, its spatial correlation analysis was performed. This analysis showed that the spatial correlation scale of the indicator variogram is considerably smaller than the microsample length. The Laplace equation with binary coefficients was then solved numerically for each microsample. The total number of simulations on the finite-difference grid of 175^3 cells was 3500. As a result, the effective diffusion coefficient, tortuosity, and porosity values were obtained for all studied microsamples. The results were analyzed in the form of a graph of tortuosity versus porosity. The six experimental tortuosity values agree well with the pore-scale simulations, falling in the general pattern of a nonlinear decrease of tortuosity with decreasing porosity. Fitting this graph with the Archie model, we found an exponent value in the range between 1.8 and 2.4. This work was supported by RFBR via grant 14-05-00409.
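    Given paired porosity-tortuosity values, the Archie exponent follows from a straight-line fit in log-log space under the common convention tau = phi^(1 - m) (definitions of tortuosity vary between studies; the numbers below are invented for illustration):

        import numpy as np

        phi = np.array([0.18, 0.22, 0.27, 0.31, 0.35, 0.40])   # porosity
        tau = np.array([4.1, 3.4, 2.7, 2.3, 2.0, 1.7])          # tortuosity

        # tau = phi**(1 - m)  =>  log(tau) = (1 - m) * log(phi)
        slope, intercept = np.polyfit(np.log(phi), np.log(tau), 1)
        m = 1.0 - slope
        print(m)   # Archie-type exponent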

  12. Bounds on quantum confinement effects in metal nanoparticles

    NASA Astrophysics Data System (ADS)

    Blackman, G. Neal; Genov, Dentcho A.

    2018-03-01

    Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter are extracted from the model. The results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.
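
    For context, the empirical Drude size parameter usually enters through a size-dependent relaxation rate, gamma(R) = gamma_bulk + A*v_F/R, which is the quantity such bounds constrain. A minimal sketch of that standard correction, using rough literature values for gold as assumptions (the paper's quantum-box bounds themselves are not reproduced here):

```python
# Size-corrected Drude dielectric function with the empirical correction
# gamma(R) = gamma_bulk + A * v_F / R. Material constants are rough
# literature values for gold, assumed for illustration only.
import numpy as np

def drude_permittivity(omega, radius, A=1.0,
                       omega_p=1.37e16,     # plasma frequency, rad/s
                       gamma_bulk=1.05e14,  # bulk relaxation rate, 1/s
                       v_fermi=1.40e6,      # Fermi velocity, m/s
                       eps_inf=9.0):
    """Drude permittivity of a sphere of given radius (size-corrected)."""
    gamma = gamma_bulk + A * v_fermi / radius
    return eps_inf - omega_p**2 / (omega * (omega + 1j * gamma))

omega = 2 * np.pi * 3e8 / 532e-9     # angular frequency at 532 nm
for R in (2e-9, 5e-9, 20e-9):        # particle radii in metres
    print(f"R = {R*1e9:4.0f} nm  ->  eps = {drude_permittivity(omega, R):.2f}")
```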

  13. Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters

    NASA Astrophysics Data System (ADS)

    Selyutina, N. S.; Petrov, Yu. V.

    2018-02-01

    The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation-time criterion of yield and the empirical Johnson-Cook and Cowper-Symonds models. In this paper, expressions for the parameters of the empirical models are derived in terms of the characteristics of the incubation-time criterion, and satisfactory agreement between these expressions and experimental results is obtained. Whereas the parameters of the empirical models can depend on the strain rate, the characteristics of the incubation-time criterion of yield are independent of the loading history. This independence, and their connection with the structural and temporal features of the plastic deformation process, give the incubation-time approach an advantage over the empirical models and yield an effective and convenient equation for determining the yield strength over a wider range of strain rates.
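
    A minimal sketch of the two empirical models named above, in their standard textbook forms, with illustrative parameter values rather than the calibrations derived in the paper:

```python
# Johnson-Cook and Cowper-Symonds dynamic yield stress vs strain rate.
# All parameter values are illustrative assumptions (roughly mild-steel-like),
# not the paper's calibrated constants.
import numpy as np

def johnson_cook(strain_rate, A=350e6, B=275e6, n=0.36, C=0.022,
                 eps_p=0.0, ref_rate=1.0):
    """Johnson-Cook flow stress, Pa (thermal softening term omitted)."""
    return (A + B * eps_p**n) * (1.0 + C * np.log(strain_rate / ref_rate))

def cowper_symonds(strain_rate, sigma_static=350e6, D=40.4, q=5.0):
    """Cowper-Symonds dynamic yield stress, Pa."""
    return sigma_static * (1.0 + (strain_rate / D) ** (1.0 / q))

for rate in (1.0, 1e2, 1e4):  # strain rates, 1/s
    print(f"rate {rate:8.0f} 1/s:  JC {johnson_cook(rate)/1e6:6.1f} MPa,  "
          f"CS {cowper_symonds(rate)/1e6:6.1f} MPa")
```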

  14. A five-step procedure for the clinical use of the MPD in neuropsychological assessment of children.

    PubMed

    Wallbrown, F H; Fuller, G B

    1984-01-01

    Described a five-step procedure that can be used to detect organicity on the basis of children's performance on the Minnesota Percepto Diagnostic Test (MPD). The first step consists of examining the T score for rotations to determine whether it is below the cut-off score, which has been established empirically as an indicator of organicity. The second step consists of matching the examinee's configuration of error scores, separation of circle-diamond (SpCD), distortion of circle-diamond (DCD), and distortion of dots (DD), with empirically derived tables. The third step consists of considering the T score for rotations and error configuration jointly. The fourth step consists of using empirically established discriminant equations, and the fifth step involves using data from limits testing and other data sources. The clinical and empirical bases for the five-step procedure also are discussed.

  15. The Derivation of Sink Functions of Wheat Organs using the GREENLAB Model

    PubMed Central

    Kang, Mengzhen; Evers, Jochem B.; Vos, Jan; de Reffye, Philippe

    2008-01-01

    Background and Aims In traditional crop growth models, assimilate production and partitioning are described with empirical equations. In the GREENLAB functional–structural model, however, allocation of carbon to different kinds of organs depends on the number and relative sink strengths of the growing organs present in the crop architecture. The aim of this study is to generate sink functions of wheat (Triticum aestivum) organs by calibrating the GREENLAB model using a dedicated data set consisting of time series on the mass of individual organs (the ‘target data’). Methods An experiment was conducted on spring wheat (Triticum aestivum, ‘Minaret’) in a growth chamber from 2004 to 2005. Four harvests were made of six plants each to determine the size and mass of individual organs, including the root system, leaf blades, sheaths, internodes and ears of the main stem and different tillers. Leaf status (appearance, expansion, maturity and death) of these 24 plants was recorded. With the structures and organ masses of four individual sample plants, the GREENLAB model was calibrated using a non-linear least-squares fitting method, the aim of which was to minimize the difference between the measured organ masses and the model output, and to provide the parameter values of the model (the sink strengths of organs of each type, age and tiller order, and two empirical parameters linked to biomass production). Key Results and Conclusions The masses of all measured organs from one plant from each harvest were fitted simultaneously. With estimated parameters for sink and source functions, the model predicted the mass and size of individual organs at each position of the wheat structure in a mechanistic way. In addition, there was close agreement between experimentally observed and simulated values of leaf area index. PMID:18045794
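
    A minimal sketch of the allocation step at the heart of GREENLAB: biomass produced in a time step is divided among growing organs in proportion to their sink strengths weighted by an age-dependent sink function. The beta-law form and all numbers are illustrative assumptions, not the calibrated wheat parameters from the paper:

```python
# GREENLAB-style proportional allocation: each organ receives biomass in
# proportion to (sink strength) x (beta-law function of organ age).
# All values are illustrative assumptions.
import numpy as np

def beta_sink(age, a=3.0, b=3.0, t_max=20.0):
    """Beta-law sink variation over organ age (0..t_max days), unnormalized."""
    x = np.clip(age / t_max, 0.0, 1.0)
    return x ** (a - 1) * (1 - x) ** (b - 1)

# (organ type, sink strength P, age in days)
organs = [("blade", 1.0, 5.0), ("sheath", 0.6, 5.0),
          ("internode", 1.2, 3.0), ("ear", 8.0, 1.0)]

Q = 0.05  # biomass produced this time step, g (assumed)
demands = {name: P * beta_sink(age) for name, P, age in organs}
D = sum(demands.values())  # total plant demand

for name, d in demands.items():
    print(f"{name:9s} receives {Q * d / D * 1e3:6.2f} mg")
```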

  16. Pollution control costs of a transboundary river basin: Empirical tests of the fairness and stability of cost allocation mechanisms using game theory.

    PubMed

    Shi, Guang-Ming; Wang, Jin-Nan; Zhang, Bing; Zhang, Zhe; Zhang, Yong-Liang

    2016-07-15

    With rapid economic growth, transboundary river basin pollution in China has become a very serious problem. Based on practical experience in other countries, cooperation among regions is an economical way to control the emission of pollutants. This study develops a game-theoretic simulation model to analyze the cost effectiveness of reducing water pollutant emissions in four regions of the Jialu River basin while considering the stability and fairness of four cost allocation schemes. Different schemes (the nucleolus, the weak nucleolus, the Shapley value and the Separable Cost Remaining Benefit (SCRB) principle) are used to allocate the regionally agreed-upon water pollutant abatement costs. The main results show that the fully cooperative coalition yielded the highest incremental gain for regions willing to cooperate, provided that each region agreed to transfer part of the incremental gain obtained from cooperation to cover the losses of other regions. In addition, these allocation schemes produce different outcomes in terms of their fairness to the players and their derived stability, as measured by the Shapley-Shubik Power Index and the Propensity to Disrupt. Although the Shapley value and the SCRB principle exhibit superior fairness and stability compared with the other methods, only the SCRB principle may maintain full cooperation among regions over the long term. The results provide clear empirical evidence that regional gain allocation may affect the sustainability of cooperation, implying that not only cost-effectiveness but also long-term sustainability should be considered when formulating and implementing environmental policies.
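
    A minimal sketch of one of the allocation schemes compared above, the Shapley value, computed by averaging each region's marginal cost over all orders of joining the coalition. The coalition cost function below is hypothetical; the paper's Jialu River data are not reproduced:

```python
# Shapley-value cost allocation for a hypothetical 4-region abatement game.
from itertools import permutations

players = ("R1", "R2", "R3", "R4")

def cost(coalition):
    """Hypothetical abatement cost: standalone costs minus a 5% saving
    per additional cooperating partner (subadditive by construction)."""
    base = {"R1": 30, "R2": 45, "R3": 25, "R4": 40}  # million yuan, assumed
    standalone = sum(base[p] for p in coalition)
    return standalone * (1 - 0.05 * (len(coalition) - 1))

shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    joined = ()
    for p in order:
        shapley[p] += cost(joined + (p,)) - cost(joined)  # marginal cost
        joined += (p,)
shapley = {p: v / len(orders) for p, v in shapley.items()}

print(shapley)
# Efficiency check: the allocations sum to the grand-coalition cost.
print(round(sum(shapley.values()), 6), cost(players))
```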

  17. Astrophysics Meets Atomic Physics: Fe I Line Identifications and Templates for Old Stellar Populations from Warm and Hot Stellar UV Spectra

    NASA Astrophysics Data System (ADS)

    Peterson, Ruth

    2017-08-01

    Imaging surveys from the ultraviolet to the infrared are recording ever more distant astronomical sources. Interpreting them requires high-resolution ultraviolet spectral templates at all metallicities for both old and intermediate-age stars, along with the atomic physics data essential to model their spectra. To this end we are proposing new UV spectra of four warm and hot stars spanning a wide range of metallicity. These will provide observational templates of old and young metal-poor turnoff stars, and the laboratory source for the identification of thousands of lines of neutral iron that appear in stellar spectra but are not identified in laboratory spectra. By matching existing and new stellar spectra to calculations of energy levels, line wavelengths, and gf-values, Peterson & Kurucz (2015) and Peterson, Kurucz, & Ayres (2017) identified 124 Fe I levels with energies up to 8.4 eV. These provided 3000 detectable Fe I lines from 1600 Å to 5.4 µm and yielded empirical gf-values for 640 of them. Here we propose high-resolution UV spectra reaching 1780 Å for the first time at the turnoff, to detect and identify the strongest Fe I lines at 1800-1850 Å; this should add 250 new Fe I levels. These spectra, plus one at lower resolution reaching 1620 Å, will also provide empirical UV templates for turnoff stars at high redshifts as well as low. This is essential for deriving age and metallicity independently for globular clusters and old galaxies out to z ~ 3. It will also improve abundances of trace elements in metal-poor stars, constraining nucleosynthesis at early epochs and aiding the reconstruction of the populations of the Milky Way halo and of nearby globular clusters.

  18. Empirical yield tables for Wisconsin.

    Treesearch

    Jerold T. Hahn; Joan M. Stelman

    1989-01-01

    Describes the tables derived from the 1983 Forest Survey of Wisconsin and presents ways the tables can be used. These tables are broken down according to Wisconsin's five Forest Survey Units and 14 forest types.

  19. An empirically derived basis for calculating the area, rate, and distribution of water-drop impingement on airfoils

    NASA Technical Reports Server (NTRS)

    Bergrun, Norman R

    1952-01-01

    An empirically derived basis for predicting the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The concepts involved represent an initial step toward the development of a calculation technique which is generally applicable to the design of thermal ice-prevention equipment for airplane wing and tail surfaces. It is shown that sufficiently accurate estimates, for the purpose of heated-wing design, can be obtained by a few numerical computations once the velocity distribution over the airfoil has been determined. The calculation technique presented is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer.

  20. Reconstruction of Missing Pixels in Satellite Images Using the Data Interpolating Empirical Orthogonal Function (DINEOF)

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wang, M.

    2016-02-01

    For coastal and inland waters, spatially complete and frequent satellite measurements are important for monitoring and understanding coastal biological and ecological processes and phenomena, such as diurnal variations. High-frequency images of the water diffuse attenuation coefficient at the wavelength of 490 nm (Kd(490)) derived from the Korean Geostationary Ocean Color Imager (GOCI) provide a unique opportunity to study the diurnal variation of water turbidity in coastal regions of the Bohai Sea, Yellow Sea, and East China Sea. However, many pixels are missing from the original GOCI-derived Kd(490) images due to clouds and various other reasons. The Data Interpolating Empirical Orthogonal Function (DINEOF) method reconstructs missing data in geophysical datasets based on empirical orthogonal functions (EOFs). In this study, DINEOF is applied to GOCI-derived Kd(490) data in the Yangtze River mouth and Yellow River mouth regions; the DINEOF-reconstructed Kd(490) data are used to fill in the missing pixels, and the spatial patterns and temporal functions of the first three EOF modes are used to investigate the sub-diurnal variation due to tidal forcing. In addition, DINEOF is applied to data from the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (SNPP) satellite to reconstruct missing pixels in the daily Kd(490) and chlorophyll-a concentration images, and application examples in the Chesapeake Bay and the Gulf of Mexico will be presented.
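
    A minimal sketch of the core DINEOF iteration, assuming its standard formulation: initialize the gaps, reconstruct with a truncated SVD, re-insert the observed values, and iterate. The cross-validation used in full DINEOF to choose the number of modes is omitted, and the demo field is synthetic:

```python
# Core DINEOF iteration: fill NaN gaps in a (space x time) matrix by
# repeated truncated-SVD reconstruction of the missing entries only.
import numpy as np

def dineof(data, n_modes=3, n_iter=50):
    X = data.copy()
    gaps = np.isnan(X)
    X[gaps] = np.nanmean(data)           # first guess for missing pixels
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        X[gaps] = recon[gaps]            # update only the missing entries
    return X

# Synthetic demo: a rank-2 space-time field with 30% of pixels removed.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 60)
truth = (np.outer(rng.normal(size=200), np.sin(t)) +
         np.outer(rng.normal(size=200), np.cos(2 * t)))
obs = truth.copy()
obs[rng.random(truth.shape) < 0.3] = np.nan

filled = dineof(obs, n_modes=2)
rms = np.sqrt(np.mean((filled - truth)[np.isnan(obs)] ** 2))
print(f"RMS error on the reconstructed gaps: {rms:.3f}")
```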

  1. Are There Subtypes of Panic Disorder? An Interpersonal Perspective

    PubMed Central

    Zilcha-Mano, Sigal; McCarthy, Kevin S.; Dinger, Ulrike; Chambless, Dianne L.; Milrod, Barbara L.; Kunik, Lauren; Barber, Jacques P.

    2015-01-01

    Objective Panic disorder (PD) is associated with significant personal, social, and economic costs. However, little is known about specific interpersonal dysfunctions that characterize the PD population. The current study systematically examined these dysfunctions. Method The present analyses included 194 patients with PD out of a sample of 201 who were randomized to cognitive-behavioral therapy, panic-focused psychodynamic psychotherapy, or applied relaxation training. Interpersonal dysfunction was measured using the Inventory of Interpersonal Problems–Circumplex (Horowitz, Alden, Wiggins, & Pincus, 2000). Results Individuals with PD reported greater levels of interpersonal distress than a normative cohort (especially when PD was accompanied by agoraphobia), but lower levels than a cohort of patients with major depression. No single interpersonal profile characterized PD patients, and symptom-based clusters (with versus without agoraphobia) could not be discriminated on core or central interpersonal problems. Rather, cluster analysis based on the pathoplasticity framework revealed two empirically derived interpersonal clusters among PD patients that were not accounted for by symptom severity and were opposite in nature: domineering-intrusive and nonassertive. These empirically derived clusters appear to be of clinical utility in predicting alliance development throughout treatment: while the domineering-intrusive cluster showed no change in the alliance over the course of treatment, the nonassertive cluster showed significant strengthening of the alliance. Conclusions Empirically derived interpersonal clusters in PD provide clinically useful and non-redundant information about individuals with PD. PMID:26030762

  2. Regionalization of subsurface stormflow parameters of hydrologic models: Derivation from regional analysis of streamflow recession curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Sheng; Li, Hongyi; Huang, Maoyi

    2014-07-21

    Subsurface stormflow is an important component of the rainfall–runoff response, especially in steep terrain. Its contribution to total runoff is, however, poorly represented in the current generation of land surface models. The lack of physical basis of these common parameterizations precludes a priori estimation of the stormflow (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global land surface models. This paper is aimed at deriving regionalized parameterizations of the storage–discharge relationship relating to subsurface stormflow from a top–down empirical data analysis of streamflow recession curves extracted from 50 eastern United States catchments. Detailed regression analyses were performed between parameters of the empirical storage–discharge relationships and the controlling climate, soil and topographic characteristics. The regression analyses performed on empirical recession curves at catchment scale indicated that the coefficient of the power-law form storage–discharge relationship is closely related to the catchment hydrologic characteristics, which is consistent with the hydraulic theory derived mainly at the hillslope scale. As for the exponent, besides the role of field-scale soil hydraulic properties as suggested by hydraulic theory, it is found to be more strongly affected by climate (aridity) at the catchment scale. At a fundamental level these results point to the need for more detailed exploration of the co-dependence of soil, vegetation and topography with climate.
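
    A minimal sketch of the recession-curve extraction that underlies such a regionalization, assuming the standard power-law form -dQ/dt = a*Q^b fitted in log-log space (Brutsaert-Nieber analysis); the discharge series here is synthetic:

```python
# Estimate power-law storage-discharge parameters from a recession limb by
# regressing log(-dQ/dt) on log(Q). The streamflow series is synthetic.
import numpy as np

def recession_parameters(Q, dt=1.0):
    """Fit -dQ/dt = a * Q**b over the falling limb of a discharge series."""
    dQdt = np.diff(Q) / dt
    Qmid = 0.5 * (Q[1:] + Q[:-1])
    falling = dQdt < 0                   # keep recession steps only
    b, log_a = np.polyfit(np.log(Qmid[falling]), np.log(-dQdt[falling]), 1)
    return np.exp(log_a), b

# Synthetic recession generated from dQ/dt = -a * Q**b (Euler integration).
a_true, b_true, dt = 0.1, 1.5, 0.01
Q = [10.0]
for _ in range(2000):
    Q.append(Q[-1] - dt * a_true * Q[-1] ** b_true)
Q = np.array(Q)

a_fit, b_fit = recession_parameters(Q, dt=dt)
print(f"a = {a_fit:.3f} (true {a_true}),  b = {b_fit:.2f} (true {b_true})")
```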

  3. Cloud vertical profiles derived from CALIPSO and CloudSat and a comparison with MODIS derived clouds

    NASA Astrophysics Data System (ADS)

    Kato, S.; Sun-Mack, S.; Miller, W. F.; Rose, F. G.; Minnis, P.; Wielicki, B. A.; Winker, D. M.; Stephens, G. L.; Charlock, T. P.; Collins, W. D.; Loeb, N. G.; Stackhouse, P. W.; Xu, K.

    2008-05-01

    CALIPSO and CloudSat from the A-train provide detailed information on the vertical distribution of clouds and aerosols. The vertical distribution of cloud occurrence is derived from one month of CALIPSO and CloudSat data as part of the effort to merge CALIPSO, CloudSat and MODIS with CERES data. This newly derived cloud profile is compared with the distribution of cloud-top height derived from MODIS on Aqua using the cloud algorithms of the CERES project. The cloud base from MODIS is also estimated with an empirical formula based on the cloud-top height and optical thickness, which is used in CERES processing. While MODIS detects mid- and low-level clouds over the Arctic in April fairly well when they are the topmost cloud layer, it underestimates high-level clouds. In addition, because the CERES-MODIS cloud algorithm is not able to detect multi-layer clouds and the empirical formula significantly underestimates the depth of high clouds, the occurrence of mid- and low-level clouds is underestimated. This comparison does not account for differences in sensitivity to thin clouds, but we will impose an optical-thickness threshold on the CALIPSO-derived clouds for a further comparison. The effect of such differences in the cloud profile on flux computations will also be discussed, as will the effect of cloud cover on the top-of-atmosphere flux over the Arctic using CERES SSF and FLASHFLUX products.

  4. Data-Driven H∞ Control for Nonlinear Distributed Parameter Systems.

    PubMed

    Luo, Biao; Huang, Tingwen; Wu, Huai-Ning; Yang, Xiong

    2015-11-01

    The data-driven H∞ control problem of nonlinear distributed parameter systems is considered in this paper. An off-policy learning method is developed to learn the H∞ control policy from real system data rather than from a mathematical model. First, Karhunen-Loève decomposition is used to compute the empirical eigenfunctions, which are then employed to derive a reduced-order model (ROM) of the slow subsystem based on singular perturbation theory. The H∞ control problem is reformulated on the ROM, where in theory it can be transformed into solving the Hamilton-Jacobi-Isaacs (HJI) equation. To learn the solution of the HJI equation from real system data, a data-driven off-policy learning approach is proposed based on the simultaneous policy update algorithm, and its convergence is proved. For implementation purposes, a neural network (NN)-based action-critic structure is developed, in which a critic NN and two action NNs are employed to approximate the value function and the control and disturbance policies, respectively. Subsequently, a least-squares NN weight-tuning rule is derived with the method of weighted residuals. Finally, the developed data-driven off-policy learning approach is applied to a nonlinear diffusion-reaction process, and the obtained results demonstrate its effectiveness.
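
    A minimal sketch of the first step described above: computing the empirical eigenfunctions (Karhunen-Loève modes) of snapshot data by SVD and projecting onto them to obtain reduced-order coordinates. The snapshot field is synthetic, and the control design itself is not reproduced:

```python
# Empirical eigenfunctions (Karhunen-Loeve / POD modes) from snapshots.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)      # spatial grid
t = np.linspace(0, 10, 400)     # time samples

# Synthetic snapshots: two spatial structures plus small noise.
snapshots = (np.outer(np.sin(np.pi * x), np.sin(t)) +
             0.3 * np.outer(np.sin(2 * np.pi * x), np.cos(3 * t)) +
             0.01 * rng.normal(size=(x.size, t.size)))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
n_modes = int(np.searchsorted(np.cumsum(energy), 0.99)) + 1
print(f"{n_modes} empirical eigenfunctions capture 99% of the energy")

Phi = U[:, :n_modes]            # empirical eigenfunctions (columns)
a = Phi.T @ snapshots           # reduced-order (slow subsystem) coordinates
```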

  5. Dual-frequency sound-absorbing metasurface based on visco-thermal effects with frequency dependence

    NASA Astrophysics Data System (ADS)

    Ryoo, H.; Jeon, W.

    2018-03-01

    We theoretically investigate an acoustic metasurface with a high absorption coefficient at two frequencies and design it from subwavelength structures. We propose the use of a two-dimensional periodic array of four Helmholtz resonators of two types to obtain a metasurface with nearly perfect sound absorption at given target frequencies via interactions between waves emanating from different resonators. By considering how fluid viscosity affects acoustic energy dissipation in the narrow necks of the Helmholtz resonators, we obtain effective complex-valued material properties that depend on frequency and on the geometrical parameters of the resonators. We furthermore derive the effective acoustic impedance of the metasurface from the effective material properties and calculate the absorption spectra from the theoretical model, which we compare with the spectra obtained from a finite-element simulation. As a practical application of the theoretical model, we derive empirical formulas for the geometrical parameters of a metasurface that would yield perfect absorption at a given frequency. While previous works on metasurfaces based on Helmholtz resonators aimed to absorb sound at single frequencies, we use optimization to design a metasurface composed of four different Helmholtz resonators that absorbs sound at two distinct frequencies.
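
    A minimal sketch of the lumped-element reasoning behind such absorbers: a resonator with a viscously damped neck presents a surface impedance whose match to the impedance of air sets the height of the absorption peak. A single resonator per unit cell, a Poiseuille-type neck resistance, and the geometry are all illustrative assumptions, not the paper's optimized four-resonator design:

```python
# Normal-incidence absorption of a single Helmholtz resonator per unit cell,
# lumped-element model with a Poiseuille-type viscous neck resistance.
# Geometry and loss model are illustrative assumptions.
import numpy as np

rho0, c0, mu = 1.21, 343.0, 1.81e-5   # air density, sound speed, viscosity

def absorption(freq, r=1e-3, l=5e-3, V=2e-5, A_cell=4e-4):
    omega = 2 * np.pi * freq
    S = np.pi * r**2
    l_eff = l + 1.7 * r                        # end-corrected neck length
    M = rho0 * l_eff / S                       # acoustic mass
    C = V / (rho0 * c0**2)                     # cavity compliance
    R = 8 * mu * l_eff / (np.pi * r**4)        # viscous neck resistance
    Z = (R + 1j * (omega * M - 1.0 / (omega * C))) * A_cell  # surface impedance
    gamma = (Z - rho0 * c0) / (Z + rho0 * c0)  # reflection coefficient
    return 1.0 - np.abs(gamma) ** 2

freqs = np.linspace(100, 600, 501)
alpha = absorption(freqs)
print(f"peak absorption {alpha.max():.2f} at {freqs[np.argmax(alpha)]:.0f} Hz")
```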

  6. Global Patterns of Lightning Properties Derived by OTD and LIS

    NASA Technical Reports Server (NTRS)

    Beirle, Steffen; Koshak, W.; Blakeslee, R.; Wagner, T.

    2014-01-01

    The satellite instruments Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS) provide unique empirical data about the frequency of lightning flashes around the globe (OTD) and the tropics (LIS), which has been used before to compile a well-received global climatology of flash rate densities. Here we present a statistical analysis of various additional lightning properties derived from OTD/LIS, i.e. the number of so-called "events" and "groups" per flash, as well as the mean flash duration, footprint and radiance. These normalized quantities, which can be associated with the flash "strength", show consistent spatial patterns; most strikingly, oceanic flashes show higher values than continental flashes for all properties. Over land, regions with high (Eastern US) and low (India) flash strength can be clearly identified. We discuss possible causes and implications of the observed regional differences. Although a direct quantitative interpretation of the investigated flash properties is difficult, the observed spatial patterns provide valuable information for the interpretation and application of climatological flash rates. Due to the systematic regional variations of physical flash characteristics, viewing conditions, and/or measurement sensitivities, parametrisations of lightning NOx based on total flash rate densities alone are probably affected by regional biases.

  7. ON THE PULSATIONAL-ORBITAL-PERIOD RELATION OF ECLIPSING BINARIES WITH δ-SCT COMPONENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X. B.; Luo, C. Q.; Fu, J. N.

    2013-11-01

    We have deduced a theoretical relation between the pulsation and orbital periods of pulsating stars in close binaries based on their Roche lobe filling. It appears to be of a simple linear form, with the slope a function of the pulsation constant, the mass ratio, and the filling factor for an individual system. Testing the data of 69 known eclipsing binaries containing δ-Sct-type components yields an empirical slope of 0.020 ± 0.006 for the P_pul-P_orb relation. We have further derived the upper limit of the P_pul/P_orb ratio for the δ-Sct stars in eclipsing binaries with a value of 0.09 ± 0.02. This value could serve as a criterion to distinguish whether or not a pulsator in an eclipsing binary pulsates in the p-mode. Applying the deduced P_pul-P_orb relation, we have computed the dominant pulsation constants for 37 δ-Sct stars in eclipsing systems with definite photometric solutions. These ranged between 0.008 and 0.033 days, with a mean value of about 0.014 days, indicating that δ-Sct stars in eclipsing binaries mostly pulsate in the fourth or fifth overtones.

  8. Calculation of water equivalent thickness of materials of arbitrary density, elemental composition and thickness in proton beam irradiation

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Newhauser, Wayne D.

    2009-03-01

    In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, WET calculations have required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison with values from an iterative numerical method that utilized evaluated stopping-power tables. In addition, we tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No. 398, 2000, IAEA, Vienna) and the stopping-power-ratio approximation. These comparisons revealed that most methods were accurate for cases involving thin or low-Z targets; however, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
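
    A minimal sketch of a thick-target WET calculation under the Bragg-Kleeman rule R(E) = alpha*E^p, one of the energy-loss rules compared above. The water constants are common literature values; the material constants are placeholders to be replaced per material:

```python
# Thick-target water equivalent thickness via the Bragg-Kleeman rule
# R(E) = alpha * E**p. Water constants are common literature values
# (alpha_w ~ 2.2e-3 cm/MeV^p, p ~ 1.77); alpha_m is a placeholder.
def wet_thick_target(E0, t_cm, alpha_m, p=1.77, alpha_w=2.2e-3):
    """WET (cm) of a slab of thickness t_cm for protons entering at E0 (MeV)."""
    range_m = alpha_m * E0 ** p                        # range in the material
    if t_cm >= range_m:
        return alpha_w * E0 ** p                       # protons stop inside
    E_out = ((range_m - t_cm) / alpha_m) ** (1.0 / p)  # exit energy
    return alpha_w * (E0 ** p - E_out ** p)            # water-range difference

# Example: a 2 cm slab of a hypothetical material with shorter proton ranges
# than water (alpha_m < alpha_w), i.e. roughly twice the stopping per cm.
print(f"WET = {wet_thick_target(E0=150.0, t_cm=2.0, alpha_m=1.1e-3):.2f} cm")
```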

  9. Chemical abundances of the PRGs UGC 7576 and UGC 9796. I. Testing the formation scenario

    NASA Astrophysics Data System (ADS)

    Spavone, M.; Iodice, E.; Arnaboldi, M.; Longo, G.; Gerhard, O.

    2011-07-01

    Context. The study of the chemical abundances of HII regions in polar ring galaxies, and of their implications for the evolutionary scenario of these systems, has been a step forward both in tracing the formation history of such galaxies and in hinting at the mechanisms at work during the building of a disk by cold accretion. It is now important to establish whether such results are typical of the class of polar disk galaxies as a whole. Aims: The present work aims at checking the cold accretion of gas through a "cosmic filament" as a possible scenario for the formation of the polar structures in UGC 7576 and UGC 9796. If these formed by cold accretion, we expect the HII-region abundances and metallicities to be lower than those of same-luminosity spiral disks, with values of Z ~ 1/10 Z⊙, as predicted by cosmological simulations. Methods: We used deep long-slit spectra of the brightest HII regions associated with the polar structures, obtained with DOLORES@TNG at optical wavelengths, to derive their chemical abundances and star formation rates. We used empirical methods, based on the intensities of easily observable lines, to derive the oxygen abundance 12 + log(O/H) of both galaxies, and compared these values with those typical of different morphological galaxy types of comparable luminosity. Results: The average metallicity values for UGC 7576 and UGC 9796 are Z = 0.4 Z⊙ and Z = 0.1 Z⊙, respectively. Both values are lower than those measured for ordinary spirals of similar luminosity, and UGC 7576 shows no metallicity gradient along the polar structure. These data, together with other observed features of the two PRGs from previous works, are compared with the predictions of simulations of tidal accretion, cold accretion, and merging to disentangle these scenarios.
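
    A minimal sketch of an empirical strong-line estimate of 12 + log(O/H). The N2 calibration of Pettini & Pagel (2004) is shown as one example of this class of method, though not necessarily the calibration used in the paper, and the line fluxes are hypothetical:

```python
# Empirical strong-line oxygen abundance from the N2 index,
# 12 + log(O/H) = 8.90 + 0.57 * log10([NII]6584 / Halpha)
# (Pettini & Pagel 2004). Line fluxes below are hypothetical.
import numpy as np

def oxygen_abundance_n2(f_nii_6584, f_halpha):
    return 8.90 + 0.57 * np.log10(f_nii_6584 / f_halpha)

Z_SUN = 8.69  # solar 12 + log(O/H) (Asplund et al. 2009)

abund = oxygen_abundance_n2(f_nii_6584=0.8, f_halpha=10.0)
print(f"12 + log(O/H) = {abund:.2f}  ->  Z/Z_sun = {10**(abund - Z_SUN):.2f}")
```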

  10. Thermodynamic properties, melting temperature and viscosity of the mantles of Super Earths

    NASA Astrophysics Data System (ADS)

    Stamenkovic, V.; Spohn, T.; Breuer, D.

    2010-12-01

    The recent discovery of extrasolar planets with radii of about twice the Earth's radius and masses of several Earth masses, e.g., Corot-7b (approx. 5 M_Earth and 1.6 R_Earth; Queloz et al. 2009), has increased the interest in the properties of rock at extremely high pressures. While the pressure at the Earth's core-mantle boundary is about 135 GPa, pressures at the base of the mantles of extraterrestrial rocky planets - if these are at all differentiated into mantles and cores - may reach terapascals. Although the properties and the mineralogy of rock at extremely high pressure are little known, there have been speculations about mantle convection, plate tectonics and dynamo action in these "Super-Earths". We assume that the mantles of these planets consist of perovskite, but we discuss the effects of the post-perovskite transition and of MgO. We use the Keane equation of state and the Slater relation (see e.g., Stacey and Davies 2004) to derive an infinite-pressure value for the Grüneisen parameter of 1.035. To derive this value we adopted the infinite-pressure limit for K' (the pressure derivative of the bulk modulus) of 2.41, as derived by Stacey and Davies (2004) by fitting PREM. We further use the Lindemann law to calculate the melting curve, gauging it with the available experimental data for pressures up to 120 GPa. The melting temperature profile reaches 6000 K at 135 GPa and increases to temperatures between 12,000 K and 24,000 K at 1.1 TPa, with a preferred value of 21,000 K. We find the adiabatic temperature increase to reach 2,500 K at 135 GPa and 5,400 K at 1.1 TPa. To calculate the pressure dependence of the viscosity, we assume that the rheology is diffusion controlled and calculate the partial derivative of the activation enthalpy with respect to pressure. We cast this derivative in terms of an activation volume and use the semi-empirical homologous temperature scaling (e.g., Karato 2008). We find that the activation volume decreases from 2.4 cm^3/mol at 135 GPa to 1.6 cm^3/mol at 1.1 TPa. An estimate of the viscosity increase across the mantle to a pressure of 1.1 TPa, using the adiabat calculated above, results in an increase of 19 orders of magnitude. This value raises questions about the differentiation of these planets, heat transfer in their deep interiors, and magnetic field generation. (Ref.: Karato, S. 2008. Deformation of Earth Materials, Cambridge University Press; Stacey, F.D., Davies, P.M. 2004. PEPI 142: 137; Queloz, D. et al., 2009. Astronomy and Astrophysics 506: 303.)
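
    A minimal sketch of a Lindemann-law extrapolation of the melting curve, using d ln(Tm)/d ln(rho) = 2*(gamma - 1/3) with the Grüneisen parameter decaying toward the infinite-pressure limit quoted above. The gamma(rho) form, the anchor point, and the compression ratio are illustrative assumptions:

```python
# Lindemann-law melting curve: d ln(Tm) / d ln(rho) = 2 * (gamma - 1/3),
# with gamma decaying toward its infinite-pressure limit (1.035 above).
# The gamma(x) form and the 1.8x compression are assumptions.
import numpy as np

gamma_0, gamma_inf = 1.5, 1.035   # near-surface value (assumed) and limit
Tm0 = 6000.0                      # anchor: Tm ~ 6000 K at 135 GPa (abstract)

def gamma(x):
    """Grueneisen parameter vs compression x = rho / rho_anchor."""
    return gamma_inf + (gamma_0 - gamma_inf) / x

x = np.linspace(1.0, 1.8, 2000)   # compress the density by a factor 1.8
dlnTm_dx = 2.0 * (gamma(x) - 1.0 / 3.0) / x
Tm = Tm0 * np.exp(np.trapz(dlnTm_dx, x))
print(f"Tm at {x[-1]:.1f}x compression: {Tm:,.0f} K")  # ~21,000 K ballpark
```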

  11. Reacting Chemistry Based Burn Model for Explosive Hydrocodes

    NASA Astrophysics Data System (ADS)

    Schwaab, Matthew; Greendyke, Robert; Steward, Bryan

    2017-06-01

    Currently, in hydrocodes designed to simulate explosive material undergoing shock-induced ignition, the state of the art is to use one of numerous reaction burn-rate models. These burn models are designed to estimate the bulk chemical reaction rate. Unfortunately, they are largely based on empirical data and must be recalibrated for every new material being simulated. We propose that using an equilibrium Arrhenius-rate reacting chemistry model in place of these empirically derived burn models will improve the accuracy of these computational codes. Such models have been used successfully in codes simulating the flow physics around hypersonic vehicles. A reacting chemistry model of this form was developed for the cyclic nitramine RDX by the Naval Research Laboratory (NRL). Initial implementation of this chemistry-based burn model has been carried out in the Air Force Research Laboratory's MPEXS multi-phase continuum hydrocode. In its present form, the burn rate is based on the destruction rate of RDX from NRL's chemistry model. Early results using the chemistry-based burn model show promise in capturing deflagration-to-detonation features in continuum hydrocodes more accurately than previously achieved with empirically derived burn models.
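
    A minimal sketch of the single-step Arrhenius idea described above, in which the reacted mass fraction evolves with the local temperature instead of an empirically calibrated burn rate. The rate constants are placeholders, not the NRL RDX mechanism:

```python
# Single-step Arrhenius burn: d(lambda)/dt = (1 - lambda) * A * exp(-Ea/(R*T)).
# A and Ea are placeholder values, not the NRL RDX chemistry.
import numpy as np

R_GAS = 8.314           # J/(mol K)
A, Ea = 1.0e12, 1.5e5   # pre-exponential (1/s) and activation energy (J/mol)

def burn_rate(lam, T):
    """Reaction progress rate for reacted fraction lam at temperature T (K)."""
    return (1.0 - lam) * A * np.exp(-Ea / (R_GAS * T))

# Integrate at a constant hot-spot temperature behind a shock (assumed).
T, dt, lam = 1200.0, 1e-9, 0.0
for step in range(1, 200001):
    lam += dt * burn_rate(lam, T)
    if lam > 0.99:
        print(f"99% reacted after {step * dt * 1e6:.2f} microseconds")
        break
```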

  12. Very empirical treatment of solvation and entropy: a force field derived from Log Po/w

    NASA Astrophysics Data System (ADS)

    Kellogg, Glen Eugene; Burnett, James C.; Abraham, Donald J.

    2001-04-01

    A non-covalent interaction force field derived from the 1-octanol/water partition coefficient is described. This model, HINT (Hydropathic INTeractions), is shown to include, in very empirical and approximate terms, all components of biomolecular association, including hydrogen bonding, Coulombic interactions, hydrophobic interactions, entropy and solvation/desolvation. Particular emphasis is placed on: (1) demonstrating the relationship between the total empirical HINT score and the free energy of association, ΔG_interaction; (2) showing that the HINT hydrophobic-polar interaction sub-score represents the energy cost of desolvation upon binding for interacting biomolecules; and (3) a new methodology for treating constrained water molecules as discrete independent small ligands. An example calculation is reported for dihydrofolate reductase (DHFR) bound with methotrexate (MTX). In that case the observed very tight binding, ΔG_interaction ≤ -13.6 kcal mol^-1, is largely due to ten hydrogen bonds between the ligand and enzyme, with estimated strengths ranging between -0.4 and -2.3 kcal mol^-1. Four water molecules bridging between DHFR and MTX contribute an additional -1.7 kcal mol^-1 of stability to the complex. The HINT estimate of the cost of desolvation is +13.9 kcal mol^-1.
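
    A schematic sketch of a pairwise hydropathic score of the HINT type, b_ij = a_i*S_i*a_j*S_j*exp(-r_ij), summed over ligand-protein atom pairs. The atom constants, surface areas, distances, and the omission of HINT's additional sign logic and terms are all simplifying assumptions:

```python
# Schematic pairwise hydropathic score (NOT the published HINT code):
# b_ij = a_i * S_i * a_j * S_j * exp(-r_ij). All numbers are placeholders.
import math

# (atom id, LogP-derived hydrophobic atom constant a, accessible surface S, A^2)
ligand  = [("C1", +0.8, 20.0), ("O1", -1.2, 15.0)]
protein = [("C9", +0.5, 18.0), ("N3", -0.9, 12.0)]

def distance(i, j):
    """Placeholder geometry; in practice taken from the 3-D complex."""
    return {("C1", "C9"): 4.0, ("C1", "N3"): 5.5,
            ("O1", "C9"): 5.0, ("O1", "N3"): 3.2}[(i, j)]

score = 0.0
for li, a_i, S_i in ligand:
    for pj, a_j, S_j in protein:
        score += a_i * S_i * a_j * S_j * math.exp(-distance(li, pj))
print(f"total hydropathic score: {score:.1f}")
```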

  13. Theory, the Final Frontier? A Corpus-Based Analysis of the Role of Theory in Psychological Articles.

    PubMed

    Beller, Sieghard; Bender, Andrea

    2017-01-01

    Contemporary psychology regards itself as an empirical science, at least in most of its subfields. Theory building and development are often considered critical to the sciences, but the extent to which psychology can be cast in this way is under debate. According to those advocating a strong role of theory, studies should be designed to test hypotheses derived from theories (theory-driven) and ideally should yield findings that stimulate hypothesis formation and theory building (theory-generating). The alternative position values empirical findings over theories as the lasting legacy of science. To investigate which role theory actually plays in current research practice, we analyse references to theory in the complete set of 2,046 articles accepted for publication in Frontiers in Psychology in 2015. This sample of articles, while not representative in the strictest sense, covers a broad range of sub-disciplines, both basic and applied, and a broad range of article types, including research articles, reviews, hypothesis & theory, and commentaries. For the titles, keyword lists, and abstracts in this sample, we conducted a text search for terms related to empiricism and theory, assessed the frequency and scope of usage for six theory-related terms, and analyzed their distribution over different article types and subsections of the journal. The results indicate substantially lower frequencies of theoretical than empirical terms, with references to a specific (named) theory in less than 10% of the sample and references to any of even the most frequently mentioned theories in less than 0.5% of the sample. In conclusion, we discuss possible limitations of our study and the prospect of theoretical advancement.

  14. Theory, the Final Frontier? A Corpus-Based Analysis of the Role of Theory in Psychological Articles

    PubMed Central

    Beller, Sieghard; Bender, Andrea

    2017-01-01

    Contemporary psychology regards itself as an empirical science, at least in most of its subfields. Theory building and development are often considered critical to the sciences, but the extent to which psychology can be cast in this way is under debate. According to those advocating a strong role of theory, studies should be designed to test hypotheses derived from theories (theory-driven) and ideally should yield findings that stimulate hypothesis formation and theory building (theory-generating). The alternative position values empirical findings over theories as the lasting legacy of science. To investigate which role theory actually plays in current research practice, we analyse references to theory in the complete set of 2,046 articles accepted for publication in Frontiers in Psychology in 2015. This sample of articles, while not representative in the strictest sense, covers a broad range of sub-disciplines, both basic and applied, and a broad range of article types, including research articles, reviews, hypothesis & theory, and commentaries. For the titles, keyword lists, and abstracts in this sample, we conducted a text search for terms related to empiricism and theory, assessed the frequency and scope of usage for six theory-related terms, and analyzed their distribution over different article types and subsections of the journal. The results indicate substantially lower frequencies of theoretical than empirical terms, with references to a specific (named) theory in less than 10% of the sample and references to any of even the most frequently mentioned theories in less than 0.5% of the sample. In conclusion, we discuss possible limitations of our study and the prospect of theoretical advancement. PMID:28642728

  15. Emotional intelligence: a review of the literature with specific focus on empirical and epistemological perspectives.

    PubMed

    Akerjordet, Kristin; Severinsson, Elisabeth

    2007-08-01

    The aim of this literature review was to evaluate and discuss previous research on emotional intelligence with specific focus on empirical and epistemological perspectives. The concept of emotional intelligence is derived from extensive research and theory about thoughts, feelings and abilities that, prior to 1990, were considered to be unrelated phenomena. Today, emotional intelligence attracts growing interest worldwide, contributing to critical reflection as well as to various educational, health and occupational outcomes. Systematic review. The findings revealed that the epistemological tradition of natural science is the most frequently used and that, therefore, few articles related to humanistic sciences or philosophical perspectives were found. There is no agreement as to whether emotional intelligence is an individual ability, non-cognitive skill, capability or competence. One important finding is that, regardless of the theoretical framework used, researchers agree that emotional intelligence embraces emotional awareness in relation to self and others, professional efficiency and emotional management. There have been some interesting theoretical frameworks that relate emotional intelligence to stress and mental health within different contexts. Emotional learning and maturation processes, i.e. personal growth and development in the area of emotional intelligence, are central to professional competence. There is no doubt that the research on emotional intelligence is scarce and still at the developmental stage. Clinical questions pertaining to the nursing profession should be developed with focus on personal qualities of relevance to nursing practice. Different approaches are needed in order to further expand the theoretical, empirical and philosophical foundation of this important and enigmatic concept. Emotional intelligence may have implications for health promotion and quality of working life within nursing. Emotional intelligence seems to lead to more positive attitudes, greater adaptability, improved relationships and increased orientation towards positive values.

  16. Integrated empirical ethics: loss of normativity?

    PubMed

    van der Scheer, Lieke; Widdershoven, Guy

    2004-01-01

    An important discussion in contemporary ethics concerns the relevance of empirical research for ethics. Specifically, two crucial questions pertain, respectively, to the possibility of inferring normative statements from descriptive statements, and to the danger of a loss of normativity if normative statements should be based on empirical research. Here we take part in the debate and defend integrated empirical ethical research: research in which normative guidelines are established on the basis of empirical research and in which the guidelines are empirically evaluated by focusing on observable consequences. We argue that in our concrete example normative statements are not derived from descriptive statements, but are developed within a process of reflection and dialogue that goes on within a specific praxis. Moreover, we show that the distinction in experience between the desirable and the undesirable precludes relativism. The normative guidelines so developed are both critical and normative: they help in choosing the right action and in evaluating that action. Finally, following Aristotle, we plead for a return to the view that morality and ethics are inherently related to one another, and for an acknowledgment of the fact that moral judgments have their origin in experience which is always related to historical and cultural circumstances.

  17. College Education and Attitudes toward Democracy in China: An Empirical Study

    ERIC Educational Resources Information Center

    Wang, Gang; Wu, Liyun; Han, Rongbin

    2015-01-01

    The modernization theory contends that there is a link between education and democracy. Yet few empirical studies have been done to investigate the role of higher education on promoting democratic values in the Chinese context. Using China General Social Survey 2006, this paper generates several findings which are not completely consistent with…

  18. Enhancing Established Counting Routines to Promote Place-Value Understanding: An Empirical Study in Early Elementary Classrooms

    ERIC Educational Resources Information Center

    Fraivillig, Judith L.

    2018-01-01

    Understanding place value is a critical and foundational competency for elementary mathematics. Classroom teachers who endeavor to promote place-value development adopt a variety of established practices to varying degrees of effectiveness. In parallel, researchers have validated models of how young children acquire place-value understanding.…

  19. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    Normal mixture distribution models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distribution model. First, we present the application of the normal mixture model in empirical finance, where we fit it to the real data. Second, we present its application in risk analysis, where we use the fitted model to evaluate the value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture model fits the data well and performs better in estimating VaR and CVaR, capturing the stylized facts of non-normality and leptokurtosis in the returns distribution.
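
    A minimal sketch of the procedure on synthetic returns: fit a two-component normal mixture, read the VaR off the mixture CDF, and compute the CVaR from the component-wise truncated means. The data, the 95% level, and the use of scikit-learn's EM fitter are illustrative choices:

```python
# Two-component normal mixture VaR and CVaR on synthetic returns.
import numpy as np
from scipy import stats, optimize
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
returns = np.concatenate([rng.normal(0.005, 0.02, 900),    # calm regime
                          rng.normal(-0.01, 0.06, 100)])   # turbulent regime

gm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))
w, mu = gm.weights_, gm.means_.ravel()
sd = np.sqrt(gm.covariances_.ravel())

def mixture_cdf(q):
    return sum(wi * stats.norm.cdf(q, mi, si) for wi, mi, si in zip(w, mu, sd))

alpha = 0.05
# VaR: the alpha-quantile of the fitted mixture.
var = optimize.brentq(lambda q: mixture_cdf(q) - alpha, -1.0, 1.0)
# CVaR: E[X | X < VaR], using E[X; X<q] = mu*Phi(z) - sigma*phi(z) per component.
cvar = sum(wi * (mi * stats.norm.cdf(var, mi, si)
                 - si**2 * stats.norm.pdf(var, mi, si))
           for wi, mi, si in zip(w, mu, sd)) / alpha
print(f"95% VaR = {-var:.3f}, 95% CVaR = {-cvar:.3f} (as positive losses)")
```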

  20. Popper's Fact-Standard Dualism Contra "Value Free" Social Science.

    ERIC Educational Resources Information Center

    Eidlin, Fred H.

    1983-01-01

    Noncognitivism, the belief that normative statements (unlike empirical statements) do not convey objective knowledge, is contrasted with Karl Popper's "critical dualism," which maintains that science is imbued with values and value judgments. Noncognitivism impedes the development of a social scientific method which would integrate…
