NASA Astrophysics Data System (ADS)
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-01
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors <~2°, than those from the three empirical models with averaged errors >~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
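As a hedged illustration (not the authors' code), the error metric described above reduces to the angle between two field-direction unit vectors; the sketch below assumes NumPy and invented example vectors.

```python
import numpy as np

def angular_error_deg(b_derived, b_model):
    """Angle in degrees between two 3-D magnetic-field direction vectors."""
    b1 = np.asarray(b_derived, dtype=float)
    b2 = np.asarray(b_model, dtype=float)
    cosang = np.dot(b1, b2) / (np.linalg.norm(b1) * np.linalg.norm(b2))
    # Clip guards against round-off pushing the cosine outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Example: a ~2 degree discrepancy, the level the study reports for derived directions.
tilt = np.radians(2.0)
print(angular_error_deg([0.0, 0.0, 1.0], [0.0, np.sin(tilt), np.cos(tilt)]))
```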
Semi-empirical airframe noise prediction model
NASA Technical Reports Server (NTRS)
Hersh, A. S.; Putnam, T. W.; Lasagna, P. L.; Burcham, F. W., Jr.
1976-01-01
A semi-empirical maximum overall sound pressure level (OASPL) airframe noise model was derived. The noise radiated from aircraft wings and flaps was modeled by using the trailing-edge diffracted quadrupole sound theory derived by Ffowcs Williams and Hall. The noise radiated from the landing gear was modeled by using the acoustic dipole sound theory derived by Curle. The model was successfully correlated with maximum OASPL flyover noise measurements obtained at the NASA Dryden Flight Research Center for three jet aircraft: the Lockheed JetStar, the Convair 990, and the Boeing 747.
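To make the quoted scaling laws concrete: Ffowcs Williams-Hall trailing-edge noise intensity varies roughly as the fifth power of flow speed, and Curle's compact dipole as the sixth, so level changes follow directly from the speed ratio. The sketch below is an illustrative worked example, not the paper's fitted model.

```python
import math

def delta_oaspl_db(u2_mps, u1_mps, exponent):
    """Change in OASPL for a U**exponent intensity law."""
    return 10.0 * exponent * math.log10(u2_mps / u1_mps)

# Halving airspeed: about -15 dB for trailing-edge (U^5) noise,
# about -18 dB for landing-gear dipole (U^6) noise.
print(delta_oaspl_db(100.0, 200.0, 5), delta_oaspl_db(100.0, 200.0, 6))
```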
Stopping Distances: An Excellent Example of Empirical Modelling.
ERIC Educational Resources Information Center
Lawson, D. A.; Tabor, J. H.
2001-01-01
Explores the derivation of empirical models for the stopping distance of a car being driven at a range of speeds. Indicates that the calculation of stopping distances makes an excellent example of empirical modeling because it is a situation that is readily understood and particularly relevant to many first-year undergraduates who are learning or…
NASA Astrophysics Data System (ADS)
Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.
1992-09-01
A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity, which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions obtained in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables, and dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
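A minimal sketch of the regression structure the abstract describes: a peak ground-motion measure regressed on magnitude M and distance r, with 0/1 dummy variables identifying local site classes. The functional form, coefficients, and data below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def design_matrix(M, r, site_ids, n_sites):
    cols = [np.ones_like(M), M, np.log10(r)]
    for s in range(1, n_sites):            # one dummy per non-baseline site class
        cols.append((site_ids == s).astype(float))
    return np.column_stack(cols)

# Toy fit: log10(peak velocity) = c0 + c1*M + c2*log10(r) + site terms
M = np.array([5.5, 6.0, 6.5, 7.0, 6.2, 5.8])
r = np.array([30.0, 50.0, 80.0, 120.0, 60.0, 40.0])   # hypocentral distance, km
site = np.array([0, 1, 0, 1, 1, 0])                   # two site classes
log_pgv = np.array([0.3, 0.5, 0.4, 0.6, 0.7, 0.2])    # fabricated toy values
coef, *_ = np.linalg.lstsq(design_matrix(M, r, site, 2), log_pgv, rcond=None)
print(coef)
```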
Development of a detector model for generation of synthetic radiographs of cargo containers
NASA Astrophysics Data System (ADS)
White, Timothy A.; Bredt, Ofelia P.; Schweppe, John E.; Runkle, Robert C.
2008-05-01
Creation of synthetic cargo-container radiographs that possess attributes of their empirical counterparts requires accurate models of the imaging-system response. Synthetic radiographs serve as surrogate data in studies aimed at determining system effectiveness for detecting target objects when it is impractical to collect a large set of empirical radiographs. In the case where a detailed understanding of the detector system is available, an accurate detector model can be derived from first-principles. In the absence of this detail, it is necessary to derive empirical models of the imaging-system response from radiographs of well-characterized objects. Such a case is the topic of this work, where we demonstrate the development of an empirical model of a gamma-ray radiography system with the intent of creating a detector-response model that translates uncollided photon transport calculations into realistic synthetic radiographs. The detector-response model is calibrated to field measurements of well-characterized objects thus incorporating properties such as system sensitivity, spatial resolution, contrast and noise.
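A hedged sketch of that empirical detector-response idea: an uncollided-photon transport image is scaled by a sensitivity, blurred to the calibrated system resolution, and given additive noise. The function names, the Gaussian point-spread assumption, and all parameter values are illustrative; SciPy is assumed available.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_radiograph(uncollided, gain=1.2, psf_sigma=2.0, noise_sd=0.01, seed=0):
    """Translate an uncollided-flux image into a synthetic radiograph."""
    img = gain * gaussian_filter(uncollided, psf_sigma)   # sensitivity + spatial resolution
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, noise_sd, img.shape)     # detector noise

flux = np.ones((64, 64)); flux[20:40, 20:40] = 0.3        # toy cargo "object"
radiograph = synthetic_radiograph(flux)
```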
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
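A minimal sketch of the simulation idea named above: each of the 66 anatomical regions is a Kuramoto phase oscillator coupled through the structural connectivity matrix. The connectivity, coupling strength, and frequencies below are invented placeholders.

```python
import numpy as np

def kuramoto_step(theta, omega, C, k, dt):
    # dtheta_i/dt = omega_i + k * sum_j C_ij * sin(theta_j - theta_i)
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * (omega + k * coupling)

rng = np.random.default_rng(0)
n = 66                                          # anatomical nodes
C = rng.random((n, n)); C = (C + C.T) / 2.0; np.fill_diagonal(C, 0.0)
theta = rng.uniform(0.0, 2.0 * np.pi, n)        # initial phases
omega = rng.normal(2.0 * np.pi, 0.1, n)         # natural frequencies
for _ in range(1000):
    theta = kuramoto_step(theta, omega, C, k=0.1, dt=0.01)
```

Simulated functional networks are then typically built by deriving signals from the phases (e.g., sin(theta)) and correlating them across nodes.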
Fire risk in San Diego County, California: A weighted Bayesian model approach
Kolden, Crystal A.; Weigel, Timothy J.
2007-01-01
Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.
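For readers unfamiliar with weights of evidence, the core computation is a pair of log-likelihood ratios for each binary evidence layer (e.g., proximity to roads) against the ignition history; the counts below are invented for illustration.

```python
import math

def weights_of_evidence(n_ign_with, n_ign, n_cells_with, n_cells):
    """W+ and W- from counts of ignitions and cells with/without the evidence."""
    p_b_d = n_ign_with / n_ign                                # P(B | ignition)
    p_b_nd = (n_cells_with - n_ign_with) / (n_cells - n_ign)  # P(B | no ignition)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1.0 - p_b_d) / (1.0 - p_b_nd))
    return w_plus, w_minus

# Toy counts: 80 of 100 ignitions near roads; 3000 of 10000 cells near roads.
print(weights_of_evidence(80, 100, 3000, 10000))   # positive W+ -> roads raise risk
```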
NASA Astrophysics Data System (ADS)
Kiafar, Hamed; Babazadeh, Hossein; Marti, Pau; Kisi, Ozgur; Landeras, Gorka; Karimi, Sepideh; Shiri, Jalal
2017-10-01
Evapotranspiration estimation is of crucial importance in arid and hyper-arid regions, which suffer from water shortage, increasing dryness and heat. A modeling study is reported here on cross-station assessment between hyper-arid and humid conditions. The derived equations estimate ET0 values based on temperature-, radiation-, and mass transfer-based configurations. Using data from two meteorological stations in a hyper-arid region of Iran and two meteorological stations in a humid region of Spain, different local and cross-station approaches are applied for developing and validating the derived equations. The comparison of the gene expression programming (GEP)-based derived equations with corresponding empirical and semi-empirical ET0 estimation equations reveals the superiority of the new formulas over the corresponding empirical equations. Therefore, the derived models can be successfully applied in these hyper-arid and humid regions, as well as in similar climatic contexts, especially in data-scarce situations. The results also show that, when relying on proper input configurations, cross-station application might be a promising alternative to locally trained models for stations with data scarcity.
Evapotranspiration Calculations for an Alpine Marsh Meadow Site in Three-river Headwater Region
NASA Astrophysics Data System (ADS)
Zhou, B.; Xiao, H.
2016-12-01
Daily radiation and meteorological data were collected at an alpine marsh meadow site in the Three-river Headwater Region (THR) and used to assess radiation models, selected after comparing the performance of the Zuo model and the model recommended by FAO56 P-M. Four methods, the FAO56 P-M, Priestley-Taylor, Hargreaves, and Makkink methods, were applied to determine daily reference evapotranspiration (ETr) for the growing season, and empirical models for estimating daily actual evapotranspiration (ETa) were built between ETr derived from the four methods and evapotranspiration derived from the Bowen ratio method at this alpine marsh meadow site. Comparison of the performance of the four empirical models by RMSE, MAE and AI showed that all of the models gave acceptable estimates of daily ETa for alpine marsh meadow in this region, and that the FAO56 P-M and Makkink empirical models performed better than the Priestley-Taylor and Hargreaves models.
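As a concrete anchor for one of the four ETr methods above, the sketch below implements the Hargreaves equation (with extraterrestrial radiation Ra already expressed in mm/day of evaporation equivalent) and an assumed linear ETa calibration of the kind fitted against Bowen-ratio evapotranspiration; the calibration coefficients are hypothetical.

```python
import math

def hargreaves_et0(tmax_c, tmin_c, ra_mm_day):
    """Hargreaves reference ET (mm/day): 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)."""
    tmean = (tmax_c + tmin_c) / 2.0
    return 0.0023 * ra_mm_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

et_r = hargreaves_et0(tmax_c=15.0, tmin_c=2.0, ra_mm_day=12.0)  # alpine summer day
a, b = 0.8, 0.1                     # hypothetical regression vs. Bowen-ratio ETa
et_a = a * et_r + b
print(et_r, et_a)
```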
Using LANDSAT to provide potato production estimates to Columbia Basin farmers and processors
NASA Technical Reports Server (NTRS)
1991-01-01
The estimation of potato yields in the Columbia basin is described. The fundamental objective is to provide CROPIX with working models of potato production. A two-pronged approach to yield estimation was used: (1) simulation models and (2) purely empirical models. The simulation modeling approach used satellite observations to determine certain key dates in the development of the crop for each field identified as potatoes. In particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Purely empirical models were developed to relate yield to some spectrally derived measure of crop development. Two empirical approaches are presented: one relates tuber yield to estimates of cumulative intercepted solar radiation, the other relates tuber yield to the integral under the GVI (Global Vegetation Index) curve.
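The second empirical approach lends itself to a short sketch: integrate the seasonal vegetation-index curve and regress tuber yield on that integral. The sample values and regression coefficients here are hypothetical.

```python
import numpy as np

doy = np.array([120, 150, 180, 210, 240])       # day-of-year of observations
gvi = np.array([0.10, 0.40, 0.70, 0.60, 0.20])  # vegetation-index samples
area = np.trapz(gvi, doy)                       # integral under the GVI curve
tuber_yield_t_ha = 0.9 * area + 5.0             # hypothetical linear yield model
print(area, tuber_yield_t_ha)
```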
Code of Federal Regulations, 2010 CFR
2010-04-01
... charges. An OTC derivatives dealer shall provide a description of all statistical models used for pricing... controls over those models, and a statement regarding whether the firm has developed its own internal VAR models. If the OTC derivatives dealer's VAR model incorporates empirical correlations across risk...
NASA Technical Reports Server (NTRS)
Habbal, Shadia Rifai; Esser, Ruth; Guhathakurta, Madhulika; Fisher, Richard
1995-01-01
Using the empirical constraints provided by observations in the inner corona and in interplanetary space, we derive the flow properties of the solar wind using a two-fluid model. Density and scale height temperatures are derived from white light coronagraph observations on SPARTAN 201-1 and at Mauna Loa, from 1.16 to 5.5 R, in the two polar coronal holes on 11-12 Apr. 1993. Interplanetary measurements of the flow speed and proton mass flux are taken from the Ulysses south polar passage. By comparing the results of the model computations that fit the empirical constraints in the two coronal hole regions, we show how the effects of the line of sight influence the empirical inferences and subsequently the corresponding numerical results.
GPP in Loblolly Pine: A Monthly Comparison of Empirical and Process Models
Christopher Gough; John Seiler; Kurt Johnsen; David Arthur Sampson
2002-01-01
Monthly and yearly gross primary productivity (GPP) estimates derived from an empirical and two process based models (3PG and BIOMASS) were compared. Spatial and temporal variation in foliar gas photosynthesis was examined and used to develop GPP prediction models for fertilized nine-year-old loblolly pine (Pinus taeda) stands located in the North...
Survival estimation and the effects of dependency among animals
Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.
1995-01-01
Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
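The dependency mechanism is easy to reproduce in a small Monte Carlo sketch: the second member of each pair copies its mate's fate with some probability (which induces a correlation of that size while preserving the marginal survival rate), and the empirical variance of the survival estimate inflates accordingly. All numbers below except the 168 pairs are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
s, n_pairs, n_rep = 0.8, 168, 5000   # survival rate, pairs, replicate populations

def var_of_survival_estimate(rho):
    rates = []
    for _ in range(n_rep):
        first = rng.random(n_pairs) < s
        copy = rng.random(n_pairs) < rho            # copy mate's fate w.p. rho
        second = np.where(copy, first, rng.random(n_pairs) < s)
        rates.append(np.concatenate([first, second]).mean())
    return np.var(rates)

print(var_of_survival_estimate(0.0))   # independent fates
print(var_of_survival_estimate(0.6))   # dependency inflates the empirical variance
```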
An Empirical Human Controller Model for Preview Tracking Tasks.
van der El, Kasper; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus Rene M; Mulder, Max
2016-11-01
Real-life tracking tasks often show preview information to the human controller about the future track to follow. The effect of preview on manual control behavior is still relatively unknown. This paper proposes a generic operator model for preview tracking, empirically derived from experimental measurements. Conditions included pursuit tracking, i.e., without preview information, and tracking with 1 s of preview. Controlled element dynamics varied between gain, single integrator, and double integrator. The model is derived in the frequency domain, after application of a black-box system identification method based on Fourier coefficients. Parameter estimates are obtained to assess the validity of the model in both the time domain and frequency domain. Measured behavior in all evaluated conditions can be captured with the commonly used quasi-linear operator model for compensatory tracking, extended with two viewpoints of the previewed target. The derived model provides new insights into how human operators use preview information in tracking tasks.
ERIC Educational Resources Information Center
Gillespie, Ann
2014-01-01
Introduction: This research is the first to investigate the experiences of teacher-librarians as evidence-based practice. An empirically derived model is presented in this paper. Method: This qualitative study utilised the expanded critical incident approach, and investigated the real-life experiences of fifteen Australian teacher-librarians,…
NASA Technical Reports Server (NTRS)
Huddleston, D.; Neugebauer, M.; Goldstein, B.
1994-01-01
The shape of the velocity distribution of water-group ions observed by the Giotto ion mass spectrometer on its approach to comet Halley is modeled to derive empirical values for the rates of ionization, energy diffusion, and loss in the mid-cometosheath.
NASA Astrophysics Data System (ADS)
Bora, Sanjay; Scherbaum, Frank; Kuehn, Nicolas; Stafford, Peter; Edwards, Benjamin
2016-04-01
The current practice of deriving empirical ground motion prediction equations (GMPEs) involves using ground motions recorded at multiple sites. However, in applications like site-specific (e.g., critical facility) hazard, ground motions obtained from the GMPEs need to be adjusted/corrected to the particular site/site-condition under investigation. This study presents a complete framework for developing a response spectral GMPE, within which the issue of adjustment of ground motions is addressed in a manner consistent with the linear system framework. The present approach is a two-step process in which the first step consists of deriving two separate empirical models, one for Fourier amplitude spectra (FAS) and the other for a random vibration theory (RVT) optimized duration (Drvt) of ground motion. In the second step the two models are combined within the RVT framework to obtain full response spectral amplitudes. Additionally, the framework also involves a stochastic model based extrapolation of individual Fourier spectra to extend the usable frequency limit of the empirically derived FAS model. The stochastic model parameters were determined by inverting the Fourier spectral data using an approach similar to that described in Edwards and Faeh (2013). Comparison of median predicted response spectra from the present approach with those from other regional GMPEs indicates that the present approach can also be used as a stand-alone model. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012 across Europe, the Middle East and the Mediterranean region.
Reacting Chemistry Based Burn Model for Explosive Hydrocodes
NASA Astrophysics Data System (ADS)
Schwaab, Matthew; Greendyke, Robert; Steward, Bryan
2017-06-01
Currently, in hydrocodes designed to simulate explosive material undergoing shock-induced ignition, the state of the art is to use one of numerous reaction burn rate models. These burn models are designed to estimate the bulk chemical reaction rate. Unfortunately, these models are largely based on empirical data and must be recalibrated for every new material being simulated. We propose that the use of an equilibrium Arrhenius rate reacting chemistry model in place of these empirically derived burn models will improve the accuracy for these computational codes. Such models have been successfully used in codes simulating the flow physics around hypersonic vehicles. A reacting chemistry model of this form was developed for the cyclic nitramine RDX by the Naval Research Laboratory (NRL). Initial implementation of this chemistry based burn model has been conducted on the Air Force Research Laboratory's MPEXS multi-phase continuum hydrocode. In its present form, the burn rate is based on the destruction rate of RDX from NRL's chemistry model. Early results using the chemistry based burn model show promise in capturing deflagration to detonation features more accurately in continuum hydrocodes than previously achieved using empirically derived burn models.
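For context, the chemistry-based alternative replaces a calibrated burn rate with Arrhenius-form rate constants evaluated at the local shocked state. The sketch below shows the generic form only; the pre-exponential factor and activation energy are placeholders, not values from NRL's RDX mechanism.

```python
import math

R_GAS = 8.314  # J/(mol K)

def arrhenius_rate(temp_k, a=1.0e13, ea_j_mol=2.0e5):
    """Generic Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return a * math.exp(-ea_j_mol / (R_GAS * temp_k))

print(arrhenius_rate(1500.0))   # rate constant at a shock-heated temperature
```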
Predictive and mechanistic multivariate linear regression models for reaction development
Santiago, Celine B.; Guo, Jing-Yao
2018-01-01
Multivariate Linear Regression (MLR) models utilizing computationally-derived and empirically-derived physical organic molecular descriptors are described in this review. Several reports demonstrating the effectiveness of this methodological approach towards reaction optimization and mechanistic interrogation are discussed. A detailed protocol to access quantitative and predictive MLR models is provided as a guide for model development and parameter analysis. PMID:29719711
Schwindt, Adam R; Winkelman, Dana L
2016-09-01
Urban freshwater streams in arid climates are wastewater effluent dominated ecosystems particularly impacted by bioactive chemicals including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L(-1) and produced stochastic population growth rates (λ S ) below 1 at the lowest concentration, indicating potential for population decline. Declines in λ S compared to controls were evident in treatments that were lethal to adult males despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λ S was most sensitive to the survival of juveniles and female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
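A minimal sketch of a stochastic stage-structured projection of this kind: vital rates are drawn each time step, a stage matrix projects the population, and the stochastic growth rate is the exponential of the mean log growth. The stage structure and all rates below are hypothetical, not the mesocosm estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
pop = np.array([100.0, 50.0, 20.0])     # eggs, juveniles, adults
log_growth = []
for _ in range(200):                    # years
    f = rng.normal(20.0, 4.0)           # eggs per adult
    sj = rng.normal(0.10, 0.02)         # egg -> juvenile survival
    sa = rng.normal(0.50, 0.05)         # juvenile -> adult and adult survival
    A = np.array([[0.0, 0.0, f],
                  [sj, 0.0, 0.0],
                  [0.0, sa, sa]])
    new = A @ pop
    log_growth.append(np.log(new.sum() / pop.sum()))
    pop = new
lambda_s = np.exp(np.mean(log_growth))  # < 1 indicates potential decline
print(lambda_s)
```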
O'Brien, D J; León-Vintró, L; McClean, B
2016-01-01
The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector-correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a geant4 Monte Carlo model of the detectors and the Linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by a factor of 3.7% for a 1.1 cm diameter field and higher for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
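The correction-factor bookkeeping behind these comparisons fits in a few lines: a detector correction factor converts a measured small-field output ratio into a dose-to-water output factor, combining numerical (Monte Carlo) doses with empirical readings in the semiempirical case. The numbers below are placeholders.

```python
def correction_factor(d_clin, d_ref, m_clin, m_ref):
    """k = (D_clin / D_ref) / (M_clin / M_ref): dose ratio over reading ratio."""
    return (d_clin / d_ref) / (m_clin / m_ref)

# An over-responding detector reads high in the small field, so k < 1.
print(correction_factor(d_clin=0.62, d_ref=1.00, m_clin=0.645, m_ref=1.00))
```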
Path integral for equities: Dynamic correlation and empirical analysis
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Cao, Yang; Lau, Ada; Tang, Pan
2012-02-01
This paper develops a model to describe the unequal time correlation between rate of returns of different stocks. A non-trivial fourth order derivative Lagrangian is defined to provide an unequal time propagator, which can be fitted to the market data. A calibration algorithm is designed to find the empirical parameters for this model and different de-noising methods are used to capture the signals concealed in the rate of return. The detailed results of this Gaussian model show that the different stocks can have strong correlation and the empirical unequal time correlator can be described by the model's propagator. This preliminary study provides a novel model for the correlator of different instruments at different times.
Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D
2016-07-15
The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, in fact superseding most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.
Empirical Corrections to Nutation Amplitudes and Precession Computed from a Global VLBI Solution
NASA Astrophysics Data System (ADS)
Schuh, H.; Ferrandiz, J. M.; Belda-Palazón, S.; Heinkelmann, R.; Karbon, M.; Nilsson, T.
2017-12-01
The IAU2000A nutation and IAU2006 precession models were adopted to provide accurate estimations and predictions of the Celestial Intermediate Pole (CIP). However, they are not fully accurate, and VLBI (Very Long Baseline Interferometry) observations show that the CIP deviates from the position resulting from the application of the IAU2006/2000A model. Currently, those deviations or offsets of the CIP (Celestial Pole Offsets - CPO) can only be obtained by the VLBI technique. The accuracy, of the order of 0.1 milliseconds of arc (mas), allows us to compare the observed nutation with theoretical predictions for a rigid Earth and to constrain geophysical parameters describing the Earth's interior. In this study, we empirically evaluate the consistency, systematics and deviations of the IAU 2006/2000A precession-nutation model using several CPO time series derived from the global analysis of VLBI sessions. The final objective is the reassessment of the precession offset and rate, and the amplitudes of the principal terms of nutation, trying to empirically improve the conventional values derived from the precession/nutation theories. The statistical analysis of the residuals after re-fitting the main nutation terms demonstrates that our empirical corrections attain an error reduction of almost 15 microarcseconds.
A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.
Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep
2017-01-01
The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
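Class 2 above is simple to demonstrate: mixing conditionally independent Poisson vectors over shared rate regimes induces positive dependence between the counts. The regimes and weights in this sketch are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_poisson_mixture(n, weights, rate_vectors):
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.poisson(rate_vectors[c]) for c in comps])

rates = np.array([[1.0, 1.0, 1.0],     # "quiet" regime
                  [8.0, 6.0, 7.0]])    # "busy" regime
x = sample_poisson_mixture(10_000, [0.7, 0.3], rates)
print(np.corrcoef(x, rowvar=False))    # positive off-diagonal correlations
```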
Comparison of modelled and empirical atmospheric propagation data
NASA Technical Reports Server (NTRS)
Schott, J. R.; Biegel, J. D.
1983-01-01
The radiometric integrity of TM thermal infrared channel data was evaluated and monitored to develop improved radiometric preprocessing calibration techniques for removal of atmospheric effects. Modelled atmospheric transmittance and path radiance were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates, as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code, which was modified to output atmospheric path radiance in addition to transmittance. The aircraft data were calibrated and used to generate analogous measurements. These data indicate that there is a tendency for the LOWTRAN model to underestimate atmospheric path radiance and transmittance as compared to empirical data. A plot of transmittance versus altitude for both LOWTRAN and empirical data is presented.
NASA Astrophysics Data System (ADS)
Michel, Clotaire; Hobiger, Manuel; Edwards, Benjamin; Poggi, Valerio; Burjanek, Jan; Cauzzi, Carlo; Kästli, Philipp; Fäh, Donat
2016-04-01
The Swiss Seismological Service operates one of the densest national seismic networks in the world, still rapidly expanding (see http://www.seismo.ethz.ch/monitor/index_EN). Since 2009, every newly instrumented site is characterized following an established procedure to derive realistic 1D VS velocity profiles. In addition, empirical Fourier spectral modeling is performed on the whole network for each recorded event with sufficient signal-to-noise ratio. Besides the source characteristics of the earthquakes, statistical real-time analyses of the residuals of the spectral modeling provide a seamlessly updated amplification function with respect to Swiss rock conditions at every station. Our site characterization procedure is mainly based on the analysis of surface waves from passive experiments and includes cross-checks of the derived amplification functions with those obtained through spectral modeling. The systematic use of three-component surface-wave analysis, allowing the derivation of both Rayleigh and Love wave dispersion curves, also contributes to the improved quality of the retrieved profiles. The results of site characterization activities at recently installed strong-motion stations depict the large variety of possible effects of surface geology on ground motion in the Alpine context. Such effects range from de-amplification at hard-rock sites to amplification up to a factor of 15 in lacustrine sediments with respect to the Swiss reference rock velocity model. The derived velocity profiles are shown to reproduce observed amplification functions from empirical spectral modeling. Although many sites are found to exhibit 1D behavior, our procedure allows the detection and qualification of 2D and 3D effects. All data collected during the site characterization procedures in the last 20 years are gathered in a database, implementing a data model proposed for community use at the European scale through NERA and EPOS (www.epos-eu.org). A web station book derived from it can be accessed through the interface www.stations.seismo.ethz.ch.
Extended Empirical Roadside Shadowing model from ACTS mobile measurements
NASA Technical Reports Server (NTRS)
Goldhirsh, Julius; Vogel, Wolfhard
1995-01-01
Employing multiple data bases derived from land-mobile satellite measurements using the Advanced Communications Technology Satellite (ACTS) at 20 GHz, MARECS B-2 at 1.5 GHz, and helicopter measurements at 870 MHz and 1.5 GHz, the Empirical Road Side Shadowing Model (ERS) has been extended. The new model (Extended Empirical Roadside Shadowing Model, EERS) may now be employed at frequencies from UHF to 20 GHz, at elevation angles from 7 to 60 deg and at percentages from 1 to 80 percent (0 dB fade). The EERS distributions are validated against measured ones and fade deviations associated with the model are assessed. A model is also presented for estimating the effects of foliage (or non-foliage) on 20 GHz distributions, given distributions from deciduous trees devoid of leaves (or in full foliage).
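For orientation, the original L-band ERS fit and the frequency-scaling step it feeds have simple closed forms; the coefficients below are the commonly quoted ITU-R P.681 values and should be checked against the recommendation before any use.

```python
import math

def ers_fade_db(p_percent, theta_deg):
    """Roadside-shadowing fade (dB) exceeded p% of the distance at elevation theta."""
    m = 3.44 + 0.0975 * theta_deg - 0.002 * theta_deg ** 2
    n = -0.443 * theta_deg + 34.76
    return -m * math.log(p_percent) + n

def scale_frequency(a_db, f1_ghz, f2_ghz):
    """Empirical frequency scaling of a fade depth from f1 to f2."""
    return a_db * math.exp(1.5 * (1.0 / math.sqrt(f1_ghz) - 1.0 / math.sqrt(f2_ghz)))

a_l_band = ers_fade_db(p_percent=5.0, theta_deg=45.0)   # ~9 dB at 1.5 GHz
print(scale_frequency(a_l_band, 1.5, 20.0))             # scaled toward 20 GHz
```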
Evaluating the intersection of a regional wildlife connectivity network with highways
Samuel A. Cushman; Jesse S. Lewis; Erin L. Landguth
2013-01-01
Reliable predictions of regional-scale population connectivity are needed to prioritize conservation actions. However, there have been few examples of regional connectivity models that are empirically derived and validated. The central goals of this paper were to (1) evaluate the effectiveness of factorial least cost path corridor mapping on an empirical...
Irrigation water demand: A meta-analysis of price elasticities
NASA Astrophysics Data System (ADS)
Scheierling, Susanne M.; Loomis, John B.; Young, Robert A.
2006-01-01
Metaregression models are estimated to investigate sources of variation in empirical estimates of the price elasticity of irrigation water demand. Elasticity estimates are drawn from 24 studies reported in the United States since 1963, including mathematical programming, field experiments, and econometric studies. The mean price elasticity is 0.48. Long-run elasticities, those that are most useful for policy purposes, are likely larger than the mean estimate. Empirical results suggest that estimates may be more elastic if they are derived from mathematical programming or econometric studies and calculated at a higher irrigation water price. Less elastic estimates are found to be derived from models based on field experiments and in the presence of high-valued crops.
NASA Astrophysics Data System (ADS)
Sadeghi, Morteza; Ghanbarian, Behzad; Horton, Robert
2018-02-01
Thermal conductivity is an essential component in multiphysics models and coupled simulation of heat transfer, fluid flow, and solute transport in porous media. In the literature, various empirical, semiempirical, and physical models were developed for thermal conductivity and its estimation in partially saturated soils. Recently, Ghanbarian and Daigle (GD) proposed a theoretical model, using the percolation-based effective-medium approximation, whose parameters are physically meaningful. The original GD model implicitly formulates thermal conductivity λ as a function of volumetric water content θ. For the sake of computational efficiency in numerical calculations, in this study, we derive an explicit λ(θ) form of the GD model. We also demonstrate that some well-known empirical models, e.g., Chung-Horton, widely applied in the HYDRUS model, as well as mixing models are special cases of the GD model under specific circumstances. Comparison with experiments indicates that the GD model can accurately estimate soil thermal conductivity.
Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters
NASA Astrophysics Data System (ADS)
Selyutina, N. S.; Petrov, Yu. V.
2018-02-01
The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical models of Johnson-Cook and Cowper-Symonds. In this paper, expressions for the parameters of the empirical models are derived through the characteristics of the incubation time criterion; satisfactory agreement between these data and experimental results is obtained. The parameters of the empirical models are found to depend on the strain rate. The independence of the characteristics of the incubation time criterion of yield from the loading history, and their connection with the structural and temporal features of the plastic deformation process, give the approach based on the concept of incubation time an advantage over the empirical models, as well as an effective and convenient equation for determining the yield strength over a wider range of strain rates.
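The two empirical models named above have standard closed forms, reproduced here as a reference sketch (rate-dependent terms only; the Johnson-Cook thermal-softening factor is omitted, and the constants are commonly quoted mild-steel-like values, not the paper's calibration).

```python
import math

def johnson_cook(eps_p, rate, A, B, n, C, rate0=1.0):
    """sigma_y = (A + B*eps_p^n) * (1 + C*ln(rate/rate0)); thermal term omitted."""
    return (A + B * eps_p ** n) * (1.0 + C * math.log(rate / rate0))

def cowper_symonds(rate, sigma0, D, p):
    """sigma_y = sigma0 * (1 + (rate/D)^(1/p))."""
    return sigma0 * (1.0 + (rate / D) ** (1.0 / p))

print(johnson_cook(eps_p=0.05, rate=100.0, A=250e6, B=400e6, n=0.3, C=0.02))
print(cowper_symonds(rate=100.0, sigma0=250e6, D=40.4, p=5.0))
```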
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorin Zaharia; C.Z. Cheng
In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇·(J × B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J × B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-square fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models.
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Liang, Cui
2007-01-01
The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.
Testing a new Free Core Nutation empirical model
NASA Astrophysics Data System (ADS)
Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald
2016-03-01
The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
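A compact sketch of the sliding-window scheme described above: within each window the celestial pole offset series is fit by least squares to a sinusoid of fixed period, giving a time-variable amplitude (and phase). The ~430-day period, the 400-day window, and the synthetic data are assumptions for illustration.

```python
import numpy as np

period, window, step = 430.0, 400, 1          # days; period fixed a priori
t = np.arange(3000, dtype=float)
rng = np.random.default_rng(5)
cpo = 0.2 * np.sin(2 * np.pi * t / period + 0.7) + rng.normal(0.0, 0.05, t.size)

w = 2.0 * np.pi / period
amplitudes = []
for start in range(0, t.size - window, step):
    tt, yy = t[start:start + window], cpo[start:start + window]
    X = np.column_stack([np.sin(w * tt), np.cos(w * tt), np.ones_like(tt)])
    (a, b, _), *_ = np.linalg.lstsq(X, yy, rcond=None)
    amplitudes.append(np.hypot(a, b))          # FCN amplitude in this window
```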
Empirical STORM-E Model. [I. Theoretical and Observational Basis
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III
2013-01-01
Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 μm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 μm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 μm VER are fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 μm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
Precision Orbit Derived Atmospheric Density: Development and Performance
NASA Astrophysics Data System (ADS)
McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.
2012-09-01
Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
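The two agreement metrics used throughout the comparisons above are straightforward; a minimal sketch:

```python
import numpy as np

def density_agreement(rho_poe, rho_acc):
    """Cross correlation and RMS difference between two density time series."""
    cc = np.corrcoef(rho_poe, rho_acc)[0, 1]
    rms = float(np.sqrt(np.mean((rho_poe - rho_acc) ** 2)))
    return cc, rms
```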
Modeling the atmospheric chemistry of TICs
NASA Astrophysics Data System (ADS)
Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John
2009-05-01
An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
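The coupling idea reduces to first-order loss with a meteorology-dependent rate: C(t) = C0 * exp(-keff * t). The functional form and coefficients below are invented placeholders, not the fitted keff functions.

```python
import math

def k_eff(solar_flux_w_m2, temp_k, cloud_frac, a=1e-7, b=0.01, c=0.5):
    """Illustrative effective degradation rate (1/s) vs. meteorology."""
    return a * solar_flux_w_m2 * (1.0 - c * cloud_frac) * math.exp(b * (temp_k - 298.15))

c0 = 1.0                                               # relative TIC concentration
k = k_eff(solar_flux_w_m2=500.0, temp_k=290.0, cloud_frac=0.3)
print(c0 * math.exp(-k * 3600.0))                      # remaining fraction after 1 h
```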
Design guidelines for an umbilical cord blood stem cell therapy quality assessment model
NASA Astrophysics Data System (ADS)
Januszewski, Witold S.; Michałek, Krzysztof; Yagensky, Oleksandr; Wardzińska, Marta
The paper presents the pivotal guidelines for producing an empirical umbilical cord blood stem cell therapy quality assessment model. The methodology adopted was a single-equation linear model with domain knowledge derived from the MEDAFAR classification. The resulting model is ready for therapeutic application.
A root-mean-square pressure fluctuations model for internal flow applications
NASA Technical Reports Server (NTRS)
Chen, Y. S.
1985-01-01
A transport equation for the root-mean-square pressure fluctuations of turbulent flow is derived from the time-dependent momentum equation for incompressible flow. Approximate modeling of this transport equation is included to relate terms with higher order correlations to the mean quantities of turbulent flow. Three empirical constants are introduced in the model. Two of the empirical constants are estimated from homogeneous turbulence data and wall pressure fluctuations measurements. The third constant is determined by comparing the results of large eddy simulations for a plane channel flow and an annulus flow.
Farrance, Ian; Frenkel, Robert
2014-01-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835
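A minimal Python stand-in for the spreadsheet procedure described above: random draws for each input (and for the empirically derived 'constant') are pushed through the functional relationship, and the spread of the output gives the combined standard uncertainty. The formula and all numeric values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
x = rng.normal(5.0, 0.10, n)    # measured input with u(x) from IQC
y = rng.normal(2.0, 0.05, n)    # second measured input
k = rng.normal(1.32, 0.02, n)   # empirical 'constant' with its own uncertainty

measurand = k * x / y           # functional relationship
print(measurand.mean(), measurand.std(ddof=1))   # value and standard uncertainty
```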
A Proposed Change to ITU-R Recommendation 681
NASA Technical Reports Server (NTRS)
Davarian, F.
1996-01-01
Recommendation 681 of the International Telecommunications Union (ITU) provides five models for the prediction of propagation effects on land mobile satellite links: empirical roadside shadowing (ERS), attenuation frequency scaling, fade duration distribution, non-fade duration distribution, and fading due to multipath. Because the above prediction models have been empirically derived using a limited amount of data, these schemes work only for restricted ranges of link parameters. With the first two models, for example, the frequency and elevation angle parameters are restricted to 0.8 to 2.7 GHz and 20 to 60 degrees, respectively. Recently measured data have enabled us to enhance the range of the first two schemes. Moreover, for convenience, they have been combined into a single scheme named the extended empirical roadside shadowing (EERS) model.
New empirically-derived solar radiation pressure model for GPS satellites
NASA Technical Reports Server (NTRS)
Bar-Sever, Y.; Kuang, D.
2003-01-01
Solar radiation pressure force is the second largest perturbation acting on GPS satellites, after the gravitational attraction from the Earth, Sun, and Moon. It is the largest error source in the modeling of GPS orbital dynamics.
Bounds on quantum confinement effects in metal nanoparticles
NASA Astrophysics Data System (ADS)
Blackman, G. Neal; Genov, Dentcho A.
2018-03-01
Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter are extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.
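For context, the empirical modification the abstract refers to is commonly written as a size-dependent Drude damping rate, gamma(R) = gamma_bulk + A·vF/R. The sketch below implements that standard form with rough gold-like placeholder values; it is not the paper's quantum box model.

```python
import numpy as np

def drude_permittivity(omega, radius, eps_inf=9.0, omega_p=1.37e16,
                       gamma_bulk=1.07e14, v_fermi=1.39e6, A=1.0):
    """Drude permittivity with the common empirical size correction
    gamma(R) = gamma_bulk + A * v_F / R.

    omega, omega_p, gamma_bulk in rad/s; radius in m; v_fermi in m/s.
    The default values are rough gold-like placeholders.
    """
    gamma = gamma_bulk + A * v_fermi / radius
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

# Example: a 5 nm radius particle at ~630 nm light
print(drude_permittivity(omega=3.0e15, radius=5e-9))
```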
Essays on pricing electricity and electricity derivatives in deregulated markets
NASA Astrophysics Data System (ADS)
Popova, Julia
2008-10-01
This dissertation is composed of four essays on the behavior of wholesale electricity prices and their derivatives. The first essay provides an empirical model that takes into account the spatial features of a transmission network on the electricity market. The spatial structure of the transmission grid plays a key role in determining electricity prices, but it has not been incorporated into previous empirical models. The econometric model in this essay incorporates a simple representation of the transmission system into a spatial panel data model of electricity prices, and also accounts for the effect of dynamic transmission system constraints on electricity market integration. Empirical results using PJM data confirm the existence of spatial patterns in electricity prices and show that spatial correlation diminishes as transmission lines become more congested. The second essay develops and empirically tests a model of the influence of natural gas storage inventories on the electricity forward premium. I link a model of the effect of gas storage constraints on the higher moments of the distribution of electricity prices to a model of the effect of those moments on the forward premium. Empirical results using PJM data support the model's predictions that gas storage inventories sharply reduce the electricity forward premium when demand for electricity is high and space-heating demand for gas is low. The third essay examines the efficiency of PJM electricity markets. A market is efficient if prices reflect all relevant information, so that prices follow a random walk. The hypothesis of random walk is examined using empirical tests, including the Portmanteau, Augmented Dickey-Fuller, KPSS, and multiple variance ratio tests. The results are mixed though evidence of some level of market efficiency is found. The last essay investigates the possibility that previous researchers have drawn spurious conclusions based on classical unit root tests incorrectly applied to wholesale electricity prices. It is well known that electricity prices exhibit both cyclicity and high volatility which varies through time. Results indicate that heterogeneity in unconditional variance---which is not detected by classical unit root tests---may contribute to the appearance of non-stationarity.
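As a minimal illustration of the random-walk tests named in the third essay, the sketch below computes a simple variance ratio; under the random-walk hypothesis VR(q) is near 1. This is a stripped-down version, without the finite-sample and heteroskedasticity corrections a full multiple variance ratio test would use, and the price series is synthetic.

```python
import numpy as np

def variance_ratio(prices, q):
    """Simple variance ratio for log prices: the variance of q-period
    returns divided by q times the variance of 1-period returns.
    Under a random walk, VR(q) is approximately 1."""
    x = np.log(np.asarray(prices, dtype=float))
    r1 = np.diff(x)           # 1-period log returns
    rq = x[q:] - x[:-q]       # q-period log returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

# Example with a synthetic random walk: VR should be close to 1
rng = np.random.default_rng(0)
p = np.exp(np.cumsum(rng.normal(0.0, 0.01, 5000)))
print(variance_ratio(p, q=4))
```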
Empirical mass-loss rates for 25 O and early B stars, derived from Copernicus observations
NASA Technical Reports Server (NTRS)
Gathier, R.; Lamers, H. J. G. L. M.; Snow, T. P.
1981-01-01
Ultraviolet line profiles are fitted with theoretical line profiles in the cases of 25 stars covering a spectral type range from O4 to B1, including all luminosity classes. Ion column densities are compared for the determination of wind ionization, and it is found that the O VI/N V ratio is dependent on the mean density of the wind and not on the effective temperature, while the Si IV/N V ratio is temperature-dependent. The column densities are used to derive a mass-loss rate parameter that is empirically calibrated against the mass-loss rate by means of standard stars with well-determined rates from IR or radio data. The empirical mass-loss rates obtained are compared with those derived by others and found to vary by as much as a factor of 10, which is shown to be due to uncertainties or errors in the ionization fractions of models used for wind ionization balance prediction.
Improving Marine Ecosystem Models with Biochemical Tracers
NASA Astrophysics Data System (ADS)
Pethybridge, Heidi R.; Choy, C. Anela; Polovina, Jeffrey J.; Fulton, Elizabeth A.
2018-01-01
Empirical data on food web dynamics and predator-prey interactions underpin ecosystem models, which are increasingly used to support strategic management of marine resources. These data have traditionally derived from stomach content analysis, but new and complementary forms of ecological data are increasingly available from biochemical tracer techniques. Extensive opportunities exist to improve the empirical robustness of ecosystem models through the incorporation of biochemical tracer data and derived indices, an area that is rapidly expanding because of advances in analytical developments and sophisticated statistical techniques. Here, we explore the trophic information required by ecosystem model frameworks (species, individual, and size based) and match them to the most commonly used biochemical tracers (bulk tissue and compound-specific stable isotopes, fatty acids, and trace elements). Key quantitative parameters derived from biochemical tracers include estimates of diet composition, niche width, and trophic position. Biochemical tracers also provide powerful insight into the spatial and temporal variability of food web structure and the characterization of dominant basal and microbial food web groups. A major challenge in incorporating biochemical tracer data into ecosystem models is scale and data type mismatches, which can be overcome with greater knowledge exchange and numerical approaches that transform, integrate, and visualize data.
Kanazawa, Kiyoshi; Sueshige, Takumi; Takayasu, Hideki; Takayasu, Misako
2018-03-30
A microscopic model is established for financial Brownian motion from the direct observation of the dynamics of high-frequency traders (HFTs) in a foreign exchange market. Furthermore, a theoretical framework parallel to molecular kinetic theory is developed for the systematic description of the financial market from microscopic dynamics of HFTs. We report first on a microscopic empirical law of traders' trend-following behavior by tracking the trajectories of all individuals, which quantifies the collective motion of HFTs but has not been captured in conventional order-book models. We next introduce the corresponding microscopic model of HFTs and present its theoretical solution paralleling molecular kinetic theory: Boltzmann-like and Langevin-like equations are derived from the microscopic dynamics via the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy. Our model is the first microscopic model that has been directly validated through data analysis of the microscopic dynamics, exhibiting quantitative agreements with mesoscopic and macroscopic empirical results.
Probabilistic analysis of tsunami hazards
Geist, E.L.; Parsons, T.
2006-01-01
Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
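A minimal sketch of the Monte Carlo side of PTHA follows, assuming a hypothetical two-source catalog with Poisson occurrence and lognormal runup per event; all rates and distribution parameters are placeholders, not values from the case studies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-source catalog: occurrence rates (events/yr) and
# lognormal runup per event. All numbers are illustrative placeholders.
rates = np.array([0.01, 0.002])        # e.g., regional and far-field
log_mu = np.array([0.5, 1.2])          # log-mean runup (log meters)
log_sd = np.array([0.5, 0.4])

years, n_sims = 50, 20_000
max_runup = np.zeros(n_sims)
for rate, mu, sd in zip(rates, log_mu, log_sd):
    counts = rng.poisson(rate * years, n_sims)
    for i in np.nonzero(counts)[0]:
        max_runup[i] = max(max_runup[i],
                           rng.lognormal(mu, sd, counts[i]).max())

# Hazard curve: probability of exceeding a runup threshold in 50 years
for h in (1.0, 2.0, 5.0):
    print(f"P(runup > {h} m in {years} yr) = {(max_runup > h).mean():.4f}")
```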
NASA Technical Reports Server (NTRS)
Sittler, Edward C., Jr.; Guhathakurta, Madhulika
1999-01-01
We have developed a two-dimensional semiempirical MHD model of the solar corona and solar wind. The model uses empirically derived electron density profiles from white-light coronagraph data measured during the Skylab period and an empirically derived model of the magnetic field which is fitted to observed streamer topologies, which also come from the white-light coronagraph data. The electron density model comes from that developed by Guhathakurta and coworkers. The electron density model is extended into interplanetary space by using electron densities derived from the Ulysses plasma instrument. The model also requires an estimate of the solar wind velocity as a function of heliographic latitude and the radial component of the magnetic field at 1 AU, both of which can be provided by the Ulysses spacecraft. The model makes estimates as a function of radial distance and latitude of various fluid parameters of the plasma such as flow velocity V, effective temperature T(sub eff), and effective heat flux q(sub eff), which are derived from the equations of conservation of mass, momentum, and energy, respectively. The term effective indicates that wave contributions could be present. The model naturally provides the spiral pattern of the magnetic field far from the Sun and an estimate of the large-scale surface magnetic field at the Sun, which we estimate to be approx. 12 - 15 G. The magnetic field model shows that the large-scale surface magnetic field is dominated by an octupole term. The model is a steady state calculation which makes the assumption of azimuthal symmetry and solves the various conservation equations in the rotating frame of the Sun. The conservation equations are integrated along the magnetic field direction in the rotating frame of the Sun, thus providing a nearly self-consistent calculation of the fluid parameters. The model makes a minimum number of assumptions about the physics of the solar corona and solar wind and should provide a very accurate empirical description of the solar corona and solar wind. Once estimates of mass density rho, flow velocity V, effective temperature T(sub eff), effective heat flux q(sub eff), and magnetic field B are computed from the model and waves are assumed unimportant, all other plasma parameters such as Mach number, Alfven speed, gyrofrequency, etc. can be derived as a function of radial distance and latitude from the Sun. The model can be used as a planning tool for such missions as Solar Probe and provide an empirical framework for theoretical models of the solar corona and solar wind. The model will be used to construct a semiempirical MHD description of the steady state solar corona and solar wind using the SOHO Large Angle Spectrometric Coronagraph (LASCO) polarized brightness white-light coronagraph data, SOHO Extreme Ultraviolet Imaging Telescope data, and Ulysses plasma data.
USDA-ARS?s Scientific Manuscript database
Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...
Towards a New Functional Anatomy of Language
ERIC Educational Resources Information Center
Poeppel, David; Hickok, Gregory
2004-01-01
The classical brain-language model derived from the work of Broca, Wernicke, Lichtheim, Geschwind, and others has been useful as a heuristic model that stimulates research and as a clinical model that guides diagnosis. However, it is now uncontroversial that the classical model is (i) empirically wrong in that it cannot account for the range of…
NASA Astrophysics Data System (ADS)
Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu
2016-04-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet coma (<400 km) of comet 67P/Churyumov-Gerasimenko for the pre-equinox portion of its orbit. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model significantly farther from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model becomes significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean-state empirical model is the ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on the single point measurement. In this presentation we will present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
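For comparison, the simpler empirical baseline named above, the Haser model, can be written in a few lines. The production rate, outflow speed, and lifetime below are order-of-magnitude placeholders only, not fitted 67P values.

```python
import numpy as np

def haser_density(r, Q=1e28, v=700.0, tau=1e5):
    """Haser-model number density (molecules/m^3) at cometocentric
    distance r (m): isotropic outflow of Q molecules/s at speed v (m/s)
    with photodestruction lifetime tau (s). Placeholder parameter
    values, roughly 67P-like in order of magnitude only."""
    return Q / (4.0 * np.pi * v * r**2) * np.exp(-r / (v * tau))

# Example: density 100 km from the nucleus
print(haser_density(1e5))
```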
Competency-Based Curriculum Development: A Pragmatic Approach
ERIC Educational Resources Information Center
Broski, David; And Others
1977-01-01
Examines the concept of competency-based education, describes an experience-based model for its development, and discusses some empirically derived rules-of-thumb for its application in allied health. (HD)
Empirical algorithms for ocean optics parameters
NASA Astrophysics Data System (ADS)
Smart, Jeffrey H.
2007-06-01
As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.
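A minimal sketch of the kind of empirical algorithm described, assuming hypothetical paired values of diffuse attenuation Kd and beam attenuation c; a power law is fitted in log space. The numbers are illustrative placeholders, not WOOD data, and the fitted coefficients are not the paper's.

```python
import numpy as np

# Hypothetical paired measurements of diffuse attenuation Kd and beam
# attenuation c (both 1/m); real fits would use database profiles.
kd = np.array([0.05, 0.08, 0.12, 0.20, 0.35])
c = np.array([0.18, 0.25, 0.36, 0.55, 0.90])

# Fit an empirical power law c = A * Kd**B in log space
B, logA = np.polyfit(np.log(kd), np.log(c), 1)
print("c ≈ %.2f * Kd^%.2f" % (np.exp(logA), B))
```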
ESTIMATION OF CHEMICAL TOXICITY TO WILDLIFE SPECIES USING INTERSPECIES CORRELATION MODELS
Ecological risks to wildlife are typically assessed using toxicity data for relataively few species and with limited understanding of differences in species sensitivity to contaminants. Empirical interspecies correlation models were derived from LD50 values for 49 wildlife speci...
Linking agent-based models and stochastic models of financial markets
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene
2012-01-01
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
A semi-empirical model for estimating surface solar radiation from satellite data
NASA Astrophysics Data System (ADS)
Janjai, Serm; Pattarapanitchai, Somjet; Wattan, Rungrat; Masiri, Itsara; Buntoung, Sumaman; Promsen, Worrapass; Tohsing, Korntip
2013-05-01
This paper presents a semi-empirical model for estimating surface solar radiation from satellite data for a tropical environment. The model expresses solar irradiance as a semi-empirical function of cloud index, aerosol optical depth, precipitable water, total column ozone and air mass. The cloud index data were derived from the MTSAT-1R satellite, whereas the aerosol optical depth data were obtained from the MODIS/Terra satellite. The total column ozone data were derived from the OMI/AURA satellite and the precipitable water data were obtained from NCEP/NCAR. A five-year period (2006-2010) of these data and global solar irradiance measured at four sites in Thailand, namely Chiang Mai (18.78 °N, 98.98 °E), Nakhon Pathom (13.82 °N, 100.04 °E), Ubon Ratchathani (15.25 °N, 104.87 °E) and Songkhla (7.20 °N, 100.60 °E), were used to derive the coefficients of the model. To evaluate its performance, the model was used to calculate solar radiation at four other sites in Thailand, namely Phisanulok (16.93 °N, 100.24 °E), Kanchanaburi (14.02 °N, 99.54 °E), Nongkhai (17.87 °N, 102.72 °E) and Surat Thani (9.13 °N, 99.15 °E), and the results were compared with solar radiation measured at these sites. It was found that the root mean square difference (RMSD) between measured and calculated values of hourly solar radiation was in the range of 25.5-29.4%. The RMSD is reduced to 10.9-17.0% for the case of monthly average hourly radiation. The proposed model has the advantages of simplicity of application and reasonable accuracy.
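A minimal sketch of how such semi-empirical coefficients can be obtained by least squares, assuming hypothetical matchup rows of cloud index, aerosol optical depth, precipitable water, ozone, and air mass against a measured clearness index. The values are illustrative placeholders, not the paper's data or its fitted model.

```python
import numpy as np

# Columns: cloud index, aerosol optical depth, precipitable water (cm),
# total column ozone (atm-cm), air mass. Rows are placeholder matchups.
X = np.array([[0.10, 0.30, 3.5, 0.26, 1.2],
              [0.40, 0.50, 4.0, 0.27, 1.5],
              [0.70, 0.40, 4.5, 0.25, 2.0],
              [0.20, 0.60, 5.0, 0.26, 1.1],
              [0.55, 0.35, 4.2, 0.28, 1.8],
              [0.05, 0.25, 3.0, 0.25, 1.3],
              [0.30, 0.45, 4.8, 0.27, 1.4]])
y = np.array([0.70, 0.47, 0.24, 0.58, 0.35, 0.74, 0.52])  # clearness index

# Least-squares coefficients of a linear semi-empirical model
A = np.hstack([X, np.ones((len(X), 1))])   # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
rmsd = np.sqrt(np.mean((pred - y) ** 2)) / y.mean() * 100
print(coef, f"relative RMSD = {rmsd:.1f}%")
```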
Empirical modeling of dynamic behaviors of pneumatic artificial muscle actuators.
Wickramatunge, Kanchana Crishan; Leephakpreeda, Thananchai
2013-11-01
Pneumatic Artificial Muscle (PAM) actuators yield muscle-like mechanical actuation with a high force-to-weight ratio, a soft and flexible structure, and adaptable compliance for rehabilitation and prosthetic appliances for the disabled as well as for humanoid robots or machines. The present study develops empirical models of PAM actuators, that is, a PAM coupled with pneumatic control valves, in order to describe their dynamic behaviors for practical control design and usage. Empirical modeling is an efficient approach to computer-based modeling based on observations of real behaviors. Differences in the dynamic behaviors of individual PAM actuators are due not only to the structures of the PAM actuators themselves, but also to variations in their material properties introduced during manufacturing. To overcome these difficulties, the proposed empirical models are derived experimentally from the real physical behaviors of the PAM actuators being implemented. In case studies, the simulated results show good agreement with experimental results, demonstrating that the proposed methodology can describe the dynamic behaviors of real PAM actuators. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A
2015-04-21
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20(th) century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.
Design of exchange-correlation functionals through the correlation factor approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlíková Přecechtělová, Jana, E-mail: j.precechtelova@gmail.com, E-mail: Matthias.Ernzerhof@UMontreal.ca; Institut für Chemie, Theoretische Chemie / Quantenchemie, Sekr. C7, Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin; Bahmann, Hilke
The correlation factor model is developed in which the spherically averaged exchange-correlation hole of Kohn-Sham theory is factorized into an exchange hole model and a correlation factor. The exchange hole model reproduces the exact exchange energy per particle. The correlation factor is constructed in such a manner that the exchange-correlation energy correctly reduces to exact exchange in the high density and rapidly varying limits. Four different correlation factor models are presented which satisfy varying sets of physical constraints. Three models are free from empirical adjustments to experimental data, while one correlation factor model draws on one empirical parameter. The correlation factor models are derived in detail and the resulting exchange-correlation holes are analyzed. Furthermore, the exchange-correlation energies obtained from the correlation factor models are employed to calculate total energies, atomization energies, and barrier heights. It is shown that accurate, non-empirical functionals can be constructed building on exact exchange. Avenues for further improvements are outlined as well.
NASA Astrophysics Data System (ADS)
Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi
2018-03-01
We performed benchmark studies on the molecular geometry, electron properties and vibrational analysis of imidazole using semi-empirical, density functional theory and post Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then examined the structural, physical and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were carried out for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to establish the relationships between molecular descriptors and the activity of the imidazole derivatives. The results validate the derived QSAR model.
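A minimal sketch of the multiple linear regression step of such a QSAR study, assuming a hypothetical descriptor matrix (e.g., logP, polarizability, dipole moment) and activity values; all numbers are placeholders, not the paper's descriptors or data.

```python
import numpy as np

# Hypothetical descriptors for five imidazole derivatives:
# columns could be logP, polarizability, dipole moment (placeholders).
X = np.array([[2.1, 24.0, 3.9],
              [2.8, 26.5, 4.2],
              [1.7, 22.1, 3.1],
              [3.2, 28.0, 4.8],
              [2.5, 25.2, 4.0]])
y = np.array([6.2, 6.9, 5.8, 7.4, 6.5])   # e.g., pIC50-like activities

A = np.hstack([X, np.ones((len(X), 1))])  # add intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - ((A @ coef - y) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("coefficients:", coef, " R^2 = %.3f" % r2)
```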
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
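Because the parameters are regression weights under positivity restrictions, they can be estimated with non-negative least squares. A minimal sketch follows, using a hypothetical feature incidence matrix and distances; the bootstrap or theoretical standard errors of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical design matrix A (which features separate which object
# pairs) and observed proximities d, for illustration only.
A = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.],
              [1., 1., 1.]])
d = np.array([2.0, 2.5, 1.8, 3.1])

theta, residual = nnls(A, d)   # least squares with theta >= 0
print("feature weights:", theta, "residual norm:", residual)
```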
Empirical and semi-analytical models for predicting peak outflows caused by embankment dam failures
NASA Astrophysics Data System (ADS)
Wang, Bo; Chen, Yunliang; Wu, Chao; Peng, Yong; Song, Jiajun; Liu, Wenjun; Liu, Xin
2018-07-01
Prediction of the peak discharge of floods has attracted great attention from researchers and engineers. In the present study, nine typical nonlinear mathematical models are established based on a database of 40 historical dam failures. The first eight models, developed through a series of regression analyses, are purely empirical, while the last one is a semi-analytical approach derived from an analytical solution of dam-break floods in a trapezoidal channel. Water depth above breach invert (Hw), volume of water stored above breach invert (Vw), embankment length (El), and average embankment width (Ew) are used as independent variables to develop empirical formulas for estimating the peak outflow from breached embankment dams. The multiple regression analysis indicates that a function using the former two variables (i.e., Hw and Vw) produces considerably more accurate results than one using the latter two (i.e., El and Ew). The semi-analytical approach works best in terms of both prediction accuracy and uncertainty, and the established empirical models produce reasonably accurate results except for the model using only El. Moreover, the present models have been compared with other models available in the literature for estimating peak discharge.
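A minimal sketch of fitting a power-law peak-outflow formula of the form Qp = a·Hw^b·Vw^c in log space. The four records below are placeholders standing in for the 40-dam database; the fitted exponents are not the paper's.

```python
import numpy as np

# Hypothetical dam-failure records: water depth Hw (m), stored volume
# Vw (10^6 m^3), observed peak outflow Qp (m^3/s). Placeholder values.
Hw = np.array([10., 20., 35., 50.])
Vw = np.array([2., 15., 80., 300.])
Qp = np.array([800., 4000., 16000., 60000.])

# Fit log Qp = log a + b log Hw + c log Vw by least squares
A = np.column_stack([np.ones_like(Hw), np.log(Hw), np.log(Vw)])
(loga, b, c), *_ = np.linalg.lstsq(A, np.log(Qp), rcond=None)
print(f"Qp ≈ {np.exp(loga):.1f} * Hw^{b:.2f} * Vw^{c:.2f}")
```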
Dementia and Depression: A Process Model for Differential Diagnosis.
ERIC Educational Resources Information Center
Hill, Carrie L.; Spengler, Paul M.
1997-01-01
Delineates a process model for mental-health counselors to follow in formulating a differential diagnosis of dementia and depression in adults 65 years and older. The model is derived from empirical, theoretical, and clinical sources of evidence. Explores components of the clinical interview, of hypothesis formation, and of hypothesis testing.…
Interest Rates and Coupon Bonds in Quantum Finance
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.
2009-09-01
1. Synopsis; 2. Interest rates and coupon bonds; 3. Options and option theory; 4. Interest rate and coupon bond options; 5. Quantum field theory of bond forward interest rates; 6. Libor Market Model of interest rates; 7. Empirical analysis of forward interest rates; 8. Libor Market Model of interest rate options; 9. Numeraires for bond forward interest rates; 10. Empirical analysis of interest rate caps; 11. Coupon bond European and Asian options; 12. Empirical analysis of interest rate swaptions; 13. Correlation of coupon bond options; 14. Hedging interest rate options; 15. Interest rate Hamiltonian and option theory; 16. American options for coupon bonds and interest rates; 17. Hamiltonian derivation of coupon bond options; Appendixes; Glossaries; List of symbols; Reference; Index.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Sheng; Li, Hongyi; Huang, Maoyi
2014-07-21
Subsurface stormflow is an important component of the rainfall–runoff response, especially in steep terrain. Its contribution to total runoff is, however, poorly represented in the current generation of land surface models. The lack of physical basis of these common parameterizations precludes a priori estimation of the stormflow (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global land surface models. This paper is aimed at deriving regionalized parameterizations of the storage–discharge relationship relating to subsurface stormflow from a top-down empirical data analysis of streamflow recession curves extracted from 50 eastern United States catchments. Detailed regression analyses were performed between parameters of the empirical storage–discharge relationships and the controlling climate, soil and topographic characteristics. The regression analyses performed on empirical recession curves at catchment scale indicated that the coefficient of the power-law form storage–discharge relationship is closely related to the catchment hydrologic characteristics, which is consistent with the hydraulic theory derived mainly at the hillslope scale. As for the exponent, besides the role of field scale soil hydraulic properties as suggested by hydraulic theory, it is found to be more strongly affected by climate (aridity) at the catchment scale.
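A minimal sketch of the recession-curve step, assuming one hypothetical rain-free recession segment; fitting -dQ/dt = a·Q^b in log space recovers the coefficient and exponent that the paper regionalizes against climate, soil, and topography. The streamflow values are placeholders.

```python
import numpy as np

# Hypothetical recession segment: daily streamflow (m^3/s) during a
# rain-free period; a real analysis extracts many segments per basin.
q = np.array([12.0, 9.8, 8.2, 7.0, 6.1, 5.4, 4.9])

dqdt = -np.diff(q)                 # recession rate per day (positive)
qmid = 0.5 * (q[1:] + q[:-1])      # midpoint discharge

# Fit -dQ/dt = a * Q^b in log space; a and b characterize the
# power-law storage-discharge relationship of the catchment.
b, loga = np.polyfit(np.log(qmid), np.log(dqdt), 1)
print(f"-dQ/dt ≈ {np.exp(loga):.3f} * Q^{b:.2f}")
```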
A Moisture Function of Soil Heterotrophic Respiration Derived from Pore-scale Mechanisms
NASA Astrophysics Data System (ADS)
Yan, Z.; Todd-Brown, K. E.; Bond-Lamberty, B. P.; Bailey, V.; Liu, C.
2017-12-01
Soil heterotrophic respiration (HR) is an important process controlling carbon (C) flux, but its response to changes in soil water content (θ) is poorly understood. Earth system models (ESMs) use empirical moisture functions developed from specific sites to describe the HR-θ relationship in soils, introducing significant uncertainty. Generalized models derived from the mechanisms that control substrate availability and microbial respiration are thus urgently needed. Here we derive, present, and test a novel moisture function fp developed from pore-scale mechanisms. This fp encapsulates the primary physicochemical and biological processes controlling the HR response to moisture variation in soils. We tested fp against a wide range of published data for different soil types, and found that fp reliably predicted diverse HR-θ relationships. The mathematical relationship between the parameters in fp and macroscopic soil properties such as porosity and organic C content was also established, enabling fp to be estimated from soil properties. Compared with the empirical moisture functions used in ESMs, the derived fp could reduce uncertainty in predicting the response of soil organic C stocks to climate change. In addition, this work is one of the first studies to upscale a mechanistic soil HR model based on pore-scale processes, thus linking pore-scale mechanisms with macroscale observations.
NASA Astrophysics Data System (ADS)
Holden, Z.; Cushman, S.; Evans, J.; Littell, J. S.
2009-12-01
The resolution of current climate interpolation models limits our ability to adequately account for temperature variability in complex mountainous terrain. We empirically derive 30 meter resolution models of June-October day and nighttime temperature and April nighttime Vapor Pressure Deficit (VPD) using hourly data from 53 Hobo dataloggers stratified by topographic setting in mixed conifer forests near Bonners Ferry, ID. 66% of the variability in average June-October daytime temperature is explained by 3 variables (elevation, relative slope position and topographic roughness) derived from 30 meter digital elevation models. 69% of the variability in nighttime temperatures among stations is explained by elevation, relative slope position and topographic dissection (450 meter window). 54% of the variability in April nighttime VPD is explained by elevation, soil wetness and the NDVIc derived from Landsat. We extract temperature and VPD predictions at 411 intensified Forest Inventory and Analysis (FIA) plots. We use these variables with soil wetness and solar radiation indices derived from a 30 meter DEM to predict the presence and absence of 10 common forest tree species and 25 shrub species. Classification accuracies range from 87% for Pinus ponderosa to >97% for most other tree species. Shrub model accuracies are also high, with greater than 90% accuracy for the majority of species. Species distribution models based on the physical variables that drive species occurrence, rather than their topographic surrogates, will eventually allow us to predict potential future distributions of these species with warming climate at fine spatial scales.
Dierssen, Heidi M
2010-10-05
Phytoplankton biomass and productivity have been continuously monitored from ocean color satellites for over a decade. Yet, the most widely used empirical approach for estimating chlorophyll a (Chl) from satellites can be in error by a factor of 5 or more. Such variability is due to differences in absorption and backscattering properties of phytoplankton and related concentrations of colored-dissolved organic matter (CDOM) and minerals. The empirical algorithms have built-in assumptions that follow the basic precept of biological oceanography--namely, oligotrophic regions with low phytoplankton biomass are populated with small phytoplankton, whereas more productive regions contain larger bloom-forming phytoplankton. With a changing world ocean, phytoplankton composition may shift in response to altered environmental forcing, and CDOM and mineral concentrations may become uncoupled from phytoplankton stocks, creating further uncertainty and error in the empirical approaches. Hence, caution is warranted when using empirically derived Chl to infer climate-related changes in ocean biology. The Southern Ocean is already experiencing climatic shifts and shows substantial errors in satellite-derived Chl for different phytoplankton assemblages. Accurate global assessments of phytoplankton will require improved technology and modeling, enhanced field observations, and ongoing validation of our "eyes in space."
Time optimal control of a jet engine using a quasi-Hermite interpolation model. M.S. Thesis
NASA Technical Reports Server (NTRS)
Comiskey, J. G.
1979-01-01
This work made preliminary efforts to generate nonlinear numerical models of a two-spooled turbofan jet engine, and subject these models to a known method of generating global, nonlinear, time optimal control laws. The models were derived numerically, directly from empirical data, as a first step in developing an automatic modelling procedure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mou, J.I.; King, C.
The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. Deterministic modeling technique was used to derive models for machine performance assessment and enhancement. Sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.
NASA Astrophysics Data System (ADS)
Droghei, Riccardo; Salusti, Ettore
2013-04-01
Control of drilling parameters such as fluid pressure, mud weight, and salt concentration is essential to avoid instabilities when drilling through shale sections. To investigate shale deformation, fundamental for deep oil drilling and hydraulic fracturing for gas extraction ("fracking"), a non-linear model of mechanical and chemo-poroelastic interactions among fluid, solute and the solid matrix is here discussed. The two equations of this model describe the isothermal evolution of fluid pressure and solute density in a fluid-saturated porous rock. Their solutions are quick non-linear Burgers solitary waves, potentially destructive for deep operations. In this analysis the effect of diffusion, which can play a particular role in fracking, is investigated. Then, following Civan (1998), both diffusive and shock waves are applied to fine-particle filtration due to such quick transients, their effect on the adjacent rocks, and the resulting time-delayed evolution. Notice how time delays in simple porous media dynamics have recently been analyzed using a fractional derivative approach. To make a tentative comparison of these two deeply different methods, in our model we insert fractional time derivatives, i.e., a kind of time-average of the fluid-rock interactions. The delaying effects of fine-particle filtration are then compared with the fractional model time delays. All this can be seen as an empirical check of these fractional models.
NASA Astrophysics Data System (ADS)
Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.
2018-03-01
We have estimated soil moisture (SM) by using the circular horizontal polarization backscattering coefficient (σ°RH), the difference of the circular vertical and horizontal backscattering coefficients (σ°RV − σ°RH) from FRS-1 data of the Radar Imaging Satellite (RISAT-1), and surface roughness in terms of RMS height. We examined the performance of FRS-1 in retrieving SM under a wheat crop at the tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate SM of the upper soil layer using RISAT-1 SAR data, rather than using an existing empirical model based on only a single parameter, i.e., σ°. Near-surface SM measurements were related to σ°RH and σ°RV − σ°RH derived using the 5.35 GHz (C-band) image of RISAT-1, and to RMS height. The roughness component, expressed as RMS height, showed a good positive correlation with σ°RV − σ°RH (R² = 0.65). By considering all the major influencing factors (σ°RH, σ°RV − σ°RH, and RMS height), an SEM was developed in which the predicted volumetric SM depends on these three variables. This SEM showed R² = 0.87 and adjusted R² = 0.85, multiple R = 0.94, and a standard error of 0.05 at the 95% confidence level. Validation of the SM derived from the semi-empirical model against observed measurements (SM_Observed) showed root mean square error (RMSE) = 0.06, relative RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 (≈1), index of agreement (d) = 1, coefficient of determination (R²) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, and variance of the distribution of differences (S_d²) = 0.004. The developed SEM showed better performance in estimating SM than the Topp empirical model, which is based only on σ°. Using the developed SEM, top-soil SM can be estimated with a low mean absolute percent error (MAPE = 1.39) and can be used for operational applications.
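For reference, the goodness-of-fit statistics quoted above follow standard definitions; a minimal sketch of three of them (RMSE, Nash-Sutcliffe efficiency, index of agreement), with placeholder observed and simulated soil moisture values:

```python
import numpy as np

def validation_metrics(obs, sim):
    """RMSE, Nash-Sutcliffe efficiency, and Willmott index of agreement,
    computed from their standard definitions."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err**2))
    nse = 1 - np.sum(err**2) / np.sum((obs - obs.mean())**2)
    d = 1 - np.sum(err**2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))**2)
    return rmse, nse, d

# Placeholder volumetric soil moisture values, for illustration only
print(validation_metrics([0.20, 0.25, 0.31], [0.22, 0.24, 0.30]))
```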
Large wood influence on stream metabolism at a reach-scale in the Assabet River, Massachusetts
NASA Astrophysics Data System (ADS)
David, G. C. L.; Snyder, N. P.; Rosario, G. M.
2016-12-01
Total stream metabolism (TSM) represents the transfer of carbon through a channel by both primary production and respiration, and thus represents the movement of energy through a watershed. Large wood (LW) creates geomorphically complex channels by diverting flows, altering shear stresses on the channel bed and banks, and promoting pool development. The increase in habitat complexity around LW is expected to increase TSM, but this change has not been directly measured. In this study, we measured changes in TSM around a LW jam in a Massachusetts river. Dissolved oxygen (DO) time series data are used to quantify gross primary production (GPP) and ecosystem respiration (ER), which equal TSM when summed. The two primary objectives of this study are to (1) assess changes in TSM around LW and (2) compare empirical methods of deriving TSM to Grace et al.'s (2015) BASE model. We hypothesized that LW would increase TSM by providing larger pools, increasing coverage for fish and macroinvertebrates, increasing organic matter accumulation, and providing a place for primary producers to anchor and grow. The Assabet River is a 78 km2 drainage basin in central Massachusetts that provides public water supply to 7 towns. The change in TSM at the reach scale was assessed using two YSI 6-Series Multiparameter Water Quality sondes over a 140 m long pool-riffle open meadow section. The reach included 6 pools and one LW jam. Every two weeks from July to November 2015, the sondes were moved to different pools. The sondes collected DO, temperature, depth, pH, salinity, light intensity, and turbidity at 15-minute intervals. Velocity (V) and discharge (Q) were measured weekly around the sondes and at established cross sections. Instantaneous V and Q were calculated for each sonde by modeling flows in HEC-RAS. Overall, TSM was heavily influenced by pool size and indirectly by the LW jam, which was associated with the largest pool. The largest error in the TSM calculations is related to the empirically calculated reaeration flux (k), which represents oxygen inputs from the atmosphere. We used two well-established empirical equations to compare k values to the BASE model. The model agreed with empirically derived values during intermediate and high Q. Modeled GPP and ER diverged, sometimes by an order of magnitude, from the empirically derived results during the lowest flows.
GPS-Derived Precipitable Water Compared with the Air Force Weather Agency’s MM5 Model Output
2002-03-26
and less than 100 sensors are available throughout Europe. While the receiver density is currently comparable to the upper-air sounding network...profiles from 38 upper air sites throughout Europe. Based on these empirical formulae and simplifications, Bevis (1992) has determined that the error...Alaska using Bevis' (1992) empirical correlation based on 8718 radiosonde calculations over 2 years. Other studies have been conducted in Europe and
Recent solar extreme ultraviolet irradiance observations and modeling: A review
NASA Technical Reports Server (NTRS)
Tobiska, W. Kent
1993-01-01
For more than 90 years, solar extreme ultraviolet (EUV) irradiance modeling has progressed from empirical blackbody radiation formulations, through fudge factors, to typically measured irradiances and reference spectra as well as time-dependent empirical models representing continua and line emissions. A summary of recent EUV measurements by five rockets and three satellites during the 1980s is presented along with the major modeling efforts. The most significant reference spectra are reviewed and three independently derived empirical models are described. These include Hinteregger's 1981 SERF1, Nusinov's 1984 two-component, and Tobiska's 1990/1991/SERF2/EUV91 flux models. They each provide daily full-disk broad-spectrum flux values from 2 to 105 nm at 1 AU. All the models depend to one degree or another on the long time series of the Atmosphere Explorer E (AE-E) EUV database. Each model uses ground- and/or space-based proxies to create emissions from solar atmospheric regions. Future challenges in EUV modeling are summarized, including the basic requirements of models, the task of incorporating new observations and theory into the models, the task of comparing models with solar-terrestrial data sets, and long-term goals and modeling objectives. By the late 1990s, empirical models will potentially be improved through the use of proposed solar EUV irradiance measurements and images at selected wavelengths that will greatly enhance modeling and predictive capabilities.
Students' Acceptance of Tablet PCs in the Classroom
ERIC Educational Resources Information Center
Ifenthaler, Dirk; Schweinbenz, Volker
2016-01-01
In recent years digital technologies, such as tablet personal computers (TPCs), have become an integral part of a school's infrastructure and are seen as a promising way to facilitate students' learning processes. This study empirically tested a theoretical model derived from the technology acceptance model containing key constructs developed in…
Resistivity of liquid metals on Veljkovic-Slavic pseudopotential
NASA Astrophysics Data System (ADS)
Abdel-Azez, Khalef
1996-04-01
An empirical form of a screened model pseudopotential, proposed by Veljkovic and Slavic, is exploited for the calculation of the resistivity of seven liquid metals through the correct re-determination of its parameters. The model derives qualitative support from the close agreement obtained between the computed results and experiment.
Menzerath-Altmann Law: Statistical Mechanical Interpretation as Applied to a Linguistic Organization
NASA Astrophysics Data System (ADS)
Eroglu, Sertac
2014-10-01
The distribution behavior described by the empirical Menzerath-Altmann law is frequently encountered during the self-organization of linguistic and non-linguistic natural organizations at various structural levels. This study presents a statistical mechanical derivation of the law based on the analogy between the classical particles of a statistical mechanical organization and the distinct words of a textual organization. The derived model, a transformed (generalized) form of the Menzerath-Altmann model, was termed the statistical mechanical Menzerath-Altmann model. The derived model allows interpreting the model parameters in terms of physical concepts. We also propose that many organizations presenting the Menzerath-Altmann law behavior, whether linguistic or not, can be methodically examined by the transformed distribution model through the properly defined structure-dependent parameter and the energy associated states.
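For reference, the classical Menzerath-Altmann model that the abstract transforms is usually written with constituent size y as a function of construct size x and fitted parameters a, b, c:

```latex
% Classical Menzerath-Altmann model: constituent size y versus
% construct size x, with empirically fitted parameters a, b, c.
\[
  y(x) = a\,x^{b}\,e^{-c x}
\]
```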
An Education for Peace Model That Centres on Belief Systems: The Theory behind The Model
ERIC Educational Resources Information Center
Willis, Alison
2017-01-01
The education for peace model (EFPM) presented in this paper was developed within a theoretical framework of complexity science and critical theory and was derived from a review of an empirical research project conducted in a conflict affected environment. The model positions belief systems at the centre and is socioecologically systemic in design…
Atomic scale simulations for improved CRUD and fuel performance modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersson, Anders David Ragnar; Cooper, Michael William Donald
2017-01-06
A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.
Validation of a Global Hydrodynamic Flood Inundation Model
NASA Astrophysics Data System (ADS)
Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.
2014-12-01
In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankful return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok in Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.
An Attempt to Derive the epsilon Equation from a Two-Point Closure
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Cheng, Y.; Howard, A. M.
2010-01-01
The goal of this paper is to derive the equation for the turbulence dissipation rate ε for a shear-driven flow. In 1961, Davydov used a one-point closure model to derive the ε equation from first principles, but the final result contained undetermined terms and thus lacked predictive power. In 1987 and again in 2001, attempts were made to derive the ε equation from first principles using a two-point closure, but those methods relied on a phenomenological assumption. The standard practice has thus been to employ a heuristic form of the equation that contains three empirical ingredients: two constants, c_1ε and c_2ε, and a diffusion term D_ε. In this work, a two-point closure is employed, yielding the following results: 1) the empirical constants are replaced by c_1 and c_2, which are now functions of K and ε; 2) c_1 and c_2 are not independent, because a general relation between the two, valid for any K and ε, is derived; 3) for homogeneous flows, c_1 and c_2 become constant, with values close to the empirical values c_1ε and c_2ε; and 4) for inhomogeneous flows, the empirical form of the diffusion term D_ε is no longer needed, because it is replaced by the K-ε dependence of c_1 and c_2, which plays the role of the diffusion, together with the diffusion of the turbulent kinetic energy D_K, which now enters the new equation. Thus, the three empirical ingredients c_1ε, c_2ε, and D_ε are replaced by a single function c_1(K, ε) or c_2(K, ε), plus a D_K term. Three tests of the new ε equation are presented: one concerning channel flow and two concerning the shear-driven planetary boundary layer (PBL).
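For reference, the heuristic dissipation-rate equation the abstract refers to is commonly written in the following K-ε form; this is the textbook version, with shear production P and eddy viscosity ν_t assumed here, and it may differ in detail from the form used by the authors.

```latex
% Standard heuristic epsilon equation (common K-epsilon form); the
% abstract's c_1eps, c_2eps, D_eps appear below as c_{1\varepsilon},
% c_{2\varepsilon}, D_\varepsilon, with P the shear production.
\[
\frac{\partial \varepsilon}{\partial t}
  + U_j \frac{\partial \varepsilon}{\partial x_j}
  = c_{1\varepsilon}\,\frac{\varepsilon}{K}\,P
  - c_{2\varepsilon}\,\frac{\varepsilon^{2}}{K}
  + D_{\varepsilon},
\qquad
D_{\varepsilon} = \frac{\partial}{\partial x_j}
  \left( \frac{\nu_t}{\sigma_\varepsilon}
  \frac{\partial \varepsilon}{\partial x_j} \right)
\]
```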
ERIC Educational Resources Information Center
Kim, Sangwon; Orpinas, Pamela; Kamphaus, Randy; Kelder, Steven H.
2011-01-01
This study empirically derived a multiple risk factors model of the development of aggression among middle school students in urban, low-income neighborhoods, using Hierarchical Linear Modeling (HLM). Results indicated that aggression increased from sixth to eighth grade. Additionally, the influences of four risk domains (individual, family,…
The influence of multi-season imagery on models of canopy cover: A case study
John W. Coulston; Dennis M. Jacobs; Chris R. King; Ivey C. Elmore
2013-01-01
Quantifying tree canopy cover in a spatially explicit fashion is important for broad-scale monitoring of ecosystems and for management of natural resources. Researchers have developed empirical models of tree canopy cover to produce geospatial products. For subpixel models, percent tree canopy cover estimates (derived from fine-scale imagery) serve as the response...
The Teaching-Research Gestalt: The Development of a Discipline-Based Scale
ERIC Educational Resources Information Center
Duff, Angus; Marriott, Neil
2017-01-01
This paper reports the development and empirical testing of a model of the factors that influence the teaching-research nexus. No prior work has attempted to create a measurement model of the nexus. The conceptual model is derived from 19 propositions grouped into four sets of factors relating to: rewards, researchers, curriculum, and students.…
Using Landsat to provide potato production estimates to Columbia Basin farmers and processors
NASA Technical Reports Server (NTRS)
1990-01-01
A summary of project activities relative to the estimation of potato yields in the Columbia Basin is given. Oregon State University is using a two-pronged approach to yield estimation, one using simulation models and the other using purely empirical models. The simulation modeling approach has used satellite observations to determine key dates in the development of the crop for each field identified as potatoes. In particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Two empirical modeling approaches are illustrated. One relates tuber yield to estimates of cumulative intercepted solar radiation; the other relates tuber yield to the integral under the GVI curve.
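As a hedged illustration of the first empirical approach, here is a minimal least-squares sketch relating tuber yield to cumulative intercepted solar radiation; all numbers are invented placeholders, not project data.

```python
import numpy as np

# Hypothetical illustration of the empirical approach described above:
# regress tuber yield on cumulative intercepted solar radiation.
cum_intercepted_rad = np.array([580.0, 640.0, 710.0, 760.0, 820.0])  # MJ/m^2
tuber_yield = np.array([52.0, 58.0, 66.0, 69.0, 75.0])               # t/ha

# Ordinary least-squares line: yield = a * radiation + b
a, b = np.polyfit(cum_intercepted_rad, tuber_yield, 1)
print(f"slope = {a:.3f} t/ha per MJ/m^2, intercept = {b:.1f} t/ha")
```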
NASA Astrophysics Data System (ADS)
Shaman, J.; Stieglitz, M.; Zebiak, S.; Cane, M.; Day, J. F.
2002-12-01
We present an ensemble local hydrologic forecast derived from the seasonal forecasts of the International Research Institute (IRI) for Climate Prediction. Three-month seasonal forecasts were used to resample historical meteorological conditions and generate ensemble forcing datasets for a TOPMODEL-based hydrology model. Eleven retrospective forecasts were run at a Florida site and a New York site. Forecast skill was assessed for mean-area modeled water table depth (WTD), i.e., near-surface soil wetness conditions, and compared with WTD simulated with observed data. Hydrology model forecast skill was evident at the Florida site but not at the New York site. At the Florida site, persistence of hydrologic conditions and local skill of the IRI seasonal forecast contributed to the local hydrologic forecast skill. This forecast will permit probabilistic prediction of future hydrologic conditions. At the Florida site, we have also quantified the link between modeled WTD (i.e., drought) and the amplification and transmission of St. Louis encephalitis virus (SLEV). We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission associated with human clinical cases. We then combine the seasonal forecasts of local, modeled WTD with this empirical relationship and produce retrospective probabilistic seasonal forecasts of epidemic SLEV transmission in Florida. Epidemic SLEV transmission forecast skill is demonstrated. These findings will permit real-time forecasting of drought and resultant SLEV transmission in Florida.
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Winick, Jeremy R.; Russell, James M., III; Mlynczak, Martin G.; Evans, David S.; Bilitza, Dieter; Xu, Xiaojing
2007-01-01
The response of the ionospheric E-region to solar-geomagnetic storms can be characterized using observations of infrared 4.3 micrometer emission. In particular, we utilize nighttime TIMED/SABER measurements of broadband 4.3 micrometer limb emission and derive a new data product, the NO+(v) volume emission rate, which is our primary observation-based quantity for developing an empirical storm-time correction to the IRI E-region electron density. In this paper we describe our E-region proxy and outline our strategy for developing the empirical storm model. In our initial studies, we analyzed a six-day storm period during the Halloween 2003 event. The results of this analysis are promising and suggest that the ap index is a viable candidate to use as a magnetic driver for our model.
Diffusion in silicate melts: III. Empirical models for multicomponent diffusion
NASA Astrophysics Data System (ADS)
Yan, Liang; Richter, Frank M.; Chamberlin, Laurinda
1997-12-01
Empirical models for multicomponent diffusion in an isotropic fluid were derived by splitting the component's dispersion velocity into two parts: (a) an intrinsic velocity which is proportional to each component's electrochemical potential gradient and independent of reference frame and (b) a net interaction velocity which is both model and reference frame dependent. Simple molecules (e.g., M_pO_q) were chosen as endmember components. The interaction velocity is assumed to be either the same for each component (leading to a common relaxation velocity U) or proportional to a common interaction force (F). U or F is constrained by requiring no local buildup in either volume or charge. The most general form of the model-derived diffusion matrix [D] can be written as a product of a model-dependent kinetic matrix [L] and a model-independent thermodynamic matrix [G], [D] = [L]·[G]. The elements of [G] are functions of derivatives of chemical potential with respect to concentration. The elements of [L] are functions of the concentration and partial molar volume of the endmember components, C_i^o and V_i^o, and of the self diffusivity D_i and charge number z_i of the individual diffusing species. When component n is taken as the dependent variable, they can be written in the common form L_ij = D_j δ_ij + C_i^o[(V_n^o D_n − V_j^o D_j)A_i + (p_n z_n D_n − p_j z_j D_j)B_i], where the functional forms of the scaling factors A_i and B_i depend on the model considered. The off-diagonal element L_ij (i ≠ j) is directly proportional to the concentration of component i, and is thus negligible when i is a dilute component. The salient feature of kinetic interaction or relaxation is to slow down larger (in volume or charge) and faster diffusing components and to speed up smaller (in volume or charge) and slower moving species, in order to prevent local volume or charge buildup. The empirical models for multicomponent diffusion were tested in the ternary system CaO-Al2O3-SiO2 at 1500°C and 1 GPa over a large range of melt compositions. Model-derived diffusion matrices calculated using measured self diffusivities (Ca, Al, Si, and O), partial molar volumes, and activities were compared with experimentally derived diffusion matrices at two melt compositions. Chemical diffusion profiles computed using the model-derived diffusion matrices, accounting for the compositional dependence of self diffusivities and activity coefficients, were also compared with the experimentally measured ones. Good agreement was found between the ionic common-force model derived diffusion profiles and the experimentally measured ones. Secondary misfits could result from either inadequacies of the model or inaccuracies in the activity-composition relationship. The results show that both kinetic interactions and thermodynamic nonideality contribute significantly to the observed diffusive coupling in molten CaO-Al2O3-SiO2.
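A minimal sketch of how the [D] = [L]·[G] construction above could be coded, assuming the scaling factors A_i and B_i and a thermodynamic matrix [G] are supplied; all numerical values are illustrative placeholders, not the measured CaO-Al2O3-SiO2 data.

```python
import numpy as np

# Sketch of the model structure: [D] = [L] @ [G], with
# L_ij = D_j*delta_ij + C_i*((Vn*Dn - Vj*Dj)*A_i + (pn*zn*Dn - pj*zj*Dj)*B_i).
# Component n (the dependent one) is the last entry of each array.
def kinetic_matrix(D, C, V, p, z, A, B):
    """D, C, V, p, z: per-component self-diffusivity, concentration,
    partial molar volume, cations per formula unit, cation charge.
    Returns the (n-1) x (n-1) kinetic matrix [L]."""
    n = len(D) - 1                      # index of the dependent component
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            L[i, j] = (D[j] * (i == j)
                       + C[i] * ((V[n] * D[n] - V[j] * D[j]) * A[i]
                                 + (p[n] * z[n] * D[n] - p[j] * z[j] * D[j]) * B[i]))
    return L

# Illustrative 3-component system (dependent component listed last):
D = np.array([1.0e-11, 4.0e-12, 1.5e-12])   # self-diffusivities, m^2/s
C = np.array([0.25, 0.15, 0.60])            # mole fractions
V = np.array([1.0, 1.0, 1.0])               # normalized partial molar volumes
p = np.array([1.0, 2.0, 1.0])               # cations per formula unit
z = np.array([2.0, 3.0, 4.0])               # cation charges
A = np.array([0.5, 0.5])                    # model-dependent scaling factors
B = np.array([0.1, 0.1])
G = np.eye(2)                               # ideal-mixing thermodynamic matrix
print(kinetic_matrix(D, C, V, p, z, A, B) @ G)
```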
Model improvements and validation of TerraSAR-X precise orbit determination
NASA Astrophysics Data System (ADS)
Hackel, S.; Montenbruck, O.; Steigenberger, P.; Balss, U.; Gisinger, C.; Eineder, M.
2017-05-01
The radar imaging satellite mission TerraSAR-X requires precisely determined satellite orbits for validating geodetic remote sensing techniques. Since the achieved quality of the operationally derived, reduced-dynamic (RD) orbit solutions limits the capabilities of the synthetic aperture radar (SAR) validation, an effort is made to improve the estimated orbit solutions. This paper discusses the benefits of refined dynamical models on orbit accuracy as well as estimated empirical accelerations and compares different dynamic models in a RD orbit determination. Modeling aspects discussed in the paper include the use of a macro-model for drag and radiation pressure computation, the use of high-quality atmospheric density and wind models as well as the benefit of high-fidelity gravity and ocean tide models. The Sun-synchronous dusk-dawn orbit geometry of TerraSAR-X results in a particular high correlation of solar radiation pressure modeling and estimated normal-direction positions. Furthermore, this mission offers a unique suite of independent sensors for orbit validation. Several parameters serve as quality indicators for the estimated satellite orbit solutions. These include the magnitude of the estimated empirical accelerations, satellite laser ranging (SLR) residuals, and SLR-based orbit corrections. Moreover, the radargrammetric distance measurements of the SAR instrument are selected for assessing the quality of the orbit solutions and compared to the SLR analysis. The use of high-fidelity satellite dynamics models in the RD approach is shown to clearly improve the orbit quality compared to simplified models and loosely constrained empirical accelerations. The estimated empirical accelerations are substantially reduced by 30% in tangential direction when working with the refined dynamical models. Likewise the SLR residuals are reduced from -3 ± 17 to 2 ± 13 mm, and the SLR-derived normal-direction position corrections are reduced from 15 to 6 mm, obtained from the 2012-2014 period. The radar range bias is reduced from -10.3 to -6.1 mm with the updated orbit solutions, which coincides with the reduced standard deviation of the SLR residuals. The improvements are mainly driven by the satellite macro-model for the purpose of solar radiation pressure modeling, improved atmospheric density models, and the use of state-of-the-art gravity field models.
NASA Astrophysics Data System (ADS)
Shanmugam, Palanisamy; Varunan, Theenathayalan; Nagendra Jaiganesh, S. N.; Sahay, Arvind; Chauhan, Prakash
2016-06-01
Prediction of the curve of the absorption coefficient of colored dissolved organic matter (CDOM) and differentiation between marine and terrestrially derived CDOM pools in coastal environments are hampered by a high degree of variability in the composition and concentration of CDOM, uncertainties in retrieved remote sensing reflectance and the weak signal-to-noise ratio of space-borne instruments. In the present study, a hybrid model is presented along with empirical methods to remotely determine the amount and type of CDOM in coastal and inland water environments. A large set of in-situ data collected on several oceanographic cruises and field campaigns from different regional waters was used to develop empirical methods for studying the distribution and dynamics of CDOM, dissolved organic carbon (DOC) and salinity. Our validation analyses demonstrated that the hybrid model is a better descriptor of CDOM absorption spectra compared to the existing models. Additional spectral slope parameters included in the present model to differentiate between terrestrially derived and marine CDOM pools make a substantial improvement over the existing models. Empirical algorithms to derive CDOM, DOC and salinity from remote sensing reflectance data demonstrated success in retrieving these products, with low mean relative percent differences against a large set of in-situ measurements. The performance of these algorithms was further assessed using three hyperspectral HICO images acquired simultaneously with our field measurements in productive coastal and lagoon waters along the southeast coast of India. The validation match-ups of CDOM and salinity showed good agreement between HICO retrievals and field observations. Further analyses of these data showed significant temporal changes in CDOM and phytoplankton absorption coefficients with a distinct phase shift between these two products. Healthy phytoplankton cells and macrophytes were recognized to directly contribute to the autochthonous production of colored humic-like substances in variable amounts within the lagoon system, despite CDOM content being partly derived through river run-off and wetland discharges as well as from conservative mixing of different water masses. Spatial and temporal maps of CDOM, DOC and salinity products provided an interesting insight into CDOM dynamics and conservative behavior within the lagoon and its extension in the coastal and offshore waters of the Bay of Bengal. The hybrid model and empirical algorithms presented here can be useful to assess CDOM, DOC and salinity fields and their changes in response to increasing nutrient pollution runoff, anthropogenic activities, hydrographic variations and climate oscillations.
The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model
Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim
2013-01-01
There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258
ERIC Educational Resources Information Center
Corbin, J. Hope; Chu, Marilyn; Carney, Joanne; Donnelly, Susan; Clancy, Andrea
2017-01-01
School-university partnerships are widely promoted yet little is known about what contributes to their effectiveness. This paper presents a participatory formative evaluation of a state-funded school-university partnership. The study employed an empirically derived systems model--the Bergen Model of Collaborative Functioning (BMCF)--as the…
Evaluating high temporal and spatial resolution vegetation index for crop yield prediction
USDA-ARS?s Scientific Manuscript database
Remote sensing data have been widely used in estimating crop yield. Remote sensing derived parameters such as Vegetation Index (VI) were used either directly in building empirical models or by assimilating with crop growth models to predict crop yield. The abilities of remote sensing VI in crop yiel...
Constrained range expansion and climate change assessments
Yohay Carmel; Curtis H. Flather
2006-01-01
Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...
Very empirical treatment of solvation and entropy: a force field derived from Log Po/w
NASA Astrophysics Data System (ADS)
Kellogg, Glen Eugene; Burnett, James C.; Abraham, Donald J.
2001-04-01
A non-covalent interaction force field model derived from the 1-octanol/water partition coefficient is described. This model, HINT for Hydropathic INTeractions, is shown to include, in very empirical and approximate terms, all components of biomolecular associations, including hydrogen bonding, Coulombic interactions, hydrophobic interactions, entropy and solvation/desolvation. Particular emphasis is placed on: (1) demonstrating the relationship between the total empirical HINT score and the free energy of association, ΔG_interaction; (2) showing that the HINT hydrophobic-polar interaction sub-score represents the energy cost of desolvation upon binding for interacting biomolecules; and (3) a new methodology for treating constrained water molecules as discrete independent small ligands. An example calculation is reported for dihydrofolate reductase (DHFR) bound with methotrexate (MTX). In that case the observed very tight binding, ΔG_interaction ≤ −13.6 kcal mol⁻¹, is largely due to ten hydrogen bonds between the ligand and enzyme with estimated strengths ranging between −0.4 and −2.3 kcal mol⁻¹. Four water molecules bridging between DHFR and MTX contribute an additional −1.7 kcal mol⁻¹ of stability to the complex. The HINT estimate of the cost of desolvation is +13.9 kcal mol⁻¹.
NASA Astrophysics Data System (ADS)
Ghysels, M.; Mondelain, D.; Kassi, S.; Nikitin, A. V.; Rey, M.; Campargue, A.
2018-07-01
The methane absorption spectrum is studied at 297 K and 80 K in the center of the Tetradecad between 5695 and 5850 cm⁻¹. The spectra are recorded by differential absorption spectroscopy (DAS) with a noise-equivalent absorption of about α_min ≈ 1.5 × 10⁻⁷ cm⁻¹. Two empirical line lists are constructed, including about 4000 and 2300 lines at 297 K and 80 K, respectively. Lines due to ¹³CH₄ present in natural abundance were identified by comparison with a spectrum of pure ¹³CH₄ recorded under the same temperature conditions. About 1700 empirical values of the lower state energy level, E_emp, were derived from the ratios of the line intensities at 80 K and 296 K. They provide accurate temperature dependence for most of the absorption in the region (93% and 82% at 80 K and 296 K, respectively). The quality of the derived empirical values is illustrated by the clear propensity of the corresponding lower state rotational quantum number, J_emp, to be close to integer values. Using an effective Hamiltonian model derived from a previously published ab initio potential energy surface, about 2060 lines are rovibrationally assigned, adding about 1660 new assignments to those provided in the HITRAN database for ¹²CH₄ in the region.
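A hedged sketch of the two-temperature method used to derive E_emp: the intensity ratio of a line at 80 K and 296 K fixes its lower state energy once the partition sums are known. Stimulated-emission and frequency factors are neglected for clarity, and the partition-sum values below are rough placeholders, not the values used in the paper.

```python
import numpy as np

# Two-temperature method: in the HITRAN convention,
# S(T) ~ exp(-c2 * E'' / T) / Q(T), so the 80 K / 296 K intensity ratio
# of a line fixes its lower state energy E'' (in cm^-1).
C2 = 1.4388  # second radiation constant hc/k_B, in cm*K

def lower_state_energy(s_cold, s_warm, t_cold=80.0, t_warm=296.0,
                       q_cold=83.0, q_warm=590.0):
    """E'' (cm^-1) from the intensity ratio S(t_cold)/S(t_warm)."""
    ratio = s_cold / s_warm
    return (np.log(q_warm / q_cold) - np.log(ratio)) / (
        C2 * (1.0 / t_cold - 1.0 / t_warm))

# A line twice as strong at 80 K as at 296 K sits near E'' ~ 97 cm^-1:
print(f"E_emp = {lower_state_energy(2.0e-23, 1.0e-23):.1f} cm^-1")
```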
NASA Technical Reports Server (NTRS)
Makel, Darby B.; Rosenberg, Sanders D.
1990-01-01
The formation and deposition of carbon (soot) were studied in the Carbon Deposition Model for Oxygen-Hydrocarbon Combustion Program. An empirical, 1-D model for predicting soot formation and deposition in LO2/hydrocarbon gas generators/preburners was derived. The experimental data required to anchor the model were identified, and a test program to obtain the data was defined. In support of the model development, cold flow mixing experiments using a high injection density injector were performed. The purpose of this investigation was to advance the state of the art in LO2/hydrocarbon gas generator design by developing a reliable engineering model of gas generator operation. The model was formulated to account for the influences of fluid dynamics, chemical kinetics, and gas generator hardware design on soot formation and deposition.
Suppression cost forecasts in advance of wildfire seasons
Jeffrey P. Prestemon; Karen Abt; Krista Gebert
2008-01-01
Approaches for forecasting wildfire suppression costs in advance of a wildfire season are demonstrated for two lead times: fall and spring of the current fiscal year (Oct. 1–Sept. 30). Model functional forms are derived from aggregate expressions of a least cost plus net value change model. Empirical estimates of these models are used to generate advance-of-season...
A model to predict stream water temperature across the conterminous USA
Catalina Segura; Peter Caldwell; Ge Sun; Steve McNulty; Yang Zhang
2014-01-01
Stream water temperature (t_s) is a critical water quality parameter for aquatic ecosystems. However, t_s records are sparse or nonexistent in many river systems. In this work, we present an empirical model to predict t_s at the site scale across the USA. The model, derived using data from 171 reference sites selected from the Geospatial Attributes of Gages for Evaluating...
ERIC Educational Resources Information Center
Skinner, Ellen A.; Chi, Una
2012-01-01
Building on self-determination theory, this study presents a model of intrinsic motivation and engagement as "active ingredients" in garden-based education. The model was used to create reliable and valid measures of key constructs, and to guide the empirical exploration of motivational processes in garden-based learning. Teacher- and…
Simulation studies of chemical erosion on carbon based materials at elevated temperatures
NASA Astrophysics Data System (ADS)
Kenmotsu, T.; Kawamura, T.; Li, Zhijie; Ono, T.; Yamamura, Y.
1999-06-01
We simulated the fluence dependence of the methane reaction yield in carbon under hydrogen bombardment using the ACAT-DIFFUSE code, a simulation code based on a Monte Carlo method with the binary collision approximation and on solving diffusion equations. Chemical reaction models for carbon have been studied by Roth and other researchers. Roth's model is suitable for the steady-state methane reaction, but it cannot estimate the fluence dependence of the methane reaction. We therefore derived an empirical formula, based on Roth's model, for the methane reaction. In this empirical formula we assumed a reaction region where chemical sputtering due to methane formation takes place; the reaction region corresponds to the peak range of the incident hydrogen distribution in the target material. We incorporated this empirical formula into the ACAT-DIFFUSE code. The simulation results indicate a fluence dependence similar to the experimental result, but the fluences required to reach steady state differ between experiment and simulation.
Near transferable phenomenological n-body potentials for noble metals
NASA Astrophysics Data System (ADS)
Pontikis, Vassilis; Baldinozzi, Gianguido; Luneville, Laurence; Simeone, David
2017-09-01
We present a semi-empirical model of cohesion in noble metals with suitable parameters reproducing a selected set of experimental properties of perfect and defective lattices in noble metals. It consists of two short-range, n-body terms accounting respectively for attractive and repulsive interactions, the former deriving from the second moment approximation of the tight-binding scheme and the latter from the gas approximation of the kinetic energy of electrons. The stability of the face centred cubic versus the hexagonal compact stacking is obtained via a long-range, pairwise function of customary use with ionic pseudo-potentials. Lattice dynamics, molecular statics, molecular dynamics and nudged elastic band calculations show that, unlike previous potentials, this cohesion model reproduces and predicts quite accurately thermodynamic properties in noble metals. In particular, computed surface energies, largely underestimated by existing empirical cohesion models, compare favourably with measured values, whereas predicted unstable stacking-fault energy profiles fit almost perfectly ab initio evaluations from the literature. Altogether, the results suggest that this semi-empirical model is nearly transferable.
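A minimal sketch of the second-moment (SMA) attractive term plus a pairwise repulsion, i.e., the generic tight-binding form the cohesion model above builds on; the parameters are generic placeholders, and the published potential additionally includes the long-range pair term and the electron-gas repulsion, which are not reproduced here.

```python
import numpy as np

# Generic second-moment tight-binding (SMA) cohesive energy:
# per-atom repulsion A*exp(-p*(r/r0 - 1)) summed over neighbors, minus the
# square root of the second-moment band term xi^2*exp(-2q*(r/r0 - 1)).
# All parameter values are placeholders, not a fitted noble-metal potential.
def sma_energy(positions, A=0.1, xi=1.3, p=10.0, q=3.0, r0=2.88):
    """Total energy (eV) of a cluster of atoms at the given positions (Angstrom)."""
    n = len(positions)
    e_rep = np.zeros(n)
    rho = np.zeros(n)  # second-moment band term per atom
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            e_rep[i] += A * np.exp(-p * (r / r0 - 1.0))
            rho[i] += xi**2 * np.exp(-2.0 * q * (r / r0 - 1.0))
    return float(np.sum(e_rep - np.sqrt(rho)))

# Example: a 4-atom fcc-like tetrahedral cluster with ~2.88 A spacing.
pos = np.array([[0, 0, 0], [2.04, 2.04, 0],
                [2.04, 0, 2.04], [0, 2.04, 2.04]], dtype=float)
print(f"E = {sma_energy(pos):.3f} eV")
```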
Electrochemical carbon dioxide concentrator: Math model
NASA Technical Reports Server (NTRS)
Marshall, R. D.; Schubert, F. H.; Carlson, J. N.
1973-01-01
A steady state computer simulation model of an Electrochemical Depolarized Carbon Dioxide Concentrator (EDC) has been developed. The mathematical model combines EDC heat and mass balance equations with empirical correlations derived from experimental data to describe EDC performance as a function of the operating parameters involved. The model is capable of accurately predicting performance over EDC operating ranges. Model simulation results agree with the experimental data obtained over the prediction range.
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1986-01-01
Directional ocean wave spectra were derived from Shuttle Imaging Radar (SIR-B) imagery in regions where nearly simultaneous aircraft-based measurements of the wave spectra were also available as part of the NASA Shuttle Mission 41G experiments. The SIR-B response to a coherently speckled scene is used to estimate the stationary system transfer function in the 15 even terms of an eighth-order two-dimensional polynomial. Surface elevation contours are assigned to SIR-B ocean scenes Fourier-filtered using an empirical model of the modulation transfer function calibrated with independent measurements of wave height. The empirical measurements of the wave height distribution are illustrated for a variety of sea states.
Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions
NASA Astrophysics Data System (ADS)
Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.
2002-02-01
Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the Ca II triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated Ca II strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted Ca II strengths are compared with those of previous works in the field.
Comparisons of thermospheric density data sets and models
NASA Astrophysics Data System (ADS)
Doornbos, Eelco; van Helleputte, Tom; Emmert, John; Drob, Douglas; Bowman, Bruce R.; Pilinski, Marcin
During the past decade, continuous long-term data sets of thermospheric density have become available to researchers. These data sets have been derived from accelerometer measurements made by the CHAMP and GRACE satellites and from Space Surveillance Network (SSN) tracking data and related Two-Line Element (TLE) sets. These data have already resulted in a large number of publications on physical interpretation and improvement of empirical density modelling. This study compares four different density data sets and two empirical density models for the period 2002-2009. These data sources are the CHAMP (1) and GRACE (2) accelerometer measurements, the long-term database of densities derived from TLE data (3), the High Accuracy Satellite Drag Model (4) run by Air Force Space Command, calibrated using SSN data, and the NRLMSISE-00 (5) and Jacchia-Bowman 2008 (6) empirical models. In describing these data sets and models, specific attention is given to differences in the geometrical and aerodynamic satellite modelling applied in the conversion from drag to density measurements, which are main sources of density biases. The differences in temporal and spatial resolution of the density data sources are also described and taken into account. With these aspects in mind, statistics of density comparisons have been computed, both as a function of solar and geomagnetic activity levels, and as a function of latitude and local solar time. These statistics give a detailed view of the relative accuracy of the different data sets and of the biases between them. The differences are analysed with the aim of providing rough error bars on the data and models and pinpointing issues which could receive attention in future iterations of data processing algorithms and in future model development.
Solar wind driven empirical forecast models of the time derivative of the ground magnetic field
NASA Astrophysics Data System (ADS)
Wintoft, Peter; Wik, Magnus; Viljanen, Ari
2015-03-01
Empirical models are developed to provide 10-30-min forecasts of the magnitude of the time derivative of the local horizontal ground geomagnetic field (|dBh/dt|) over Europe. The models are driven by ACE solar wind data. A major part of the work has been devoted to the search and selection of datasets to support the model development. To simplify the problem, but at the same time capture sudden changes, 30-min maximum values of |dBh/dt| are forecast with a cadence of 1 min. Models are tested both with and without the use of ACE SWEPAM plasma data. It is shown that the models generally capture sudden increases in |dBh/dt| that are associated with sudden impulses (SI). The SI is the dominant disturbance source for geomagnetic latitudes below 50° N, with only a minor contribution from substorms. However, on occasion, large disturbances can be seen in association with geomagnetic pulsations. For higher latitudes, longer-lasting disturbances associated with substorms are generally also captured. It is also shown that the models using only solar wind magnetic field data as input perform in most cases equally well as the models with plasma data. The models have been verified using different approaches, including the extremal dependence index, which is suitable for rare events.
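A minimal sketch of the forecast target defined above, the 30-min running maximum of |dBh/dt| from 1-min field samples; the input series below is synthetic, and a real model would map ACE solar wind data to this quantity.

```python
import numpy as np

# Forecast target: 30-min running maximum of |dBh/dt|, where dBh/dt is the
# 1-min time derivative of the horizontal field components.
def max_dbh_dt(bx, by, window=30):
    """bx, by: 1-min north/east field components (nT).
    Returns the running 30-min maximum of |dBh/dt| (nT/min)."""
    dbh_dt = np.hypot(np.diff(bx), np.diff(by))
    return np.array([dbh_dt[max(0, k - window + 1):k + 1].max()
                     for k in range(len(dbh_dt))])

t = np.arange(0, 180)  # 3 h of synthetic 1-min samples
bx = 20.0 * np.sin(2 * np.pi * t / 60.0) + np.random.default_rng(0).normal(0, 2, t.size)
by = 10.0 * np.cos(2 * np.pi * t / 90.0)
print(max_dbh_dt(bx, by)[:5])
```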
EFFECTIVE USE OF SEDIMENT QUALITY GUIDELINES: WHICH GUIDELINE IS RIGHT FOR ME?
A bewildering array of sediment quality guidelines have been developed, but fortunately they mostly fall into two families: empirically-derived and theoretically-derived. The empirically-derived guidelines use large data bases of concurrent sediment chemistry and biological effe...
Development and system identification of a light unmanned aircraft for flying qualities research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, M.E.; Andrisani, D. II
This paper describes the design, construction, flight testing, and system identification of a lightweight remotely piloted aircraft and its use in studying flying qualities in the longitudinal axis. The short-period approximation to the longitudinal dynamics of the aircraft was used. Parameters in this model were determined a priori using various empirical estimators and were then estimated from flight data using a maximum likelihood parameter identification method. A comparison of the parameter values revealed that the stability derivatives obtained from the empirical estimators were reasonably close to the flight test results; however, the control derivatives determined by the empirical estimators were too large by a factor of two. The aircraft was also flown to determine how the longitudinal flying qualities of lightweight remotely piloted aircraft compare to those of full-size manned aircraft. It was shown that lightweight remotely piloted aircraft require much faster short-period dynamics to achieve level I flying qualities in an up-and-away flight task.
NASA Astrophysics Data System (ADS)
Gontis, V.; Kononovicius, A.
2017-10-01
We address the problem of long-range memory in the financial markets. There are two conceptually different ways to reproduce power-law decay of the auto-correlation function: using fractional Brownian motion, or using non-linear stochastic differential equations. In this contribution we address this problem by analyzing empirical return and trading activity time series from the Forex. From the empirical time series we obtain probability density functions of burst and inter-burst duration. Our analysis reveals that the power-law exponents of the obtained probability density functions are close to 3/2, which is a characteristic feature of one-dimensional stochastic processes. This is in good agreement with the earlier proposed model of absolute return based on non-linear stochastic differential equations derived from the agent-based herding model.
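A hedged sketch of the burst-duration analysis: thresholding a series and collecting the durations of excursions above the threshold; the series here is synthetic noise rather than Forex data.

```python
import numpy as np

# Burst durations: lengths of consecutive runs above a threshold. For
# one-dimensional stochastic processes their PDF decays roughly as
# duration^(-3/2), the exponent discussed above.
def excursion_durations(x, threshold):
    above = x > threshold
    durations, run = [], 0
    for flag in above:
        if flag:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return np.array(durations)

rng = np.random.default_rng(1)
series = np.abs(rng.standard_normal(100_000))   # stand-in for |returns|
bursts = excursion_durations(series, threshold=1.0)
counts = np.bincount(bursts)[1:]                # empirical PDF of durations
print(counts[:10] / counts.sum())
```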
NASA Technical Reports Server (NTRS)
Berman, A. L.
1976-01-01
In the last two decades, increasingly sophisticated deep space missions have placed correspondingly stringent requirements on navigational accuracy. As part of the effort to increase navigational accuracy, and hence the quality of radiometric data, much effort has been expended in an attempt to understand and compute the tropospheric effect on range (and hence range rate) data. The general approach adopted has been that of computing a zenith range refraction, and then mapping this refraction to any arbitrary elevation angle via an empirically derived function of elevation. The prediction of zenith range refraction derived from surface measurements of meteorological parameters is presented. Refractivity is separated into wet (water vapor pressure) and dry (atmospheric pressure) components. The integration of dry refractivity is shown to be exact. Attempts to integrate wet refractivity directly prove ineffective; however, several empirical models developed by the author and other researchers at JPL are discussed. The best current wet refraction model is here considered to be a separate day/night model, which is proportional to surface water vapor pressure and inversely proportional to surface temperature. Methods are suggested that might improve the accuracy of the wet range refraction model.
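As a hedged illustration of the zenith-refraction-plus-mapping structure described above (not the JPL model itself): the dry zenith range effect is computed from surface pressure using the commonly quoted Saastamoinen-type constant, and a simple 1/sin(E) mapping stands in for the empirically derived elevation function.

```python
import numpy as np

# Dry zenith range effect from surface pressure, mapped to elevation.
# The 0.0022768 m/hPa factor is the widely used Saastamoinen-type value,
# and the 1/sin(E) mapping is the simplest possible choice; neither is
# the empirical JPL model discussed in the abstract.
def dry_zenith_range_m(pressure_hpa):
    return 0.0022768 * pressure_hpa           # meters

def mapped_refraction_m(pressure_hpa, elevation_deg):
    return dry_zenith_range_m(pressure_hpa) / np.sin(np.radians(elevation_deg))

print(f"{mapped_refraction_m(1013.25, 20.0):.3f} m at 20 deg elevation")
```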
Validation of a new plasmapause model derived from CHAMP field-aligned current signatures
NASA Astrophysics Data System (ADS)
Heilig, Balázs; Darrouzet, Fabien; Vellante, Massimo; Lichtenberger, János; Lühr, Hermann
2014-05-01
Recently a new model for the plasmapause location in the equatorial plane was introduced, based on magnetic field observations made by the CHAMP satellite in the topside ionosphere (Heilig and Lühr, 2013). The related signals are medium-scale field-aligned currents (MSFAC, with scale sizes of some 10 km). An empirical model for the MSFAC boundary was developed as a function of Kp and MLT. The MSFAC model was then compared to in situ plasmapause observations from IMAGE RPI. By correcting for the systematic displacement found in this comparison, and by taking into account the diurnal variation and Kp dependence of the residuals, an empirical model of the plasmapause location based on MSFAC measurements from CHAMP was constructed. As a first step toward validation of the new plasmapause model we used in-situ (Van Allen Probes/EMFISIS, Cluster/WHISPER) and ground-based (EMMA) plasma density observations. Preliminary results show a good agreement in general between the model and observations. Some observed differences stem from the different definitions of the plasmapause. A more detailed validation of the method can take place as soon as Swarm and VAP data become available. Heilig, B., and H. Lühr (2013), New plasmapause model derived from CHAMP field-aligned current signatures, Ann. Geophys., 31, 529-539, doi:10.5194/angeo-31-529-2013.
Factors Affecting the Effectiveness and Use of Moodle: Students' Perception
ERIC Educational Resources Information Center
Damnjanovic, Vesna; Jednak, Sandra; Mijatovic, Ivana
2015-01-01
The purpose of this research paper is to identify the factors affecting the effectiveness of Moodle from the students' perspective. The research hypotheses derived from the suggested extended Seddon model have been empirically validated using the responses to a survey on e-learning usage among 255 users. We tested the model across higher education…
The Supply and Demand for College Educated Labor.
ERIC Educational Resources Information Center
Nollen, Stanley D.
In this study a model for the supply of college educated labor is developed from human capital theory. A demand model is added, derived from neoclassical production function theory. Empirical estimates are made for white males and white females, using cross-sectional data on states of the U.S., 1960-70. In human capital theory, education is an…
ERIC Educational Resources Information Center
Miller, Joshua D.; Lynam, Donald R.
2008-01-01
Assessment of the "Diagnostic and Statistical Manual of Mental Disorders" (4th Ed.; "DSM-IV") personality disorders (PDs) using five-factor model (FFM) prototypes and counts has shown substantial promise, with a few exceptions. Miller, Reynolds, and Pilkonis suggested that the expert-generated FFM dependent prototype might be misspecified in…
Wildfire Ignitions: A Review of the Science and Recommendations for Empirical Modeling
Jeffrey P. Prestemon; Todd J. Hawbaker; Michael Bowden; John Carpenter; Maureen T. Brooks; Karen L. Abt; Ronda Sutphen; Samuel Scranton
2013-01-01
Deriving from original work under the National Cohesive Wildland Fire Management Strategy completed in 2011, this report summarizes the state of knowledge regarding the underlying causes and the role of wildfire prevention efforts on all major categories of wildfires, including findings from research that have sought to model wildfire occurrences over fine and broad...
An economic analysis of harvest behavior: integrating forest and ownership characteristics
Donald F. Dennis
1989-01-01
This study provides insight into the determinants of timber supply from private forests through development of both theoretical and empirical models of harvest behavior. A microeconomic model encompasses the multiple objective nature of private ownership by examining the harvest decision for landowners who derive utility from forest amenities and from income used for...
Jeffrey P. Prestemon
2009-01-01
Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
ERIC Educational Resources Information Center
Collazo, Andrés A.
2018-01-01
A model derived from the theory of planned behavior was empirically assessed for understanding faculty intention to use student ratings for teaching improvement. A sample of 175 professors participated in the study. The model was statistically significant and had a very large explanatory power. Instrumental attitude, affective attitude, perceived…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Hsi, W; Zhao, J
2016-06-15
Purpose: The Gaussian model for the lateral profiles in air is crucial for an accurate treatment planning system. The field-size dependence of dose and the lateral beam profiles of scanning proton and carbon-ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and to secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field-size dependence of dose at the field center in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of the lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%-80% lateral penumbras predicted by the double Gaussian model for protons and the single Gaussian model for carbon, using error functions, agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field-size dependence of dose for the empirical model in air is at most 0.74% for protons with the double Gaussian, and 0.57% for carbon with the single Gaussian. Conclusion: We have demonstrated that a double Gaussian model of the lateral beam profiles is significantly better than a single Gaussian model for protons, while a single Gaussian model is sufficient for carbon. The empirical equation may be used to double-check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot-scanning proton and carbon-ion beams cannot be directly used for irregularly shaped patient fields, but can provide reference values for clinical use and quality assurance.
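A minimal sketch of the two lateral-profile models compared above, a single Gaussian (sufficient for carbon) and a double Gaussian with a narrow core plus wide halo (needed for protons); the sigma and weight values are illustrative, not fitted beam data.

```python
import numpy as np

# Normalized 2D lateral profiles as a function of off-axis distance r:
# single Gaussian, and a weighted sum of a core and a halo Gaussian.
def gauss(r, sigma):
    return np.exp(-r**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def double_gauss(r, s1, s2, w):
    return (1 - w) * gauss(r, s1) + w * gauss(r, s2)

r = np.linspace(0, 50, 6)                  # mm off-axis
print(gauss(r, 4.0))                       # single Gaussian (carbon-like)
print(double_gauss(r, 4.0, 12.0, 0.08))    # narrow core + wide halo (proton-like)
```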
NASA Astrophysics Data System (ADS)
Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge
2018-04-01
Having great impacts on human lives, global warming and the associated sea level rise are believed to be strongly linked to anthropogenic causes. A statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on the empirical dynamic control system, taking into account climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historical data from 1880 to 2001, we obtained higher correlations than those from other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust, as it notably diminishes the instability associated with varying initial values. These results suggest that the model not only enhances significantly the global mean reconstructions of temperature and sea level but also may have the potential to improve future projections.
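A hedged sketch of the Monte Carlo cross-validation used to derive parameters: repeated random train/test splits, with the average test error used to score a candidate parameterization. A plain linear trend stands in for the empirical dynamic control system, which is not reproduced here.

```python
import numpy as np

# Monte Carlo cross-validation: repeatedly split the record at random,
# fit on the training part, and score on the held-out part.
def mc_cross_validate(x, y, n_trials=200, test_frac=0.3, seed=0):
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_trials):
        idx = rng.permutation(x.size)
        n_test = int(test_frac * x.size)
        test, train = idx[:n_test], idx[n_test:]
        coef = np.polyfit(x[train], y[train], 1)   # stand-in model
        errors.append(np.sqrt(np.mean((np.polyval(coef, x[test]) - y[test])**2)))
    return float(np.mean(errors))

x = np.linspace(1880, 2001, 122)                   # year
y = 0.0017 * (x - 1880) + np.random.default_rng(1).normal(0, 0.01, x.size)
print(f"mean RMSE over random splits: {mc_cross_validate(x, y):.4f} m")
```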
Tsunami probability in the Caribbean Region
Parsons, T.; Geist, E.L.
2008-01-01
We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerically modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas the numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0 to 30% regionally. © Birkhäuser 2008.
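A minimal sketch of the Poissonian runup-probability computation per coastal cell; the rates below are invented placeholders.

```python
import numpy as np

# Poisson model: probability of at least one runup event (> 0.5 m) in an
# exposure window T, given a mean per-cell rate (events/yr).
def runup_probability(rate_per_yr, t_years=30.0):
    return 1.0 - np.exp(-rate_per_yr * t_years)

rates = np.array([0.0, 1e-4, 1e-3, 1e-2])   # illustrative per-cell rates
print(runup_probability(rates))             # 0 to ~26% over 30 yr
```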
Li, Ji; Gray, B.R.; Bates, D.M.
2008-01-01
Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions for variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multilevel logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of methods for estimating VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPC in scientific data analysis.
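For context, one of the four Goldstein definitions (the latent-variable formulation) is commonly written as follows; this is illustrative notation, not necessarily the exact formula derived in the paper.

```latex
% Latent-variable VPC for a two-level logistic model: level-2 variance
% over total variance, with the level-1 residual fixed at the standard
% logistic variance pi^2/3.
\[
\mathrm{VPC} \;=\; \frac{\sigma_{u}^{2}}{\sigma_{u}^{2} + \pi^{2}/3}
\]
```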
Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures
NASA Astrophysics Data System (ADS)
Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav
2017-07-01
The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model widely used in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10% mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of the data, and the ability to change parameters based on building practices across Italy.
V and V Efforts of Auroral Precipitation Models: Preliminary Results
NASA Technical Reports Server (NTRS)
Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael
2011-01-01
Auroral precipitation models have been valuable both for space weather applications and for space science research. Yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, and those derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts for some of these models.
Empirical relations between large wood transport and catchment characteristics
NASA Astrophysics Data System (ADS)
Steeb, Nicolas; Rickenmann, Dieter; Rickli, Christian; Badoux, Alexandre
2017-04-01
The transport of vast amounts of large wood (LW) in water courses can considerably aggravate hazardous situations during flood events, and often strongly affects the resulting flood damage. Large wood recruitment and transport are controlled by various factors that are difficult to assess, making the prediction of transported LW volumes difficult. Such information is, however, important for engineers and river managers to adequately dimension retention structures or to identify critical stream cross-sections. In this context, empirical formulas have been developed to estimate the volume of transported LW during a flood event (Rickenmann, 1997; Steeb et al., 2017). The data base of existing empirical wood load equations is, however, limited. The objective of the present study is to test and refine existing empirical equations, and to derive new relationships to reveal trends in wood loading. Data have been collected for flood events with LW occurrence in Swiss catchments of various sizes. This extended data set allows us to derive statistically more significant results. LW volumes were found to be related to catchment and transport characteristics, such as catchment size, forested area, forested stream length, water discharge, sediment load, or Melton ratio. Both the potential wood load and the fraction that is effectively mobilized during a flood event (the effective wood load) are estimated. The difference between potential and effective wood load allows us to derive typical reduction coefficients that can be used to refine spatially explicit GIS models for potential LW recruitment.
The Small World of Psychopathology
Borsboom, Denny; Cramer, Angélique O. J.; Schmittmann, Verena D.; Epskamp, Sacha; Waldorp, Lourens J.
2011-01-01
Background Mental disorders are highly comorbid: people having one disorder are likely to have another as well. We explain empirical comorbidity patterns based on a network model of psychiatric symptoms, derived from an analysis of symptom overlap in the Diagnostic and Statistical Manual of Mental Disorders-IV (DSM-IV). Principal Findings We show that a) half of the symptoms in the DSM-IV network are connected, b) the architecture of these connections conforms to a small world structure, featuring a high degree of clustering but a short average path length, and c) distances between disorders in this structure predict empirical comorbidity rates. Network simulations of Major Depressive Episode and Generalized Anxiety Disorder show that the model faithfully reproduces empirical population statistics for these disorders. Conclusions In the network model, mental disorders are inherently complex. This explains the limited successes of genetic, neuroscientific, and etiological approaches to unravel their causes. We outline a psychosystems approach to investigate the structure and dynamics of mental disorders. PMID:22114671
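A minimal sketch of the small-world diagnostics reported above, clustering and average shortest path length, computed here with networkx on a toy small-world graph standing in for the DSM-IV symptom network, which is not reproduced.

```python
import networkx as nx

# Small-world diagnostics: high clustering together with a short average
# path length. A connected Watts-Strogatz graph serves as the toy network.
g = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)
print(f"clustering C = {nx.average_clustering(g):.3f}")
print(f"avg path length L = {nx.average_shortest_path_length(g):.2f}")
```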
NASA Astrophysics Data System (ADS)
West, Damien; West, Bruce J.
2012-07-01
There are a substantial number of empirical relations that began with the identification of a pattern in data; were shown to have a terse power-law description; were interpreted using existing theory; reached the level of "law" and were given a name; only to subsequently fade away when it proved impossible to connect the "law" with a larger body of theory and/or data. Various forms of allometry relations (ARs) have followed this path. The ARs in biology are nearly two hundred years old, and those in ecology, geophysics, physiology and other areas of investigation are not that much younger. In general, if X is a measure of the size of a complex host network and Y is a property of a complex subnetwork embedded within the host network, a theoretical AR exists between the two when Y = aX^b. We emphasize that the reductionistic models of AR interpret X and Y as dynamic variables, albeit the ARs themselves are explicitly time independent, even though in some cases the parameter values change over time. On the other hand, the phenomenological models of AR are based on the statistical analysis of data and interpret X and Y as averages to yield the empirical AR: ⟨Y⟩ = a⟨X⟩^b.
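A minimal sketch of how the empirical AR is typically fitted, by least squares in log-log space; the data are synthetic.

```python
import numpy as np

# Fit <Y> = a <X>^b by regressing log(y) on log(x): the slope is the
# allometry exponent b and the intercept gives log(a).
rng = np.random.default_rng(7)
x = np.logspace(0, 3, 50)                             # host-network size
y = 2.5 * x**0.75 * rng.lognormal(0.0, 0.1, x.size)   # subnetwork property

b, log_a = np.polyfit(np.log(x), np.log(y), 1)
print(f"a = {np.exp(log_a):.2f}, b = {b:.3f}")        # recovers a ~ 2.5, b ~ 0.75
```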
A review of depolarization modeling for earth-space radio paths at frequencies above 10 GHz
NASA Technical Reports Server (NTRS)
Bostian, C. W.; Stutzman, W. L.; Gaines, J. M.
1982-01-01
A review is presented of models for the depolarization, caused by scattering from raindrops and ice crystals, that limits the performance of dual-polarized satellite communication systems at frequencies above 10 GHz. The physical mechanisms of depolarization as well as theoretical formulations and empirical data are examined. Three theoretical models, the transmission, attenuation-derived, and scaling models, are described and their relative merits are considered.
David Hulse; Allan Branscomb; Chris Enright; Bart Johnson; Cody Evers; John Bolte; Alan Ager
2016-01-01
This article offers a literature-supported conception and empirically grounded analysis of surprise by exploring the capacity of scenario-driven, agent-based simulation models to better anticipate it. Building on literature-derived definitions and typologies of surprise, and using results from a modeled 81,000 ha study area in a wildland-urban interface of western...
A methodology for reduced order modeling and calibration of the upper atmosphere
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; Linares, Richard
2017-10-01
Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized as either empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. In addition, calibration with accurate data is required for either type of model. This paper presents a new methodology, based on proper orthogonal decomposition, for developing a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is designed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
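As a hedged sketch of the core idea, proper orthogonal decomposition of a snapshot matrix reduces a high-dimensional field to a few modes via the singular value decomposition; the snapshot matrix below is random, whereas a real application would use gridded NRLMSISE-00 output:

```python
# Proper orthogonal decomposition of a snapshot matrix via the SVD.
import numpy as np

snapshots = np.random.rand(500, 24)     # 500 grid points x 24 hourly snapshots
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

# Retain the smallest number of modes capturing 99% of the variance.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
coeffs = U[:, :r].T @ (snapshots - mean)   # reduced-order representation
print(f"{r} modes retain 99% of the variance")
```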
Why are you telling me that? A conceptual model of the social function of autobiographical memory.
Alea, Nicole; Bluck, Susan
2003-03-01
In an effort to stimulate and guide empirical work within a functional framework, this paper provides a conceptual model of the social functions of autobiographical memory (AM) across the lifespan. The model delineates the processes and variables involved when AMs are shared to serve social functions. Components of the model include: lifespan contextual influences, the qualitative characteristics of memory (emotionality and level of detail recalled), the speaker's characteristics (age, gender, and personality), the familiarity and similarity of the listener to the speaker, the level of responsiveness during the memory-sharing process, and the nature of the social relationship in which the memory sharing occurs (valence and length of the relationship). These components are shown to influence the type of social function served and/or the extent to which social functions are served. Directions for future empirical work to substantiate the model and hypotheses derived from the model are provided.
Influences on the use of capital by public hospitals.
Anderson, D
1994-01-01
This paper examines key influences on the volume of capital employed by public hospitals. Empirical models are constructed and analysed separately for total capital employed and for plant and equipment only, using data from 68 Victorian hospitals. Such data provide an empirical base to guide government decisions on funding capital expenditure in hospitals. The analysis finds that the proportion of hospital expenditure devoted to outpatients and teaching, and the proportion of funding derived from government all influence the level of capital utilised per inpatient. The model provided a reasonable fit for plant and equipment, but much improved data coverage and consistent valuation of land and buildings are required to adequately explain influences on total capital.
Empirical Constraints on Proton and Electron Heating in the Fast Solar Wind
NASA Technical Reports Server (NTRS)
Cranmer, Steven R.; Matthaeus, William H.; Breech, Benjamin A.; Kasper, Justin C.
2009-01-01
This paper presents analyses of measured proton and electron temperatures in the high-speed solar wind that are used to calculate the separate rates of heat deposition for protons and electrons. It was found that the protons receive about 60% of the total plasma heating in the inner heliosphere, and that this fraction increases to approximately 80% by the orbit of Jupiter. The empirically derived partitioning of heat between protons and electrons is in rough agreement with theoretical predictions from a model of linear Vlasov wave damping. For a modeled power spectrum consisting only of Alfvenic fluctuations, the best agreement was found for a distribution of wavenumber vectors that evolves toward isotropy as distance increases.
Box-wing model approach for solar radiation pressure modelling in a multi-GNSS scenario
NASA Astrophysics Data System (ADS)
Tobias, Guillermo; Jesús García, Adrián
2016-04-01
The solar radiation pressure force is the largest orbital perturbation after the gravitational effects and the major error source affecting GNSS satellites. A wide range of approaches have been developed over the years for modelling this non-gravitational effect as part of the orbit determination process. These approaches are commonly divided into empirical, semi-analytical and analytical, where their main difference lies in the amount of a-priori physical information about the properties of the satellites (materials and geometry) and their attitude. It has been shown in the past that pre-launch analytical models fail to achieve the desired accuracy, mainly due to difficulties in extrapolating the in-orbit optical and thermal properties, perturbations in the nominal attitude law, and aging of the satellites' surfaces, whereas the accuracy of empirical models strongly depends on the amount of tracking data used to derive them, and their performance degrades as the area-to-mass ratio of the GNSS satellites increases, as is the case for upcoming constellations such as BeiDou and Galileo. This paper proposes a basic box-wing model for Galileo, complemented with empirical parameters and based on the limited available information about the Galileo satellites' geometry. The satellite is modelled as a box, representing the satellite bus, and a wing, representing the solar panel. The performance of the model is assessed for the GPS, GLONASS and Galileo constellations. The results of the proposed approach have been analyzed over a one-year period. In order to assess the results, two different SRP models have been used: first, the proposed box-wing model and, second, the new CODE empirical model, ECOM2. The orbit performances of both models are assessed using Satellite Laser Ranging (SLR) measurements, together with an evaluation of orbit prediction accuracy. This comparison shows the advantages and disadvantages of taking the physical interactions between the satellite and solar radiation into account in an empirical model, relative to a purely empirical model.
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Summary: Determining thawing times of frozen foods is a challenging problem because the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper, analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for estimating the thawing time of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or the development of new ones, that will enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387
A robust empirical seasonal prediction of winter NAO and surface climate.
Wang, L; Ting, M; Kushner, P J
2017-03-21
A key determinant of winter weather and climate in Europe and North America is the North Atlantic Oscillation (NAO), the dominant mode of atmospheric variability in the Atlantic domain. Skilful seasonal forecasting of the surface climate in both Europe and North America is reflected largely in how accurately models can predict the NAO. Most dynamical models, however, have limited skill in seasonal forecasts of the winter NAO. A new empirical model is proposed for the seasonal forecast of the winter NAO that exhibits higher skill than current dynamical models. The empirical model provides robust and skilful prediction of the December-January-February (DJF) mean NAO index using a multiple linear regression (MLR) technique with autumn conditions of sea-ice concentration, stratospheric circulation, and sea-surface temperature. The predictability is, for the most part, derived from the relatively long persistence of sea ice in the autumn. The lower stratospheric circulation and sea-surface temperature appear to play more indirect roles through a series of feedbacks among systems driving NAO evolution. This MLR model also provides skilful seasonal outlooks of winter surface temperature and precipitation over many regions of Eurasia and eastern North America.
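A minimal sketch of this kind of MLR forecast, assuming three standardized autumn predictors (sea ice, stratospheric circulation, SST) and using ordinary least squares on synthetic stand-in data rather than the paper's observational indices:

```python
# DJF NAO index regressed on three autumn predictors (synthetic stand-ins).
import numpy as np

rng = np.random.default_rng(1)
n_years = 35
X = rng.standard_normal((n_years, 3))      # [sea_ice, strat_circ, sst] anomalies
nao = X @ np.array([0.6, 0.3, 0.2]) + 0.4 * rng.standard_normal(n_years)

A = np.c_[X, np.ones(n_years)]             # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, nao, rcond=None)
print("correlation skill:", np.corrcoef(A @ coef, nao)[0, 1])
```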
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which captures the minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
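The smoothing step can be sketched as follows: fit each retrieved spectrum with a cubic smoothing spline, average the smoothed-to-raw ratios over the scene to form a common gain curve, and apply that gain to every spectrum. All spectra below are synthetic, and the spline settings are illustrative rather than the authors' choices:

```python
# Common gain curve from cubic-spline smoothing of synthetic scene spectra.
import numpy as np
from scipy.interpolate import UnivariateSpline

wavelengths = np.linspace(400, 2400, 200)           # nm
scene = [0.3 + np.abs(np.sin(wavelengths / 300.0))
         + 0.02 * np.random.randn(200) for _ in range(50)]

gains = []
for spectrum in scene:
    smooth = UnivariateSpline(wavelengths, spectrum, k=3, s=0.5)(wavelengths)
    gains.append(smooth / spectrum)
gain_curve = np.mean(gains, axis=0)                 # one gain for the scene

smoothed_scene = [spectrum * gain_curve for spectrum in scene]
```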
Ogburn, Sarah E.; Calder, Eliza S
2017-01-01
High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. VolcFlow is also able to replicate flow runout well, but does not capture the lateral spreading in distal regions of larger-volume flows. Both models are better at reproducing the inundated area of single-pulse, valley-confined, smaller-volume flows than sustained, highly unsteady, larger-volume flows, which are often partially unchannelized. The simple rheological models of TITAN2D and VolcFlow are not able to recreate all features of these more complex flows. LAHARZ is fast to run and can give a rough approximation of inundation, but may not be appropriate for all PDCs and the designation of starting locations is difficult. The ΔH/L cone model is also very quick to run and gives reasonable approximations of runout distance, but does not inherently model flow channelization or directionality and thus unrealistically covers all interfluves. Empirically-based models like LAHARZ and ΔH/L cones can be quick, first-approximations of flow runout, provided a database of similar flows, e.g., FlowDat, is available to properly calculate coefficients or ΔH/L. For hazard assessment purposes, geophysical models like TITAN2D and VolcFlow can be useful for producing both scenario-based or probabilistic hazard maps, but must be run many times with varying input parameters. LAHARZ and ΔH/L cones can be used to produce simple modeling-based hazard maps when run with a variety of input volumes, but do not explicitly consider the probability of occurrence of different volumes. For forward modeling purposes, the ability to derive potential input parameters from global or local databases is crucial, though important input parameters for VolcFlow cannot be empirically estimated. Not only does this work provide a useful comparison of the operational aspects and behavior of various models for hazard assessment, but it also enriches conceptual understanding of the dynamics of the PDCs themselves.
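Of the models compared here, the ΔH/L (energy cone) estimate is simple enough to state in a few lines: the flow is assumed to stop where the height drop divided by the distance travelled falls to an empirical mobility ratio. A toy sketch with illustrative numbers, not values from this study:

```python
# Energy cone runout: flow stops where (height drop)/(distance) = H/L.
def energy_cone_runout(collapse_height_m, h_over_l):
    """Horizontal runout implied by an energy cone with mobility ratio H/L."""
    return collapse_height_m / h_over_l

print(energy_cone_runout(500.0, 0.2))   # 2500 m for a 500 m drop and H/L = 0.2
```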
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cristaldi, Alice; Ermolli, Ilaria, E-mail: alice.cristaldi@oaroma.inaf.it
Present-day semi-empirical models of solar irradiance (SI) variations reconstruct SI changes measured on timescales greater than a day by using spectra computed in one-dimensional atmosphere models (1D models), which are representative of various solar surface features. Various recent studies have pointed out, however, that the spectra synthesized in 1D models do not reflect the radiative emission of the inhomogeneous atmosphere revealed by high-resolution solar observations. We aimed to derive observation-based atmospheres from such observations and test their accuracy for SI estimates. We analyzed spectropolarimetric data of the Fe i 630 nm line pair in photospheric regions that are representative of the granular quiet-Sun pattern (QS) and of small- and large-scale magnetic features, both bright and dark with respect to the QS. The data were taken on 2011 August 6, with the CRisp Imaging Spectropolarimeter at the Swedish Solar Telescope, under excellent seeing conditions. We derived atmosphere models of the observed regions from data inversion with the SIR code. We studied the sensitivity of results to spatial resolution and temporal evolution, and discuss the obtained atmospheres with respect to several 1D models. The atmospheres derived from our study agree well with most of the 1D models we compare our results with, both qualitatively and quantitatively (within 10%), except for pore regions. Spectral synthesis computations of the atmosphere obtained from the QS observations return an SI between 400 and 2400 nm that agrees, on average, within 2.2% with standard reference measurements, and within −0.14% with the SI computed on the QS atmosphere employed by the most advanced semi-empirical model of SI variations.
Burgess, Adrian P
2012-01-01
Although event-related potentials (ERPs) are widely used to study sensory, perceptual and cognitive processes, it remains unknown whether they are phase-locked signals superimposed upon the ongoing electroencephalogram (EEG) or result from phase-alignment of the EEG. Previous attempts to discriminate between these hypotheses have been unsuccessful but here a new test is presented based on the prediction that ERPs generated by phase-alignment will be associated with event-related changes in frequency whereas evoked-ERPs will not. Using empirical mode decomposition (EMD), which allows measurement of narrow-band changes in the EEG without predefining frequency bands, evidence was found for transient frequency slowing in recognition memory ERPs but not in simulated data derived from the evoked model. Furthermore, the timing of phase-alignment was frequency dependent with the earliest alignment occurring at high frequencies. Based on these findings, the Firefly model was developed, which proposes that both evoked and induced power changes derive from frequency-dependent phase-alignment of the ongoing EEG. Simulated data derived from the Firefly model provided a close match with empirical data and the model was able to account for i) the shape and timing of ERPs at different scalp sites, ii) the event-related desynchronization in alpha and synchronization in theta, and iii) changes in the power density spectrum from the pre-stimulus baseline to the post-stimulus period. The Firefly Model, therefore, provides not only a unifying account of event-related changes in the EEG but also a possible mechanism for cross-frequency information processing.
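A minimal sketch of the EMD step described above, using the PyEMD package as one possible implementation (an assumption; the authors' tooling is not specified) to decompose a synthetic two-tone signal into intrinsic mode functions without predefining frequency bands:

```python
# Empirical mode decomposition of a synthetic two-tone signal with PyEMD.
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 2, 1000)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)

imfs = EMD().emd(signal, t)   # intrinsic mode functions, highest frequency first
print("number of IMFs:", len(imfs))
```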
Maximum Entropy for the International Division of Labor.
Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang
2015-01-01
As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
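The maximum entropy result has a compact closed form: maximizing the entropy of export shares under a fixed expected product complexity yields a Boltzmann-like distribution, share_i proportional to exp(-beta * c_i), with a single tunable parameter beta per country. A toy sketch with invented complexity values:

```python
# Maximum entropy shares under a complexity constraint: a softmax in beta.
import numpy as np

complexity = np.array([0.2, 0.5, 1.0, 1.5, 3.0])   # invented product complexities

def export_shares(beta):
    w = np.exp(-beta * complexity)
    return w / w.sum()

for beta in (0.5, 2.0):
    print(f"beta={beta}: shares={np.round(export_shares(beta), 3)}")
```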
Study on the leakage flow through a clearance gap between two stationary walls
NASA Astrophysics Data System (ADS)
Zhao, W.; Billdal, J. T.; Nielsen, T. K.; Brekke, H.
2012-11-01
In the present paper, the leakage flow in the clearance gap between stationary walls was studied experimentally, theoretically and numerically by computational fluid dynamics (CFD) in order to find the relationship between leakage flow, pressure difference and clearance gap. The experimental set-up of the clearance gap between two stationary walls is a simplification of the gap between the guide vane faces and facing plates in Francis turbines. This model was built in the Waterpower laboratory at the Norwegian University of Science and Technology (NTNU). An empirical formula for calculating the leakage flow rate between the two stationary walls was derived from the experimental study. The experimental model was simulated by computational fluid dynamics employing the ANSYS CFX commercial software in order to study the flow structure. Both the numerical simulation results and the empirical formula results are in good agreement with the experimental results. The accuracy of the empirical formula is verified by the experimental data, and the formula has proven very useful for quickly predicting the leakage flow rate in the guide vanes of hydraulic turbines.
NASA Technical Reports Server (NTRS)
Goldhirsh, Julius; Krichevsky, Vladimir; Gebo, Norman
1992-01-01
Five years of rain rate and modeled slant path attenuation distributions at 20 GHz and 30 GHz, derived from a network of 10 tipping bucket rain gages, were examined. The rain gage network is located within a grid 70 km north-south and 47 km east-west on the Mid-Atlantic coast of the United States in the vicinity of Wallops Island, Virginia. Distributions were derived from the variable integration time data and from one-minute averages. It was demonstrated that for realistic fade margins, the variable integration time results are adequate for estimating slant path attenuations at frequencies above 20 GHz using models that require one-minute averages. An accurate empirical formula was developed to convert the variable integration time rain rates to one-minute averages. Fade distributions at 20 GHz and 30 GHz were derived employing Crane's Global model because it was demonstrated to exhibit excellent accuracy against measured COMSTAR fades at 28.56 GHz.
A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions
NASA Astrophysics Data System (ADS)
Kim, T. K.; Arge, C. N.; Pogorelov, N. V.
2017-12-01
Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.
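The momentum-flux and thermal-pressure balance mentioned above reduces to simple scalings: holding n*v^2 and n*T fixed across wind speeds gives the density and temperature at the inner boundary. A sketch with illustrative fast-wind reference values, not the paper's numbers:

```python
# Inner-boundary density and temperature from momentum-flux and pressure balance.
v_fast, n_fast, T_fast = 700e3, 3e6, 1.0e6   # m/s, m^-3, K (illustrative)

def boundary_state(v):
    """n and T at the boundary for wind speed v (n*v^2 and n*T held fixed)."""
    n = n_fast * (v_fast / v) ** 2           # constant momentum flux
    T = n_fast * T_fast / n                  # constant thermal pressure
    return n, T

print(boundary_state(400e3))   # slow wind: denser, cooler boundary plasma
```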
Sburlati, Elizabeth S; Lyneham, Heidi J; Mufson, Laura H; Schniering, Carolyn A
2012-06-01
In order to treat adolescent depression, a number of empirically supported treatments (ESTs) have been developed from both the cognitive behavioral therapy (CBT) and interpersonal psychotherapy (IPT-A) frameworks. Research has shown that in order for these treatments to be implemented in routine clinical practice (RCP), effective therapist training must be generated and provided. However, before such training can be developed, a good understanding of the therapist competencies needed to implement these ESTs is required. Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011) developed a model of therapist competencies for implementing CBT using the well-established Delphi technique. Given that IPT-A differs considerably to CBT, the current study aims to develop a model of therapist competencies for the implementation of IPT-A using a similar procedure as that applied in Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011). This method involved: (1) identifying and reviewing an empirically supported IPT-A approach, (2) extracting therapist competencies required for the implementation of IPT-A, (3) consulting with a panel of IPT-A experts to generate an overall model of therapist competencies, and (4) validating the overall model with the IPT-A manual author. The resultant model offers an empirically derived set of competencies necessary for effectively treating adolescent depression using IPT-A and has wide implications for the development of therapist training, competence assessment measures, and evidence-based practice guidelines. This model, therefore, provides an empirical framework for the development of dissemination and implementation programs aimed at ensuring that adolescents with depression receive effective care in RCP settings. Key similarities and differences between CBT and IPT-A, and the therapist competencies required for implementing these treatments, are also highlighted throughout this article.
Koštrun, Sanja; Munic Kos, Vesna; Matanović Škugor, Maja; Palej Jakopović, Ivana; Malnar, Ivica; Dragojević, Snježana; Ralić, Jovica; Alihodžić, Sulejman
2017-06-16
The aim of this study was to investigate lipophilicity and cellular accumulation of rationally designed azithromycin and clarithromycin derivatives at the molecular level. The effect of substitution site and substituent properties on the global physico-chemical profile and cellular accumulation of the investigated compounds was studied using calculated structural parameters as well as experimentally determined lipophilicity. In silico models based on the 3D structure of molecules were generated to investigate conformational effects on the studied properties and to enable prediction of lipophilicity and cellular accumulation for this class of molecules based on non-empirical parameters. The applicability of the developed models was explored on validation and test sets and compared with previously developed empirical models.
NASA Astrophysics Data System (ADS)
Joyce, M.; Chaboyer, B.
2018-03-01
Theoretical stellar evolution models are constructed and tailored to the best known, observationally derived characteristics of metal-poor ([Fe/H] ∼ -2.3) stars representing a range of evolutionary phases: subgiant HD 140283, globular cluster M92, and four single, main sequence stars with well-determined parallaxes: HIP 46120, HIP 54639, HIP 106924, and WOLF 1137. It is found that the use of a solar-calibrated value of the mixing length parameter α_MLT in models of these objects is ineffective at reproducing their observed properties. Empirically calibrated values of α_MLT are presented for each object, accounting for uncertainties in the input physics employed in the models. It is advocated that the implementation of an adaptive mixing length is necessary in order for stellar evolution models to maintain fidelity in the era of high-precision observations.
The Evolution of Social and Semantic Networks in Epistemic Communities
ERIC Educational Resources Information Center
Margolin, Drew Berkley
2012-01-01
This study describes and tests a model of scientific inquiry as an evolving, organizational phenomenon. Arguments are derived from organizational ecology and evolutionary theory. The empirical subject of study is an "epistemic community" of scientists publishing on a research topic in physics: the string theoretic concept of…
Strength of single-pole utility structures
Ronald W. Wolfe
2006-01-01
This section presents three basic methods for deriving and documenting R_n as an LTL value along with the coefficient of variation (COV_R) for single-pole structures. These include the following: 1. An empirical analysis based primarily on tests of full-sized poles. 2. A theoretical analysis of mechanics-based models used in...
A Cluster Analytic Study of Osteoprotective Behavior in Undergraduates
ERIC Educational Resources Information Center
Sharp, Katherine; Thombs, Dennis L.
2003-01-01
Objective: To derive an empirical taxonomy of osteoprotective stages using the Precaution Adoption Process Model (PAPM) and to identify the predisposing factors associated with each stage. Methods: An anonymous survey was completed by 504 undergraduates at a Midwestern public university. Results: Cluster analytic findings indicate that only 2…
Interactions of Task and Subject Variables among Continuous Performance Tests
ERIC Educational Resources Information Center
Denney, Colin B.; Rapport, Mark D.; Chung, Kyong-Mee
2005-01-01
Background: Contemporary models of working memory suggest that target paradigm (TP) and target density (TD) should interact as influences on error rates derived from continuous performance tests (CPTs). The present study evaluated this hypothesis empirically in a typically developing, ethnically diverse sample of children. The extent to which…
Cawley, John; Dragone, Davide; Von Hinke Kessler Scholder, Stephanie
2016-01-01
This paper offers an economic model of smoking and body weight and provides new empirical evidence on the extent to which the demand for cigarettes is derived from the demand for weight loss. In the model, smoking causes weight loss in addition to having direct utility benefits and direct health consequences. It predicts that some individuals smoke for weight loss and that the practice is more common among those who consider themselves overweight and those who experience greater disutility from excess weight. We test these hypotheses using nationally representative data in which adolescents are directly asked whether they smoke to control their weight. We find that, among teenagers who smoke frequently, 46% of girls and 30% of boys are smoking in part to control their weight. As predicted by the model, this practice is significantly more common among those who describe themselves as too fat and among groups that tend to experience greater disutility from obesity. We conclude by discussing the implications of these findings for tax policy; specifically, the demand for cigarettes is less price elastic among those who smoke for weight loss, all else being equal. Public health efforts to reduce smoking initiation and encourage cessation may wish to design campaigns to alter the derived nature of cigarette demand, especially among adolescent girls.
ERIC Educational Resources Information Center
TRAVERS, ROBERT M.W.
The reviewer faults the author for "simple and uncritical presentations of ideas" that fail to result in the promised "workable document for practicing teacher educators to use." The material "shows so complete a lack of concern for such matters as whether a model has or has not been derived from empirical research, whether the model has or has…
Estimating Density Using Precision Satellite Orbits from Multiple Satellites
NASA Astrophysics Data System (ADS)
McLaughlin, Craig A.; Lechtenberg, Travis; Fattig, Eric; Krishna, Dhaval Mysore
2012-06-01
This article examines atmospheric densities estimated using precision orbit ephemerides (POE) from several satellites including CHAMP, GRACE, and TerraSAR-X. The results of the calibration of atmospheric densities along the CHAMP and GRACE-A orbits derived using POEs with those derived using accelerometers are compared for various levels of solar and geomagnetic activity to examine the consistency in calibration between the two satellites. Densities from CHAMP and GRACE are compared when GRACE is orbiting nearly directly above CHAMP. In addition, the densities derived simultaneously from CHAMP, GRACE-A, and TerraSAR-X are compared to the Jacchia 1971 and NRLMSISE-00 model densities to observe altitude effects and consistency in the offsets from the empirical models among all three satellites.
NASA Astrophysics Data System (ADS)
Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May
2014-11-01
Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. To date, however, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP, so that minimal data are required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km²) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km²). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: the minimum daily baseflow of the entire period, the area of the catchment, and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in ungauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
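A sketch of the GP step, here using the gplearn package as a stand-in (an assumption; the study's tooling is not named) to evolve a symbolic expression relating baseflow to a groundwater-table predictor on synthetic data:

```python
# Symbolic regression of baseflow on a groundwater-table predictor (gplearn).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(2)
gwt = rng.uniform(0.5, 3.0, (300, 1))        # groundwater table fluctuation
baseflow = 0.8 * gwt[:, 0] ** 1.5 + 0.1      # synthetic target relation

gp = SymbolicRegressor(population_size=500, generations=10,
                       function_set=("add", "sub", "mul", "div"),
                       random_state=0)
gp.fit(gwt, baseflow)
print(gp._program)                           # best evolved expression
```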
Integrating animal movement with habitat suitability for estimating dynamic landscape connectivity
van Toor, Mariëlle L.; Kranstauber, Bart; Newman, Scott H.; Prosser, Diann J.; Takekawa, John Y.; Technitis, Georgios; Weibel, Robert; Wikelski, Martin; Safi, Kamran
2018-01-01
Context: High-resolution animal movement data are becoming increasingly available, yet having a multitude of empirical trajectories alone does not allow us to easily predict animal movement. To answer ecological and evolutionary questions at a population level, quantitative estimates of a species’ potential to link patches or populations are of importance. Objectives: We introduce an approach that combines movement-informed simulated trajectories with an environment-informed estimate of the trajectories’ plausibility to derive connectivity. Using the example of bar-headed geese we estimated migratory connectivity at a landscape level throughout the annual cycle in their native range. Methods: We used tracking data of bar-headed geese to develop a multi-state movement model and to estimate temporally explicit habitat suitability within the species’ range. We simulated migratory movements between range fragments, and calculated a measure we called route viability. The results are compared to expectations derived from published literature. Results: Simulated migrations matched empirical trajectories in key characteristics such as stopover duration. The viability of the simulated trajectories was similar to that of the empirical trajectories. We found that, overall, the migratory connectivity was higher within the breeding than in wintering areas, corroborating previous findings for this species. Conclusions: We show how empirical tracking data and environmental information can be fused for meaningful predictions of animal movements throughout the year and even outside the spatial range of the available data. Beyond predicting migratory connectivity, our framework will prove useful for modelling ecological processes facilitated by animal movement, such as seed dispersal or disease ecology.
Effects of Inventory Bias on Landslide Susceptibility Calculations
NASA Technical Reports Server (NTRS)
Stanley, T. A.; Kirschbaum, D. B.
2017-01-01
Many landslide inventories are known to be biased, especially inventories for large regions such as Oregon's SLIDO or NASA's Global Landslide Catalog. These biases must affect the results of empirically derived susceptibility models to some degree. We evaluated the strength of the susceptibility model distortion from postulated biases by truncating an unbiased inventory. We generated a synthetic inventory from an existing landslide susceptibility map of Oregon, then removed landslides from this inventory to simulate the effects of reporting biases likely to affect inventories in this region, namely population and infrastructure effects. Logistic regression models were fitted to the modified inventories. Then the process of biasing a susceptibility model was repeated with SLIDO data. We evaluated each susceptibility model with qualitative and quantitative methods. Results suggest that the effects of landslide inventory bias on empirical models should not be ignored, even if those models are, in some cases, useful. We suggest fitting models in well-documented areas and extrapolating across the study region as a possible approach to modeling landslide susceptibility with heavily biased inventories.
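The truncation experiment is easy to emulate: fit a logistic-regression susceptibility model to a synthetic inventory, then refit after preferentially dropping landslides far from infrastructure to mimic reporting bias. Everything below is synthetic and illustrative:

```python
# Fit susceptibility models to a full and a reporting-biased synthetic inventory.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
slope = rng.uniform(0, 45, 2000)                  # terrain slope, degrees
dist = rng.uniform(0, 20, 2000)                   # km to roads (reporting proxy)
p_true = 1 / (1 + np.exp(-(0.15 * slope - 4)))    # truth depends on slope only
slide = rng.random(2000) < p_true

X = np.c_[slope, dist]
full = LogisticRegression().fit(X, slide)

# Bias: remote landslides are increasingly likely to go unreported.
keep = ~slide | (rng.random(2000) < np.exp(-0.2 * dist))
biased = LogisticRegression().fit(X[keep], slide[keep])
print("full coefficients:  ", full.coef_)         # distance term near zero
print("biased coefficients:", biased.coef_)       # spurious distance effect
```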
NASA Astrophysics Data System (ADS)
Quetin, G. R.; Swann, A. L. S.
2017-12-01
Successfully predicting the state of vegetation in a novel environment depends on our process-level understanding of the ecosystem and its interactions with the environment. We derive a global empirical map of the sensitivity of vegetation to climate using the response of satellite-observed greenness and leaf area to interannual variations in temperature and precipitation. Our analysis provides observations of ecosystem functioning (the vegetation's interactions with the physical environment) across a wide range of climates, and provides a functional constraint for hypotheses engendered in process-based models. We infer mechanisms constraining ecosystem functioning by contrasting how the observed and simulated sensitivity of vegetation to climate varies across climate space. Our analysis yields empirical evidence for multiple physical and biological mediators of the sensitivity of vegetation to climate, expressed as systematic change across climate space. Our comparison of remote sensing-based vegetation sensitivity with modeled estimates provides evidence for which physiological mechanisms (photosynthetic efficiency, respiration, water supply, atmospheric water demand, and sunlight availability) dominate ecosystem functioning in places with different climates. Earth system models are generally successful in reproducing the broad sign and shape of ecosystem functioning across climate space. However, this general agreement breaks down in hot, wet climates, where models simulate less leaf area during a warmer year while observations show a mixed response but overall more leaf area during warmer years. In addition, the simulated ecosystem interaction with temperature is generally larger and changes more rapidly across a temperature gradient than is observed. We hypothesize that the amplified interaction and change are both due to a lack of adaptation and acclimation in the simulations. This discrepancy with observations suggests that the simulated responses of vegetation to global warming, and the feedbacks between vegetation and climate, are too strong in the models.
Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed
NASA Astrophysics Data System (ADS)
Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy
2015-09-01
Deriving a unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. In most cases, the hourly streamflow measurements needed to derive a unit hydrograph are not available. Hence, one needs methods for deriving unit hydrographs for ungauged watersheds. The methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics; these are usually referred to as Synthetic Unit Hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum Watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
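A sketch of the approach, assuming a gamma-density unit hydrograph whose shape, timing, and scale parameters are tuned against observed ordinates; scipy's differential evolution stands in here for the Particle Swarm Optimization used in the paper, and the "observed" hydrograph is synthetic:

```python
# Gamma-density synthetic unit hydrograph fitted to (synthetic) observations.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.special import gamma as gamma_fn

t = np.arange(1, 25, dtype=float)                 # hours

def gamma_uh(t, n, k, scale):
    return scale * (t / k) ** (n - 1) * np.exp(-t / k) / (k * gamma_fn(n))

observed = gamma_uh(t, 3.2, 2.1, 50.0) \
    + np.random.default_rng(4).normal(0.0, 0.2, t.size)

def cost(params):
    return np.sum((gamma_uh(t, *params) - observed) ** 2)

result = differential_evolution(cost, bounds=[(1.1, 8), (0.5, 6), (1, 200)])
print("fitted n, k, scale:", np.round(result.x, 2))
```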
Modeling duckweed growth in wastewater treatment systems
Landesman, L.; Parker, N.C.; Fedler, C.B.; Konikoff, M.
2005-01-01
Species of the family Lemnaceae, or duckweeds, are floating aquatic plants that show great promise for both wastewater treatment and livestock feed production. Research conducted in the Southern High Plains of Texas has shown that Lemna obscura grew well in cattle feedlot runoff water and produced leaf tissue with a high protein content. A model, or mathematical expression, derived from duckweed growth data was used to fit data from experiments conducted in a greenhouse in Lubbock, Texas. The relationship between duckweed growth and the total nitrogen concentration in the medium follows the Mitscherlich function and is similar to that of other plants. Empirically derived model equations have successfully predicted the growth response of Lemna obscura.
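A minimal sketch of fitting a Mitscherlich-type saturation curve to growth-versus-nitrogen data; the data points and the simplified two-parameter form are invented for illustration:

```python
# Two-parameter Mitscherlich-type fit of growth versus nitrogen (invented data).
import numpy as np
from scipy.optimize import curve_fit

def mitscherlich(N, y_max, c):
    return y_max * (1.0 - np.exp(-c * N))

N = np.array([2, 5, 10, 20, 40, 80.0])             # mg/L total nitrogen
growth = np.array([1.1, 2.3, 3.6, 4.8, 5.5, 5.8])  # synthetic growth response

popt, _ = curve_fit(mitscherlich, N, growth, p0=[6.0, 0.05])
print("asymptote and rate constant:", np.round(popt, 3))
```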
Internal Interdecadal Variability in CMIP5 Control Simulations
NASA Astrophysics Data System (ADS)
Cheung, A. H.; Mann, M. E.; Frankcombe, L. M.; England, M. H.; Steinman, B. A.; Miller, S. K.
2015-12-01
Here we make use of control simulations from the CMIP5 models to quantify the amplitude of the interdecadal internal variability component in Atlantic, Pacific, and Northern Hemisphere mean surface temperature. We compare against estimates derived from observations using a semi-empirical approach wherein the forced component as estimated using CMIP5 historical simulations is removed to yield an estimate of the residual, internal variability. While the observational estimates are largely consistent with those derived from the control simulations for both basins and the Northern Hemisphere, they lie in the upper range of the model distributions, suggesting the possibility of differences between the amplitudes of observed and modeled variability. We comment on some possible reasons for the disparity.
Babela, Robert; Jarcuska, Pavol; Uraz, Vladimir; Krčméry, Vladimír; Jadud, Branislav; Stevlik, Jan; Gould, Ian M
2017-11-01
No previous analyses have attempted to determine optimal therapy for upper respiratory tract infections on the basis of cost-minimization models and the prevalence of antimicrobial resistance among respiratory pathogens in Slovakia. This investigation compares macrolides and cephalosporins for empirical therapy and looks at this new tool from the perspective of a potential antibiotic policy decision-making process. We employed a decision tree model to determine the threshold level of macrolide and cephalosporin resistance among community respiratory pathogens that would make cephalosporins or macrolides cost-minimizing. To obtain information on clinical outcomes and the cost of URTIs, a systematic review of the literature was performed. The cost-minimization model of upper respiratory tract infection (URTI) treatment was derived from the review of the literature and published models. We found that the mean cost of empirical treatment with macrolides for a URTI was €93.27 when the percentage of resistant Streptococcus pneumoniae in the community was 0%; at 5%, the mean cost was €96.45; at 10%, €99.63; at 20%, €105.99; and at 30%, €112.36. Our model demonstrated that when the percentage of macrolide-resistant Streptococcus pneumoniae exceeds 13.8%, use of empirical cephalosporins rather than macrolides minimizes the treatment cost of URTIs. Empirical macrolide therapy is less expensive than cephalosporin therapy for URTIs unless macrolide resistance exceeds 13.8% in the community. The results have important antibiotic policy implications, since the presented model can be used as an additional decision-making tool for new guidelines and reimbursement processes by local authorities in an era of continually increasing antibiotic resistance.
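The threshold logic can be reproduced with back-of-envelope arithmetic: the quoted costs imply a roughly linear rise of about EUR 0.64 per percentage point of macrolide resistance, and the break-even point is where this line crosses the cephalosporin cost (treated below as a hypothetical fixed value chosen to reproduce the 13.8% threshold):

```python
# Break-even resistance prevalence for macrolides vs. cephalosporins.
macrolide_base = 93.27           # EUR at 0% resistance (from the abstract)
slope = (112.36 - 93.27) / 30    # ~0.64 EUR per % resistance (quoted points)

ceph_cost = 102.05               # hypothetical fixed cephalosporin cost, EUR

threshold = (ceph_cost - macrolide_base) / slope
print(f"macrolides stop being cost-minimizing above {threshold:.1f}% resistance")
```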
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, X.D.; Krylov, S.N.; Ren, L.
1997-11-01
Photoinduced toxicity of polycyclic aromatic hydrocarbons (PAHs) occurs via photosensitization reactions (e.g., generation of singlet-state oxygen) and by photomodification (photooxidation and/or photolysis) of the chemicals to more toxic species. The quantitative structure-activity relationship (QSAR) described in the companion paper predicted, in theory, that photosensitization and photomodification additively contribute to toxicity. To substantiate this QSAR modeling exercise it was necessary to show that toxicity can be described by empirically derived parameters. The toxicity of 16 PAHs to the duckweed Lemna gibba was measured as inhibition of leaf production in simulated solar radiation (a light source with a spectrum similar to that of sunlight). A predictive model for toxicity was generated based on the theoretical model developed in the companion paper. The photophysical descriptors required of each PAH for modeling were efficiency of photon absorbance, relative uptake, quantum yield for triplet-state formation, and the rate of photomodification. The photomodification rates of the PAHs showed a moderate correlation to toxicity, whereas a derived photosensitization factor (PSF; based on absorbance, triplet-state quantum yield, and uptake) for each PAH showed only a weak, complex correlation to toxicity. However, summing the rate of photomodification and the PSF resulted in a strong correlation to toxicity that had predictive value. When the PSF and a derived photomodification factor (PMF; based on the photomodification rate and toxicity of the photomodified PAHs) were summed, an excellent explanatory model of toxicity was produced, substantiating the additive contributions of the two factors.
The First Empirical Determination of the Fe10+ and Fe13+ Freeze-in Distances in the Solar Corona
NASA Astrophysics Data System (ADS)
Boe, Benjamin; Habbal, Shadia; Druckmüller, Miloslav; Landi, Enrico; Kourkchi, Ehsan; Ding, Adalbert; Starha, Pavel; Hutton, Joseph
2018-06-01
Heavy ions are markers of the physical processes responsible for the density and temperature distribution throughout the fine-scale magnetic structures that define the shape of the solar corona. One of their properties, whose empirical determination has remained elusive, is the "freeze-in" distance (R_f) where they reach fixed ionization states that are adhered to during their expansion with the solar wind. We present the first empirical inference of R_f for Fe10+ and Fe13+ derived from multi-wavelength imaging observations of the corresponding Fe XI (Fe10+) 789.2 nm and Fe XIV (Fe13+) 530.3 nm emission acquired during the 2015 March 20 total solar eclipse. We find that the two ions freeze in at different heliocentric distances. In polar coronal holes (CHs) R_f is around 1.45 R_⊙ for Fe10+ and below 1.25 R_⊙ for Fe13+. Along open field lines in streamer regions, R_f ranges from 1.4 to 2 R_⊙ for Fe10+ and from 1.5 to 2.2 R_⊙ for Fe13+. These first empirical R_f values: (1) reflect the differing plasma parameters between CHs and streamers and structures within them, including prominences and coronal mass ejections; (2) are well below the currently quoted values derived from empirical model studies; and (3) place doubt on the reliability of plasma diagnostics based on the assumption of ionization equilibrium beyond 1.2 R_⊙.
A single-station empirical model for TEC over the Antarctic Peninsula using GPS-TEC data
NASA Astrophysics Data System (ADS)
Feng, Jiandi; Wang, Zhengtao; Jiang, Weiping; Zhao, Zhenzhen; Zhang, Bingbing
2017-02-01
Compared with regional or global total electron content (TEC) empirical models, single-station TEC empirical models may exhibit higher accuracy in describing TEC spatial and temporal variations for a single station. In this paper, a new single-station empirical TEC model, called SSM-month, for the O'Higgins Station in the Antarctic Peninsula is proposed by using Global Positioning System (GPS)-TEC data from 01 January 2004 to 30 June 2015. The diurnal variation of TEC at the O'Higgins Station may have changing features in different months, sometimes even in opposite forms, because of ionospheric phenomena such as the Mid-latitude Summer Nighttime Anomaly (MSNA). To avoid the influence of different diurnal variations, the concept of monthly modeling is proposed in this study. The SSM-month model, which is established by month (including 12 submodels that correspond to the 12 months), can effectively describe the diurnal variation of TEC in different months. Each submodel of the SSM-month model exhibits good agreement with the GPS-TEC input data. Overall, the SSM-month model fits the input data with a bias of 0.03 TECU (total electron content unit, 1 TECU = 10^16 el m^-2) and a standard deviation of 2.78 TECU. This model, which benefits from the modeling method, can effectively describe the MSNA phenomenon without implementing any modeling correction. TEC data derived from Center for Orbit Determination in Europe global ionosphere maps (CODE GIMs), International Reference Ionosphere 2012 (IRI2012), and NeQuick are compared with the SSM-month model for the years 2001 and 2015-2016. Results show that the SSM-month model exhibits good consistency with CODE GIMs at the O'Higgins Station on the test days, better than that of IRI2012 and NeQuick.
NASA Astrophysics Data System (ADS)
Izotov, Y. I.; Stasińska, G.; Guseva, N. G.
2013-10-01
We verified the validity of the empirical method to derive the 4He abundance used in our previous papers by applying it to CLOUDY (v13.01) models. Using newly published He I emissivities, for which we present convenient fits, as well as the output CLOUDY case B hydrogen and He I line intensities, we found that the empirical method is able to reproduce the input CLOUDY 4He abundance with an accuracy of better than 1%. The CLOUDY output data also allowed us to derive the non-recombination contribution to the intensities of the strongest Balmer hydrogen Hα, Hβ, Hγ, and Hδ emission lines and the ionisation correction factors for He. With these improvements we used our updated empirical method to derive the 4He abundances and to test corrections for several systematic effects in a sample of 1610 spectra of low-metallicity extragalactic H II regions, the largest sample used so far. From this sample we extracted a subsample of 111 H II regions with Hβ equivalent width EW(Hβ) ≥ 150 Å, with excitation parameter x = O^2+/O ≥ 0.8, and with helium mass fraction Y derived with an accuracy better than 3%. With this subsample we derived the primordial 4He mass fraction Yp = 0.254 ± 0.003 from the linear regression of Y on O/H. The derived value of Yp is higher at the 68% confidence level (CL) than that predicted by the standard big bang nucleosynthesis (SBBN) model, possibly implying the existence of additional types of neutrino species beyond the three known types of active neutrinos. Using the most recently derived primordial abundances D/H = (2.60 ± 0.12) × 10^-5 and Yp = 0.254 ± 0.003 and the χ2 technique, we found that the best agreement between the abundances of these light elements is achieved in a cosmological model with baryon mass density Ω_b h^2 = 0.0234 ± 0.0019 (68% CL) and an effective number of neutrino species N_eff = 3.51 ± 0.35 (68% CL).
NASA Technical Reports Server (NTRS)
Truhlik, V.; Triskova, L.
2012-01-01
A database of electron temperature (T(sub e)) measurements comprising most of the available LEO satellite data in the altitude range from 350 to 2000 km has been used to develop a new global empirical model of T(sub e) for the International Reference Ionosphere (IRI). For the first time, this model includes variations with solar activity. Variations at five fixed altitude ranges centered at 350, 550, 850, 1400, and 2000 km and three seasons (summer, winter, and equinox) were represented by a system of associated Legendre polynomials (up to the 8th order) in terms of magnetic local time and the previously introduced invdip latitude. The solar activity variations of T(sub e) are represented by a correction term to the T(sub e) global pattern, derived from the empirical latitudinal profiles of T(sub e) for day and night (Truhlik et al., 2009a). Comparisons of the new T(sub e) model with data and with the IRI 2007 T(sub e) model show that the new model generally agrees with the data within standard deviation limits and performs better than the current IRI T(sub e) model.
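An expansion in associated Legendre polynomials of latitude with local-time harmonics can be evaluated as below. The coefficient layout, truncation, and function names are assumptions for illustration, not the model's published coefficients.

```python
import numpy as np
from scipy.special import lpmv

def evaluate_te(coeffs, mlt_hours, invdip_deg, nmax=8):
    """Evaluate a Legendre/local-time-harmonic Te expansion at one point.

    coeffs: dict (n, m, 'c'|'s') -> coefficient, a hypothetical layout.
    mlt_hours: magnetic local time in hours; invdip_deg: invdip latitude.
    """
    phi = 2.0 * np.pi * mlt_hours / 24.0        # local-time angle
    x = np.sin(np.radians(invdip_deg))          # latitude argument
    te = 0.0
    for n in range(nmax + 1):
        for m in range(n + 1):
            p = lpmv(m, n, x)                   # associated Legendre P_n^m
            te += p * (coeffs.get((n, m, 'c'), 0.0) * np.cos(m * phi)
                       + coeffs.get((n, m, 's'), 0.0) * np.sin(m * phi))
    return te

# Example with a single (constant) term:
print(evaluate_te({(0, 0, 'c'): 2500.0}, mlt_hours=12.0, invdip_deg=45.0))
```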
Simulating the Risk of Liver Fluke Infection using a Mechanistic Hydro-epidemiological Model
NASA Astrophysics Data System (ADS)
Beltrame, Ludovica; Dunne, Toby; Rose, Hannah; Walker, Josephine; Morgan, Eric; Vickerman, Peter; Wagener, Thorsten
2016-04-01
Liver Fluke (Fasciola hepatica) is a common parasite of livestock and is responsible for considerable economic losses throughout the world. Risk of infection is strongly influenced by climatic and hydrological conditions, which characterise the host environment for parasite development and transmission. Despite ongoing control efforts, increases in fluke outbreaks have been reported in the UK in recent years and have often been attributed to climate change. Currently used fluke risk models are based on empirical relationships derived between historical climate and incidence data. However, hydro-climate conditions are becoming increasingly non-stationary due to climate change and direct anthropogenic impacts such as land use change, making empirical models unsuitable for simulating future risk. In this study we introduce a mechanistic hydro-epidemiological model for Liver Fluke, which explicitly simulates habitat suitability for disease development in space and time, representing the parasite life cycle in connection with key environmental conditions. The model is used to assess patterns of Liver Fluke risk for two catchments in the UK under current and potential future climate conditions. Comparisons are made with a widely used empirical model employing different datasets, including data from regional veterinary laboratories. Results suggest that mechanistic models can achieve adequate predictive ability and support adaptive fluke control strategies under climate change scenarios.
Availability model of stand-alone photovoltaic system
NASA Astrophysics Data System (ADS)
Mazurek, G.
2017-08-01
In this paper we present a simple, empirical model of the availability of a stand-alone photovoltaic power system. The model is the final result of a five-year study of solar irradiation based on ground measurements carried out in Central Europe. The results facilitate sizing of the PV modules to be installed, taking the system's required availability level in each month of the year into account. The model can be extended to other geographical locations with the help of local meteorological data or solar irradiation datasets derived from satellite measurements.
Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs.
Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua
2018-01-13
The establishment of the Aircraft Dynamic Model (ADM) is a prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scale fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. First, all the longitudinal and lateral aerodynamic derivatives are calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or computational fluid dynamics analysis. Second, the residuals of each derivative are identified and estimated via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. For a small-scale propeller-driven fixed-wing aircraft, the airborne sensors are chosen and models of the actuators are constructed. Real flight tests are then carried out to verify the calculation and identification process. The test results confirm the soundness of the semi-empirical method and show improved ADM accuracy after compensation of the parameters.
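The residual-identification step lends itself to a compact illustration. Below is a minimal sketch, not the paper's aircraft model: a single aerodynamic-derivative residual delta is appended to a scalar state and estimated with an EKF from noisy velocity observations. The dynamics, noise levels, and excitation input are all assumed for illustration.

```python
import numpy as np

# Estimate a residual 'delta' on a nominal derivative a0 in the scalar
# model v_dot = (a0 + delta) * v + b * u, with the parameter appended
# to the state as a random walk. All values are illustrative.
dt, a0, b, true_delta = 0.01, -0.8, 2.0, -0.25
rng = np.random.default_rng(1)

x = np.array([0.0, 0.0])            # state: [v, delta]
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-6])           # process noise
R = np.array([[1e-2]])              # measurement noise on v
H = np.array([[1.0, 0.0]])

v_true = 0.0
for k in range(2000):
    u = np.sin(0.02 * k)            # persistent excitation (maneuver)
    v_true += dt * ((a0 + true_delta) * v_true + b * u)
    z = v_true + rng.normal(0.0, 0.1)
    # Predict.
    v, d = x
    x = np.array([v + dt * ((a0 + d) * v + b * u), d])
    F = np.array([[1.0 + dt * (a0 + d), dt * v], [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([[z]]) - H @ x.reshape(2, 1))).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated delta = {x[1]:.3f} (true {true_delta})")
```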
Lorenz, Bettina Anne-Sophie; Hartmann, Monika; Langen, Nina
2017-09-01
In order to provide a basis for the reduction of food losses, our study analyzes individual food choice, eating and leftover behavior in a university canteen, considering personal, social and environmental determinants. Based on an extended literature review, a structural equation model is derived and empirically tested on a sample of 343 students. The empirical estimates support the derived model, with a good overall model fit and sufficient R^2 values for the dependent variables. Hence, our results provide evidence for a general significant impact of behavioral intention and related personal and social determinants, as well as for the relevance of environmental/situational determinants such as portion sizes and palatability of food for plate leftovers. Moreover, we find that environmental and personal determinants are interrelated and that the impact of different determinants varies with perceived time constraints during a visit to the university canteen. Accordingly, we conclude that simple measures to decrease avoidable food waste may take effect via complex and interrelated behavioral structures, and that future research should focus on these effects to understand and change food leftover behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.
Stroke mortality variations in South-East Asia: empirical evidence from the field.
Hoy, Damian G; Rao, Chalapati; Hoa, Nguyen Phuong; Suhardi, S; Lwin, Aye Moe Moe
2013-10-01
Stroke is a leading cause of death in Asia; however, many estimates of stroke mortality are based on epidemiological models rather than empirical data. Since 2005, initiatives have been undertaken in a number of Asian countries to strengthen and analyse vital registration data. This has increased the availability of empirical data on stroke mortality. The aim of this paper is to present estimates of stroke mortality for Indonesia, Myanmar, Viet Nam, Thailand, and Malaysia, which have been derived using these empirical data. Age-specific stroke mortality rates were calculated for each of the five countries and adjusted for data completeness or misclassification where feasible. All data were age-standardized, and the resulting rates were compared with World Health Organization estimates, which are largely based on epidemiological models. Using empirical data, stroke ranked as the leading cause of death in all countries except Malaysia, where it ranked second. Age-standardized rates for males ranged from 94 per 100,000 in Thailand to over 300 per 100,000 in Indonesia. In all countries, rates were higher for males than for females, and rates compiled from empirical data were generally higher than the modelled estimates published by the World Health Organization. This study highlights the extent of stroke mortality in selected Asian countries and provides important baseline information for investigating the aetiology of stroke in Asia and designing appropriate public health strategies to address the rapidly growing burden of stroke. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
Stewart, Louis J; Trussel, John
2006-01-01
Although the use of derivatives, particularly interest rate swaps, has grown explosively over the past decade, derivative financial instrument use by nonprofits has received only limited attention in the research literature. Because little is known about the risk management activities of nonprofits, the impact of these instruments on the ability of nonprofits to raise capital may have significant public policy implications. The primary motivation of this study is to determine the types of derivatives used by nonprofits and estimate the frequency of their use among these organizations. Our study also extends contemporary finance theory by an empirical examination of the motivation for interest rate swap usage among nonprofits. Our empirical data came from 193 large nonprofit health care providers that issued debt to the public between 2000 and 2003. We used a univariate analysis and a multivariate analysis relying on logistic regression models to test alternative explanations of interest rate swaps usage by nonprofits, finding that more than 45 percent of our sample, 88 organizations, used interest rate swaps with an aggregate notional value in excess of $8.3 billion. Our empirical tests indicate the primary motive for nonprofits to use interest rate derivatives is to hedge their exposure to interest rate risk. Although these derivatives are a useful risk management tool, under conditions of falling bond market interest rates these derivatives may also expose a nonprofit swap user to the risk of a material unscheduled termination payment. Finally, we found considerable diversity in the informativeness of footnote disclosure among sample organizations that used interest rate swaps. Many nonprofits did not disclose these risks in their financial statements. In conclusion, we find financial managers in large nonprofits commonly use derivative financial instruments as risk management tools, but the use of interest rate swaps by nonprofits may expose them to other risks that are not adequately disclosed in their financial statements.
Linear dynamical modes as new variables for data-driven ENSO forecast
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen
2018-05-01
A new data-driven model for the analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data, which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by revealing the system's dominant time scales. The LDMs are used as new variables for the empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt, where the El Niño-Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs over the traditionally used empirical orthogonal function decomposition is demonstrated for these data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with other existing ENSO models.
Two Empirical Models for Land-falling Hurricane Gust Factors
NASA Technical Reports Server (NTRS)
Merceret, Francis J.
2008-01-01
Gaussian and lognormal models for gust factors as a function of height and mean wind speed in land-falling hurricanes are presented. The models were empirically derived using data from the 2004 hurricanes Frances and Jeanne and independently verified using data from the 2005 hurricane Wilma. The data were collected from three wind towers at Kennedy Space Center and Cape Canaveral Air Force Station with instrumentation at multiple levels from 12 to 500 feet above ground level. An additional 200-foot tower was available for the verification. Mean wind speeds from 15 to 60 knots were included in the data. The models provide formulas for the mean and standard deviation of the gust factor given the mean wind speed and height above ground. These statistics may then be used to assess the probability of exceeding a specified peak wind threshold of operational significance given a specified mean wind speed.
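Given model values for the gust factor's mean and standard deviation at some height and mean wind speed, the exceedance probability follows from the lognormal variant by moment matching. A sketch; the gf_mean and gf_std inputs are placeholders, not the published model coefficients:

```python
import numpy as np
from scipy.stats import norm

def peak_exceedance_prob(mean_wind, threshold, gf_mean, gf_std):
    """P(peak wind > threshold) under a lognormal gust-factor model.

    gf_mean, gf_std: mean and standard deviation of the gust factor,
    as a model would supply for a given height and mean wind speed.
    """
    g = threshold / mean_wind               # required gust factor
    # Convert mean/std of GF to the parameters of ln(GF).
    s2 = np.log(1.0 + (gf_std / gf_mean) ** 2)
    m = np.log(gf_mean) - 0.5 * s2
    return 1.0 - norm.cdf((np.log(g) - m) / np.sqrt(s2))

# e.g. 35 kt mean wind, 50 kt operational threshold (illustrative moments):
print(peak_exceedance_prob(35.0, 50.0, gf_mean=1.35, gf_std=0.12))
```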
NASA Astrophysics Data System (ADS)
Narvaez, C.; Mendillo, M.; Trovato, J.
2017-12-01
A semi-empirical model of the maximum electron density (Nmax) of the martian ionosphere [MIRI-mark-1](1) was derived from an initial set of radar observations by the MEX/MARSIS instrument. To extend the model to full electron density profiles, normalized shapes of Ne(h) from a theoretical model(2) were calibrated by MIRI's Nmax. Subsequent topside ionosphere observations from MAVEN indicated that topside shapes from MEX/MARSIS(3) offered improved morphology. The MEX topside shapes were then merged with the bottomside shapes from the theoretical model. Using a larger set of MEX/MARSIS observations (07/31/2005 - 05/24/2015), a new specification of Nmax as a function of solar zenith angle and solar flux is now used to calibrate the normalized Ne(h) profiles. The MIRI-mark-2 model includes the integral of Ne(h) with height to form total electron content (TEC) values. Validation of the MIRI TEC was accomplished using an independent set of TEC derived from the SHARAD(4) experiment on MRO. (1) M. Mendillo, A. Marusiak, P. Withers, D. Morgan and D. Gurnett, A New Semi-empirical Model of the Peak Electron Density of the Martian Ionosphere, Geophysical Research Letters, 40, 1-5, doi:10.1002/2013GL057631, 2013. (2) Mayyasi, M. and M. Mendillo (2015), Why the Viking descent probes found only one ionospheric layer at Mars, Geophys. Res. Lett., 42, 7359-7365, doi:10.1002/2015GL065575. (3) Němec, F., D. Morgan, D. Gurnett, and D. Andrews (2016), Empirical model of the Martian dayside ionosphere: Effects of crustal magnetic fields and solar ionizing flux at higher altitudes, J. Geophys. Res. Space Physics, 121, 1760-1771, doi:10.1002/2015JA022060. (4) Campbell, B., and T. Watters (2016), Phase compensation of MARSIS subsurface sounding and estimation of ionospheric properties: New insights from SHARAD results, J. Geophys. Res. Planets, 121, 180-193, doi:10.1002/2015JE004917.
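The TEC step is a direct height integration of the calibrated profile. The sketch below uses an alpha-Chapman layer as a stand-in for the merged MIRI Ne(h) shape; all parameter values are illustrative:

```python
import numpy as np

def chapman_profile(h_km, nmax, hmax_km=130.0, scale_km=12.0):
    """Illustrative alpha-Chapman layer, a stand-in for the calibrated
    Ne(h) shape (the actual MIRI shapes come from other models)."""
    z = (h_km - hmax_km) / scale_km
    return nmax * np.exp(0.5 * (1.0 - z - np.exp(-z)))

h = np.linspace(80.0, 400.0, 2000)            # altitude grid, km
ne = chapman_profile(h, nmax=1.5e11)          # electron density, m^-3
tec = np.trapz(ne, h * 1e3)                   # integrate over metres
print(f"TEC = {tec / 1e16:.2f} TECU")
```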
Hattori, Masasi
2016-12-01
This paper presents a new theory of syllogistic reasoning. The proposed model assumes there are probabilistic representations of given signature situations. Instead of conducting an exhaustive search, the model constructs an individual-based "logical" mental representation that expresses the most probable state of affairs, and derives a necessary conclusion that is not inconsistent with the model using heuristics based on informativeness. The model is a unification of previous influential models. Its descriptive validity has been evaluated against existing empirical data and two new experiments, and by qualitative analyses based on previous empirical findings, all of which supported the theory. The model's behavior is also consistent with findings in other areas, including working memory capacity. The results indicate that people assume the probabilities of all target events mentioned in a syllogism to be almost equal, which suggests links between syllogistic reasoning and other areas of cognition. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.
2007-09-01
Foreword; Preface; Acknowledgements; 1. Synopsis; Part I. Fundamental Concepts of Finance: 2. Introduction to finance; 3. Derivative securities; Part II. Systems with Finite Number of Degrees of Freedom: 4. Hamiltonians and stock options; 5. Path integrals and stock options; 6. Stochastic interest rates' Hamiltonians and path integrals; Part III. Quantum Field Theory of Interest Rates Models: 7. Quantum field theory of forward interest rates; 8. Empirical forward interest rates and field theory models; 9. Field theory of Treasury Bonds' derivatives and hedging; 10. Field theory Hamiltonian of forward interest rates; 11. Conclusions; Appendix A: mathematical background; Brief glossary of financial terms; Brief glossary of physics terms; List of main symbols; References; Index.
Murrihy, Rachael C; Byrne, Mitchell K; Gonsalvez, Craig J
2009-02-01
Internationally, family doctors seeking to enhance their skills in evidence-based mental health treatment are attending brief training workshops, despite clear evidence in the literature that short-term, massed formats are not likely to improve skills in this complex area. Reviews of the educational literature suggest that an optimal model of training would incorporate distributed practice techniques; repeated practice over a lengthy time period, small-group interactive learning, mentoring relationships, skills-based training and an ongoing discussion of actual patients. This study investigates the potential role of group-based training incorporating multiple aspects of good pedagogy for training doctors in basic competencies in brief cognitive behaviour therapy (BCBT). Six groups of family doctors (n = 32) completed eight 2-hour sessions of BCBT group training over a 6-month period. A baseline control design was utilised with pre- and post-training measures of doctors' BCBT skills, knowledge and engagement in BCBT treatment. Family doctors' knowledge, skills in and actual use of BCBT with patients improved significantly over the course of training compared with the control period. This research demonstrates preliminary support for the efficacy of an empirically derived group training model for family doctors. Brief CBT group-based training could prove to be an effective and viable model for future doctor training.
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the need for suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
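The estimator can be sketched for a baseline-category logit model by minimizing the density power divergence objective directly. This is a generic Basu-type DPD implementation under assumed conventions, not the authors' code; as alpha approaches 0, the objective approaches the (negative) log-likelihood case:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

def mdpde_fit(X, y, n_classes, alpha=0.5):
    """Minimum density power divergence estimation for polytomous
    logistic regression (sketch; last class used as the baseline)."""
    n, p = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])          # add intercept

    def objective(theta):
        B = np.vstack([theta.reshape(n_classes - 1, p + 1),
                       np.zeros((1, p + 1))])
        P = softmax(Xb @ B.T, axis=1)             # class probabilities
        term1 = np.sum(P ** (1.0 + alpha), axis=1)
        term2 = (1.0 + 1.0 / alpha) * P[np.arange(n), y] ** alpha
        return np.mean(term1 - term2)             # DPD empirical objective

    theta0 = np.zeros((n_classes - 1) * (p + 1))
    res = minimize(objective, theta0, method="BFGS")
    return res.x.reshape(n_classes - 1, p + 1)

# Tiny synthetic check:
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 300) > 0).astype(int)
print(mdpde_fit(X, y, n_classes=2, alpha=0.5))
```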
Soil loss is commonly estimated using the Revised Universal Soil Loss Equation (RUSLE). Since RUSLE is an empirically based soil loss model derived from surveys on plots, the high spatial and temporal variability of erosion in Mediterranean environments and scale effects provoke...
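For context, RUSLE is multiplicative in its factors; a one-line sketch with purely illustrative factor values:

```python
# RUSLE estimates average annual soil loss as a product of factors:
#   A = R * K * LS * C * P
# R: rainfall erosivity, K: soil erodibility, LS: slope length/steepness,
# C: cover management, P: support practice. Values below are illustrative.
R, K, LS, C, P = 1200.0, 0.32, 1.8, 0.15, 1.0
A = R * K * LS * C * P      # soil loss in the units implied by R and K
print(f"estimated soil loss A = {A:.1f}")
```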
ERIC Educational Resources Information Center
Miller, Matthew J.; Yang, Minji; Hui, Kayi; Choi, Na-Yeun; Lim, Robert H.
2011-01-01
In the present study, we tested a theoretically and empirically derived partially indirect effects acculturation and enculturation model of Asian American college students' mental health and attitudes toward seeking professional psychological help. Latent variable path analysis with 296 self-identified Asian American college students supported the…
Effects of Career-Related Continuous Learning: A Case Study
ERIC Educational Resources Information Center
Rowold, Jens; Hochholdinger, Sabine; Schilling, Jan
2008-01-01
Purpose: Although proposed from theory, the assumption that career-related continuous learning (CRCL) has a positive impact on subsequent job performance has not been tested empirically. The present study aims to close this gap in the literature. A model is derived from theory that predicts a positive impact of CRCL, learning climate, and initial…
NASA Technical Reports Server (NTRS)
Hedin, A. E.
1979-01-01
A mass spectrometer and incoherent scatter (MSIS) empirical thermosphere model is used to compute the neutral temperature, the neutral densities of N2, O2, O, Ar, He, and H, the mean molecular weight, and the total mass density. The data are presented in tabular form.
Empirically Derived Optimal Growth Equations For Hardwoods and Softwoods in Arkansas
Don C. Bragg
2002-01-01
Accurate growth projections are critical to reliable forest models, and ecologically based simulators can improve silvicultural predictions because of their sensitivity to change and their capacity to produce long-term forecasts. Potential relative increment (PRI) optimal diameter growth equations for loblolly pine, shortleaf pine, sweetgum, and white oak were fit to...
NASA Technical Reports Server (NTRS)
Brewin, Robert J.W.; Sathyendranath, Shubha; Muller, Dagmar; Brockmann, Carsten; Deschamps, Pierre-Yves; Devred, Emmanuel; Doerffer, Roland; Fomferra, Norman; Franz, Bryan; Grant, Mike;
2013-01-01
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semi-analytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long term, such an approach will further aid algorithm development for ocean-colour studies.
FUSION++: A New Data Assimilative Model for Electron Density Forecasting
NASA Astrophysics Data System (ADS)
Bust, G. S.; Comberiate, J.; Paxton, L. J.; Kelly, M.; Datta-Barua, S.
2014-12-01
There is a continuing need within the operational space weather community, both civilian and military, for accurate, robust data assimilative specifications and forecasts of the global electron density field, as well as derived RF application product specifications and forecasts obtained from the electron density field. The spatial scales of interest range from a hundred to a few thousand kilometers horizontally (synoptic large-scale structuring) and meters to kilometers (small-scale structuring that causes scintillations). RF space weather applications affected by electron density variability on these scales include navigation, communication and geo-location at RF frequencies ranging from 100's of Hz to GHz. For many of these applications, the necessary forecast time periods range from nowcasts to 1-3 hours. For more "mission planning" applications, necessary forecast times can range from hours to days. In this paper we present a new ionosphere-thermosphere (IT) specification and forecast model being developed at JHU/APL based upon the well-known data assimilation algorithms Ionospheric Data Assimilation Four Dimensional (IDA4D) and Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). This new forecast model, "Forward Update Simple IONosphere model Plus IDA4D Plus EMPIRE" (FUSION++), ingests data from observations related to electron density, winds, electric fields and neutral composition and provides improved specification and forecast of electron density. In addition, the new model provides improved specification of winds, electric fields and composition. We will present a short overview and derivation of the methodology behind FUSION++, some preliminary results using real observational sources, example derived RF application products such as HF bi-static propagation, and initial comparisons with independent data sources for validation.
Topography and geology site effects from the intensity prediction model (ShakeMap) for Austria
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Jia, Yan; Weginger, Stefan
2017-04-01
The seismicity in Austria can be categorized as moderate. Although the hazard seems rather low, earthquakes can cause great damage and losses, especially in densely populated and industrialized areas. It is well known that equations which predict intensity as a function of magnitude and distance, among other parameters, are a useful tool for hazard and risk assessment. This study therefore aims to determine an empirical model of the ground shaking intensities (ShakeMap) of a series of earthquakes that occurred in Austria between 1000 and 2014. Furthermore, the obtained empirical model will support further interpretation of both contemporary and historical earthquakes. A total of 285 events with epicenters located in Austria, and a total of 22,739 reported macroseismic data points from Austria and adjoining countries, were used. These events span the period 1000-2014 and have local magnitudes greater than 3. In the first stage of model development, the data were carefully selected; e.g., only intensities of III or greater were used. In a second stage, the data were fit to the selected empirical model. Finally, geology and topography corrections were obtained from the model residuals in order to derive intensity-based site amplification effects.
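A common way to build such an intensity model is least-squares fitting of an assumed attenuation form, with site effects then read from the residuals. The functional form below is a typical choice, not necessarily the one used in the study:

```python
import numpy as np

def fit_ipe(magnitude, hypo_dist_km, intensity):
    """Least-squares fit of a common intensity-prediction form,
        I = c0 + c1*M + c2*log10(R) + c3*R
    (an assumed form). Site terms would then be derived from the
    residuals of this fit.
    """
    A = np.column_stack([np.ones_like(magnitude), magnitude,
                         np.log10(hypo_dist_km), hypo_dist_km])
    coeffs, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    residuals = intensity - A @ coeffs
    return coeffs, residuals
```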
Landslide Hazard Probability Derived from Inherent and Dynamic Determinants
NASA Astrophysics Data System (ADS)
Strauch, Ronda; Istanbulluoglu, Erkan
2016-04-01
Landslide hazard research has typically been conducted independently from hydroclimate research. We unify these two lines of research to provide regional scale landslide hazard information for risk assessments and resource management decision-making. Our approach combines an empirical inherent landslide probability with a numerical dynamic probability, generated by combining routed recharge from the Variable Infiltration Capacity (VIC) macro-scale land surface hydrologic model with a finer resolution probabilistic slope stability model run in a Monte Carlo simulation. Landslide hazard mapping is advanced by adjusting the dynamic model of stability with an empirically-based scalar representing the inherent stability of the landscape, creating a probabilistic quantitative measure of geohazard prediction at a 30-m resolution. Climatology, soil, and topography control the dynamic nature of hillslope stability and the empirical information further improves the discriminating ability of the integrated model. This work will aid resource management decision-making in current and future landscape and climatic conditions. The approach is applied as a case study in North Cascade National Park Complex, a rugged terrain with nearly 2,700 m (9,000 ft) of vertical relief, covering 2757 sq km (1064 sq mi) in northern Washington State, U.S.A.
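The dynamic component can be illustrated with a Monte Carlo infinite-slope calculation, in the spirit of the probabilistic slope stability model described above; all soil parameter distributions are assumed for illustration:

```python
import numpy as np

def failure_probability(slope_deg, soil_depth_m, wet_frac, n=10_000, seed=0):
    """Monte Carlo probability of failure from an infinite-slope factor
    of safety, sampling cohesion and friction angle from assumed
    distributions (all values illustrative)."""
    rng = np.random.default_rng(seed)
    g, rho_s, rho_w = 9.81, 1800.0, 1000.0          # gravity, densities
    c = rng.uniform(2e3, 10e3, n)                   # cohesion, Pa
    phi = np.radians(rng.uniform(28.0, 40.0, n))    # friction angle
    th = np.radians(slope_deg)
    z, m = soil_depth_m, wet_frac                   # m = saturated fraction
    fs = (c + (rho_s - m * rho_w) * g * z * np.cos(th) ** 2 * np.tan(phi)) \
         / (rho_s * g * z * np.sin(th) * np.cos(th))
    return np.mean(fs < 1.0)                        # P(factor of safety < 1)

print(failure_probability(slope_deg=38.0, soil_depth_m=1.5, wet_frac=0.8))
```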
NASA Astrophysics Data System (ADS)
Mejnertsen, L.; Eastwood, J. P.; Hietala, H.; Schwartz, S. J.; Chittenden, J. P.
2018-01-01
Empirical models of the Earth's bow shock are often used to place in situ measurements in context and to understand the global behavior of the foreshock/bow shock system. They are derived statistically from spacecraft bow shock crossings and typically treat the shock surface as a conic section parameterized according to a uniform solar wind ram pressure, although more complex models exist. Here a global magnetohydrodynamic simulation is used to analyze the variability of the Earth's bow shock under real solar wind conditions. The shape and location of the bow shock is found as a function of time, and this is used to calculate the shock velocity over the shock surface. The results are compared to existing empirical models. Good agreement is found in the variability of the subsolar shock location. However, empirical models fail to reproduce the two-dimensional shape of the shock in the simulation. This is because significant solar wind variability occurs on timescales less than the transit time of a single solar wind phase front over the curved shock surface. Empirical models must therefore be used with care when interpreting spacecraft data, especially when observations are made far from the Sun-Earth line. Further analysis reveals a bias to higher shock speeds when measured by virtual spacecraft. This is attributed to the fact that the spacecraft only observes the shock when it is in motion. This must be accounted for when studying bow shock motion and variability with spacecraft data.
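Empirical bow shock models of the kind being tested are typically conic sections with a ram-pressure scaling; a sketch with illustrative constants, not those of any specific published model:

```python
import numpy as np

def bow_shock_standoff(n_cm3, v_kms, l0_re=29.0, eps=1.16, p_ref_npa=2.0):
    """Subsolar shock distance from a conic model r = L / (1 + eps*cos(theta)),
    with the semi-latus rectum L scaled by solar wind ram pressure to the
    -1/6 power (a scaling typical of empirical models; constants here are
    illustrative placeholders)."""
    p_ram = 1.6726e-6 * n_cm3 * v_kms ** 2          # ram pressure, nPa
    l_scaled = l0_re * (p_ram / p_ref_npa) ** (-1.0 / 6.0)
    return l_scaled / (1.0 + eps)                   # r at theta = 0, in R_E

print(f"standoff = {bow_shock_standoff(5.0, 400.0):.1f} R_E")
```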
The rate of bubble growth in a superheated liquid in pool boiling
NASA Astrophysics Data System (ADS)
Abdollahi, Mohammad Reza; Jafarian, Mehdi; Jamialahmadi, Mohammad
2017-12-01
A semi-empirical model for the estimation of the rate of bubble growth in nucleate pool boiling is presented, based on a new equation for the temperature history of the bubble in the bulk liquid. The conservation equations of energy, mass and momentum are first derived and solved analytically. This analytical model predicts that the bubble radius grows as √t · erf(N√t), whereas previous studies have mainly correlated the growth rate with √t alone. Next, the analytical solutions are used to develop a new semi-empirical equation: the analytical solution is non-dimensionalised, and experimental data available in the literature are used to tune the dimensionless coefficients that appear in the dimensionless equation. Finally, the reliability of the proposed semi-empirical model is assessed by comparing its predictions with experimental data from the literature that were not used in tuning the dimensionless parameters. Comparisons with other models proposed in the literature are also performed. These comparisons show that the present model yields more accurate predictions than previously proposed models, with deviations of less than 10% over a wide range of operating conditions.
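The difference between the classical and the derived growth law is easy to visualize numerically; the growth constants below are illustrative:

```python
import numpy as np
from scipy.special import erf

t = np.linspace(1e-5, 0.05, 200)     # time, s
C, N = 1.2e-2, 15.0                  # illustrative growth constants

r_classic = C * np.sqrt(t)                     # classical sqrt(t) growth
r_new = C * np.sqrt(t) * erf(N * np.sqrt(t))   # form derived in the paper

# The erf factor suppresses growth at early times and tends to 1 later,
# so the two laws converge once N*sqrt(t) >> 1.
print(r_classic[-1], r_new[-1])
```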
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
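A minimal sketch of the underlying Lotka-Volterra competition dynamics; the rates, capacities, and competition coefficients below are placeholders for the parameters the paper derives from census and survey data:

```python
import numpy as np
from scipy.integrate import solve_ivp

def competition(t, x, r1, r2, K1, K2, a12, a21):
    """Lotka-Volterra competition; x holds the speaker populations of
    two competing languages (normalized units)."""
    x1, x2 = x
    return [r1 * x1 * (1.0 - (x1 + a12 * x2) / K1),
            r2 * x2 * (1.0 - (x2 + a21 * x1) / K2)]

sol = solve_ivp(competition, (0.0, 200.0), [0.6, 0.4],
                args=(0.05, 0.04, 1.0, 1.0, 1.1, 0.9))
print(sol.y[:, -1])   # long-run speaker populations
```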
A reduced-order model from high-dimensional frictional hysteresis
Biswas, Saurabh; Chatterjee, Anindya
2014-01-01
Hysteresis in material behaviour involves both signum nonlinearities and high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. To develop a reduced-order model from this numerical solution, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522
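The SVD-based basis selection described in the abstract above can be sketched on a placeholder snapshot family (the snapshots here are synthetic, not the frictional system's solutions):

```python
import numpy as np

# Snapshot-based reduction: collect full-model states as columns, take an
# SVD, and keep the dominant left singular vectors as a reduced basis.
grid = np.linspace(-3.0, 3.0, 500)
params = np.linspace(0.5, 2.0, 200)
snapshots = np.array([np.tanh(a * grid) for a in params]).T  # placeholder

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% of energy
basis = U[:, :k]

# A full state x is approximated in reduced coordinates q = basis.T @ x.
x = snapshots[:, 100]
err = np.linalg.norm(x - basis @ (basis.T @ x)) / np.linalg.norm(x)
print(f"{k} modes, relative reconstruction error {err:.2e}")
```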
NASA Astrophysics Data System (ADS)
Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.
2018-06-01
Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
De Vries, Rowen J; Marsh, Steven
2015-11-08
Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as for lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2-14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower energy region. A new equation was derived which enables estimation of the electron backscatter factor at any depth upstream of the interface for the local treatment machines. The derived equation agreed to within 1.5% with the MC-simulated electron backscatter at the lead interface and at upstream positions. The equation was verified by comparison with measurements of the electron backscatter factor using Gafchromic EBT2 film, which gave a mean measured-to-predicted electron backscatter ratio of 0.997 ± 0.022 (1σ). The new empirical equation can accurately estimate the electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs.
Universality of market superstatistics
NASA Astrophysics Data System (ADS)
Denys, Mateusz; Gubiec, Tomasz; Kutner, Ryszard; Jagielski, Maciej; Stanley, H. Eugene
2016-10-01
We use a key concept of the continuous-time random walk formalism, i.e., continuous and fluctuating interevent times in which mutual dependence is taken into account, to model market fluctuation data when traders experience excessive (or superthreshold) losses or excessive (or superthreshold) profits. We analytically derive a class of "superstatistics" that accurately model empirical market activity data supplied by Bogachev, Ludescher, Tsallis, and Bunde that exhibit transition thresholds. We measure the interevent times between excessive losses and excessive profits and use the mean interevent discrete (or step) time as a control variable to derive a universal description of empirical data collapse. Our dominant superstatistic value is a power-law corrected by the lower incomplete gamma function, which asymptotically tends toward robustness but initially gives an exponential. We find that the scaling shape exponent that drives our superstatistics subordinates itself and a "superscaling" configuration emerges. Thanks to the Weibull copula function, our approach reproduces the empirically proven dependence between successive interevent times. We also use the approach to calculate a dynamic risk function and hence the dynamic VaR, which is significant in financial risk analysis. Our results indicate that there is a functional (but not literal) balance between excessive profits and excessive losses that can be described using the same body of superstatistics but different calibration values and driving parameters. We also extend our original approach to cover empirical seismic activity data (e.g., given by Corral), the interevent times of which range from minutes to years. Superpositioned superstatistics is another class of superstatistics that protects power-law behavior both for short- and long-time behaviors. These behaviors describe well the collapse of seismic activity data and capture so-called volatility clustering phenomena.
Thermospheric mass density model error variance as a function of time scale
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
NASA Astrophysics Data System (ADS)
Maldonado, Sergio; Borthwick, Alistair G. L.
2018-02-01
We derive a two-layer depth-averaged model of sediment transport and morphological evolution for application to bedload-dominated problems. The near-bed transport region is represented by the lower (bedload) layer which has an arbitrarily constant, vanishing thickness (of approx. 10 times the sediment particle diameter), and whose average sediment concentration is free to vary. Sediment is allowed to enter the upper layer, and hence the total load may also be simulated, provided that concentrations of suspended sediment remain low. The model conforms with established theories of bedload, and is validated satisfactorily against empirical expressions for sediment transport rates and the morphodynamic experiment of a migrating mining pit by Lee et al. (1993 J. Hydraul. Eng. 119, 64-80 (doi:10.1061/(ASCE)0733-9429(1993)119:1(64))). Investigation into the effect of a local bed gradient on bedload leads to derivation of an analytical, physically meaningful expression for morphological diffusion induced by a non-zero local bed slope. Incorporation of the proposed morphological diffusion into a conventional morphodynamic model (defined as a coupling between the shallow water equations, Exner equation and an empirical formula for bedload) improves model predictions when applied to the evolution of a mining pit, without the need either to resort to special numerical treatment of the equations or to use additional tuning parameters.
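The slope-diffusion idea amounts to augmenting a conventional Exner update with a Laplacian term. A one-dimensional sketch, where the coefficient D is a tunable stand-in for the paper's analytically derived morphological diffusion:

```python
import numpy as np

def exner_step(z, qb, dt, dx, porosity=0.4, diff=0.0):
    """One explicit step of the Exner equation with an optional
    slope-induced morphological diffusion term:
        dz/dt = -1/(1 - p) * dqb/dx + D * d2z/dx2
    z: bed elevation, qb: bedload flux, both 1-D arrays on a uniform grid.
    """
    dqb_dx = np.gradient(qb, dx)
    d2z_dx2 = np.gradient(np.gradient(z, dx), dx)
    return z + dt * (-dqb_dx / (1.0 - porosity) + diff * d2z_dx2)
```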
NASA Astrophysics Data System (ADS)
Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg
2010-06-01
Recently initiated observation networks in the Cordillera Blanca (Peru) provide temporally high-resolution, yet short-term, atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis data to air temperature and specific humidity, measured at the tropical glacier Artesonraju (northern Cordillera Blanca). The ESD modeling procedure includes combined empirical orthogonal function and multiple regression analyses and a double cross-validation scheme for model evaluation. Apart from the selection of predictor fields, the modeling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice using both single-field and mixed-field predictors. Statistical transfer functions are derived individually for different months and times of day. The forecast skill largely depends on month and time of day, ranging from 0 to 0.8. The mixed-field predictors perform better than the single-field predictors. The ESD model shows added value, at all time scales, against simpler reference models (e.g., the direct use of reanalysis grid point values). The ESD model forecast 1960-2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation but is sensitive to the chosen predictor type.
Levine, Adam C; Glavis-Bloom, Justin; Modi, Payal; Nasrin, Sabiha; Rege, Soham; Chu, Chieh; Schmid, Christopher H; Alam, Nur H
2015-08-18
Diarrhea remains one of the most common and most deadly conditions affecting children worldwide. Accurately assessing dehydration status is critical to determining treatment course, yet no clinical diagnostic models for dehydration have been empirically derived and validated for use in resource-limited settings. In the Dehydration: Assessing Kids Accurately (DHAKA) prospective cohort study, a random sample of children under 5 with acute diarrhea was enrolled between February and June 2014 in Bangladesh. Local nurses assessed children for clinical signs of dehydration on arrival, and then serial weights were obtained as subjects were rehydrated. For each child, the percent weight change with rehydration was used to classify subjects with severe dehydration (>9% weight change), some dehydration (3-9%), or no dehydration (<3%). Clinical variables were then entered into logistic regression and recursive partitioning models to develop the DHAKA Dehydration Score and DHAKA Dehydration Tree, respectively. Models were assessed for their accuracy using the area under their receiver operating characteristic curve (AUC) and for their reliability through repeat clinical exams. Bootstrapping was used to internally validate the models. A total of 850 children were enrolled, with 771 included in the final analysis. Of the 771 children included in the analysis, 11% were classified with severe dehydration, 45% with some dehydration, and 44% with no dehydration. Both the DHAKA Dehydration Score and DHAKA Dehydration Tree had significant AUCs of 0.79 (95% CI = 0.74, 0.84) and 0.76 (95% CI = 0.71, 0.80), respectively, for the diagnosis of severe dehydration. Additionally, the DHAKA Dehydration Score and DHAKA Dehydration Tree had significant positive likelihood ratios of 2.0 (95% CI = 1.8, 2.3) and 2.5 (95% CI = 2.1, 2.8), respectively, and significant negative likelihood ratios of 0.23 (95% CI = 0.13, 0.40) and 0.28 (95% CI = 0.18, 0.44), respectively, for the diagnosis of severe dehydration. Both models demonstrated 90% agreement between independent raters and good reproducibility using bootstrapping. This study is the first to empirically derive and internally validate accurate and reliable clinical diagnostic models for dehydration in a resource-limited setting. After external validation, frontline providers may use these new tools to better manage acute diarrhea in children. © Levine et al.
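The derivation/validation pipeline (logistic model, AUC, bootstrap) can be sketched with standard tools on synthetic data; nothing below uses the actual DHAKA variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

# Fit a logistic model for severe dehydration from clinical-sign scores,
# score it with AUC, and bootstrap for internal validation (synthetic data).
rng = np.random.default_rng(0)
X = rng.normal(size=(771, 4))                  # illustrative sign scores
y = (X @ np.array([1.0, 0.8, 0.5, 0.3]) + rng.normal(0, 1, 771) > 2.2)

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

boot_aucs = []
for _ in range(200):
    Xb, yb = resample(X, y)                    # bootstrap resample
    m = LogisticRegression().fit(Xb, yb)
    boot_aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUC {auc:.2f} (bootstrap 95% CI {lo:.2f}-{hi:.2f})")
```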
Semi-empirical proton binding constants for natural organic matter
NASA Astrophysics Data System (ADS)
Matynia, Anthony; Lenoir, Thomas; Causse, Benjamin; Spadini, Lorenzo; Jacquet, Thierry; Manceau, Alain
2010-03-01
Average proton binding constants (KH,i) for structure models of humic (HA) and fulvic (FA) acids were estimated semi-empirically by breaking down the macromolecules into reactive structural units (RSUs), and calculating KH,i values of the RSUs using linear free energy relationships (LFER) of Hammett. Predicted log KH,COOH and log KH,Ph-OH are 3.73 ± 0.13 and 9.83 ± 0.23 for HA, and 3.80 ± 0.20 and 9.87 ± 0.31 for FA. The predicted constants for phenolic-type sites (Ph-OH) are generally higher than those derived from potentiometric titrations, but the difference may not be significant in view of the considerable uncertainty of the acidity constants determined from acid-base measurements at high pH. The predicted constants for carboxylic-type sites agree well with titration data analyzed with Model VI (4.10 ± 0.16 for HA, 3.20 ± 0.13 for FA; Tipping, 1998), the Impermeable Sphere model (3.50-4.50 for HA; Avena et al., 1999), and the Stockholm Humic Model (4.10 ± 0.20 for HA, 3.50 ± 0.40 for FA; Gustafsson, 2001), but differ by about one log unit from those obtained by Milne et al. (2001) with the NICA-Donnan model (3.09 ± 0.51 for HA, 2.65 ± 0.43 for FA), and used to derive recommended generic values. To clarify this ambiguity, 10 high-quality titration data sets from Milne et al. (2001) were re-analyzed with the new predicted equilibrium constants. The data are described equally well with the previous and new sets of values (R2 ≥ 0.98), not necessarily because the NICA-Donnan model is overparametrized, but because titration lacks the sensitivity needed to quantify the full binding properties of humic substances. Correlations between NICA-Donnan parameters are discussed, but general progress is impeded by the unknown number of independent parameters that can be varied during regression of a model fit to titration data. The high consistency between predicted and experimental KH,COOH values, excluding those of Milne et al. (2001), supports the proposed semi-empirical structural approach and its usefulness for assessing the plausibility of proton stability constants derived from simulations of titration data.
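For readers unfamiliar with Hammett-type LFERs, a minimal sketch of the generic form follows. Because KH,i is a proton binding (association) constant, i.e. the inverse of an acidity constant, electron-withdrawing substituents lower log KH,i, hence the minus sign; the numerical values below are illustrative placeholders, not the RSU parameters derived in the paper:

```python
def hammett_log_k(log_k0: float, rho: float, sigmas: list[float]) -> float:
    """Hammett LFER for a proton *binding* constant: since K_H = 1/Ka and
    log(Ka/Ka0) = rho * sum(sigma_i), we get log K_H = log K_H0 - rho * sum(sigma_i).
    sigma_i are substituent constants, rho the reaction constant."""
    return log_k0 - rho * sum(sigmas)

# Illustrative: a benzoic-acid-like site with two hypothetical substituents
print(hammett_log_k(log_k0=4.20, rho=1.0, sigmas=[0.23, 0.12]))  # ~3.85
```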
NASA Astrophysics Data System (ADS)
Varotsos, C. A.; Efstathiou, M. N.
2018-03-01
In this paper we investigate the evolution of the energy emitted by CO2 and NO from the Earth's thermosphere on a global scale using both observational and empirically derived data. First, we analyze the daily power observations of CO2 and NO received from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) equipment on the NASA Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite for the entire period 2002-2016. We then perform the same analysis on the empirical daily power emitted by CO2 and NO that was derived recently from the infrared energy budget of the thermosphere during 1947-2016. The tool used for the analysis of the observational and empirical datasets is detrended fluctuation analysis, applied to investigate whether the power emitted by CO2 and by NO from the thermosphere exhibits power-law behavior. The results obtained from both observational and empirical data do not support the establishment of power-law behavior. This conclusion reveals that the empirically derived data are characterized by the same intrinsic properties as the observational ones, thus supporting their reliability.
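Detrended fluctuation analysis, the tool named above, is straightforward to sketch. Below is a generic first-order DFA on a synthetic series; the scale choices are illustrative and none of the SABER-specific processing is reproduced:

```python
import numpy as np

def dfa(signal, scales, order=1):
    """Detrended fluctuation analysis: returns the fluctuation function F(n)
    for each window size n. A power law F(n) ~ n^alpha (a straight line in
    log-log space) indicates scaling behaviour."""
    profile = np.cumsum(signal - np.mean(signal))  # integrated series
    fluctuations = []
    for n in scales:
        n_windows = len(profile) // n
        rms = []
        for w in range(n_windows):
            seg = profile[w * n:(w + 1) * n]
            x = np.arange(n)
            trend = np.polyval(np.polyfit(x, seg, order), x)  # local fit
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

# Example: white noise should give a scaling exponent alpha ~ 0.5
sig = np.random.randn(4096)
scales = np.array([16, 32, 64, 128, 256])
f = dfa(sig, scales)
alpha = np.polyfit(np.log(scales), np.log(f), 1)[0]
print(f"scaling exponent alpha ~ {alpha:.2f}")
```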
Advancing Empirical Scholarship to Further Develop Evaluation Theory and Practice
ERIC Educational Resources Information Center
Christie, Christina A.
2011-01-01
Good theory development is grounded in empirical inquiry. In the context of educational evaluation, the development of empirically grounded theory has important benefits for the field and the practitioner. In particular, a shift to empirically derived theory will assist in advancing more systematic and contextually relevant evaluation practice, as…
NASA Astrophysics Data System (ADS)
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to the numerous planning and executive challenges, underground excavation in urban areas is always followed by certain destructive effects, especially on the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched the reality, and the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
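The model-uncertainty comparison described above reduces to relative errors between predicted and observed maximum settlements. A minimal sketch, in which the measured value is a hypothetical placeholder rather than the Qom instrumentation datum:

```python
def relative_error(predicted_cm: float, measured_cm: float) -> float:
    """Relative error of a model's predicted maximum settlement against
    the instrumentation value, as used to characterise model uncertainty."""
    return abs(predicted_cm - measured_cm) / measured_cm * 100.0

# Predicted maxima from the abstract; the measured value below is a
# hypothetical placeholder, not the actual Qom instrumentation datum.
measured = 1.58  # hypothetical, cm
for name, pred in [("empirical (Peck)", 1.86),
                   ("analytical (Loganathan-Poulos)", 2.02),
                   ("numerical (FDM)", 1.52)]:
    print(f"{name}: {relative_error(pred, measured):.1f}% relative error")
```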
Treatment of childhood traumatic grief.
Cohen, Judith A; Mannarino, Anthony P
2004-12-01
Childhood traumatic grief (CTG) is a condition in which trauma symptoms impinge on children's ability to negotiate the normal grieving process. Clinical characteristics of CTG and their implications for treatment are discussed, and data from a small number of open-treatment studies of traumatically bereaved children are reviewed. An empirically derived treatment model for CTG is described; this model addresses both trauma and grief symptoms and includes a parental treatment component. Future research directions are also addressed.
What Quasars Really Look Like: Unification of the Emission and Absorption Line Regions
NASA Technical Reports Server (NTRS)
Elvis, Martin
2000-01-01
We propose a simple unifying structure for the inner regions of quasars and AGN. This empirically derived model links together the broad absorption lines (BALs), the narrow UV/X-ray ionized absorbers, the BELR, and the Compton scattering/fluorescing regions into a single structure. The model also suggests an alternative origin for the large-scale bi-conical outflows. Some other potential implications of this structure are discussed.
Spatio-temporal modelling of rainfall in the Murray-Darling Basin
NASA Astrophysics Data System (ADS)
Nowak, Gen; Welsh, A. H.; O'Neill, T. J.; Feng, Lingbing
2018-02-01
The Murray-Darling Basin (MDB) is a large geographical region in southeastern Australia that contains many rivers and creeks, including Australia's three longest rivers, the Murray, the Murrumbidgee and the Darling. Understanding rainfall patterns in the MDB is very important due to the significant impact major events such as droughts and floods have on agricultural and resource productivity. We propose a model for monthly rainfall data obtained from stations in the MDB that produces predictions in both the spatial and temporal dimensions. The model is a hierarchical spatio-temporal model fitted to geographical data that utilises both deterministic and data-derived components. Specifically, rainfall data at a given location are modelled as a linear combination of these deterministic and data-derived components. A key advantage of the model is that it is fitted in a step-by-step fashion, enabling appropriate empirical choices to be made at each step.
NASA Astrophysics Data System (ADS)
Yang, Xiaochen; Zhang, Qinghe; Hao, Linnan
2015-03-01
A water-fluid mud coupling model is developed based on the unstructured grid finite volume coastal ocean model (FVCOM) to investigate the fluid mud motion. The hydrodynamics and sediment transport of the overlying water column are solved using the original three-dimensional ocean model. A horizontal two-dimensional fluid mud model is integrated into the FVCOM model to simulate the underlying fluid mud flow. The fluid mud interacts with the water column through the sediment flux, current, and shear stress. The friction factor between the fluid mud and the bed, which is traditionally determined empirically, is derived with the assumption that the vertical distribution of shear stress below the yield surface of fluid mud is identical to that of uniform laminar flow of Newtonian fluid in the open channel. The model is validated by experimental data and reasonable agreement is found. Compared with numerical cases with fixed friction factors, the results simulated with the derived friction factor exhibit the best agreement with the experiment, which demonstrates the necessity of the derivation of the friction factor.
Seasonal forecast of St. Louis encephalitis virus transmission, Florida.
Shaman, Jeffrey; Day, Jonathan F; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark
2004-05-01
Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empiric relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill-verification analyses may be applied to test the predictability of an empiric disease forecast model.
Seasonal Forecast of St. Louis Encephalitis Virus Transmission, Florida
Day, Jonathan F.; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark
2004-01-01
Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill verification analyses may be applied to test the predictability of an empirical disease forecast model. PMID:15200812
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wosnik, Martin; Bachant, Pete; Neary, Vincent Sinclair
CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical axis and horizontal axis wind turbines, and it has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines has yet to be tested. The present study addresses this problem by comparing the predicted performance curves derived from CACTUS simulations of the U.S. Department of Energy's 1:6 scale reference model crossflow turbine to those derived by experimental measurements in a tow tank using the same model turbine at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns on its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause for poor model prediction. A comparison of several different NACA 0021 foil data sources, derived using both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are, therefore, advised to limit its application to higher tip speed ratios (lower AoA), and to carefully verify the reliability and accuracy of their foil data. Accurate empirical data on the aerodynamic characteristics of the foil is the greatest limitation to predicting performance for crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.
Suarez, M.B.; Gonzalez, Luis A.; Ludvigson, Greg A.
2011-01-01
This study aims to investigate the global hydrologic cycle during the mid-Cretaceous greenhouse by utilizing the oxygen isotopic composition of pedogenic carbonates (calcite and siderite) as proxies for the oxygen isotopic composition of precipitation. The data set builds on the Aptian-Albian sphaerosiderite δ18O data set presented by Ufnar et al. (2002) by incorporating additional low latitude data including pedogenic and early meteoric diagenetic calcite δ18O. Ufnar et al. (2002) used the proxy data derived from the North American Cretaceous Western Interior Basin (KWIB) in a mass balance model to estimate precipitation-evaporation fluxes. We have revised this mass balance model to handle sphaerosiderite and calcite proxies, and to account for longitudinal travel by tropical air masses. We use empirical and general circulation model (GCM) temperature gradients for the mid-Cretaceous, and the empirically derived δ18O composition of groundwater as constraints in our mass balance model. Precipitation flux, evaporation flux, relative humidity, seawater composition, and continental feedback are adjusted to generate model calculated groundwater δ18O compositions (proxy for precipitation δ18O) that match the empirically-derived groundwater δ18O compositions to within ±0.5‰. The model is calibrated against modern precipitation data sets. Four different Cretaceous temperature estimates were used: the leaf physiognomy estimates of Wolfe and Upchurch (1987) and Spicer and Corfield (1992), the coolest and warmest Cretaceous estimates compiled by Barron (1983) and model outputs from the GENESIS-MOM GCM by Zhou et al. (2008). Precipitation and evaporation fluxes for all the Cretaceous temperature gradients utilized in the model are greater than modern precipitation and evaporation fluxes. Balancing the model also requires relative humidity in the subtropical dry belt to be significantly reduced. As expected, calculated precipitation rates are all greater than modern precipitation rates. Calculated global average precipitation rates range from 371 mm/year to 1196 mm/year greater than modern precipitation rates. Model results support the hypothesis that increased rainout produces δ18O-depleted precipitation. Sensitivity testing of the model indicates that the amount of water vapor in the air mass, and its origin and pathway, significantly affect the oxygen isotopic composition of precipitation. Precipitation δ18O is also sensitive to seawater δ18O, and enriched tropical seawater was necessary to simulate proxy data (consistent with fossil and geologic evidence for a warmer and evaporatively enriched Tethys). Improved constraints in variables such as seawater δ18O can help improve boundary conditions for mid-Cretaceous climate simulations. © 2011 Elsevier B.V.
Kabore, Achille; Biritwum, Nana-Kwadwo; Downs, Philip W.; Soares Magalhaes, Ricardo J.; Zhang, Yaobi; Ottesen, Eric A.
2013-01-01
Background Mapping the distribution of schistosomiasis is essential to determine where control programs should operate, but because it is impractical to assess infection prevalence in every potentially endemic community, model-based geostatistics (MBG) is increasingly being used to predict prevalence and determine intervention strategies. Methodology/Principal Findings To assess the accuracy of MBG predictions for Schistosoma haematobium infection in Ghana, school surveys were evaluated at 79 sites to yield empiric prevalence values that could be compared with values derived from recently published MBG predictions. Based on these findings schools were categorized according to WHO guidelines so that practical implications of any differences could be determined. Using the mean predicted values alone, 21 of the 25 empirically determined ‘high-risk’ schools requiring yearly praziquantel would have been undertreated and almost 20% of the remaining schools would have been treated despite empirically-determined absence of infection – translating into 28% of the children in the 79 schools being undertreated and 12% receiving treatment in the absence of any demonstrated need. Conclusions/Significance Using the current predictive map for Ghana as a spatial decision support tool by aggregating prevalence estimates to the district level was clearly not adequate for guiding the national program, but the alternative of assessing each school in potentially endemic areas of Ghana or elsewhere is not at all feasible; modelling must be a tool complementary to empiric assessments. Thus for practical usefulness, predictive risk mapping should not be thought of as a one-time exercise but must, as in the current study, be an iterative process that incorporates empiric testing and model refining to create updated versions that meet the needs of disease control operational managers. PMID:23505584
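The school categorisation step can be illustrated as below. The prevalence thresholds are assumed from commonly cited WHO guidelines for schistosomiasis control (≥50% high-risk with yearly praziquantel, 10-50% moderate-risk, <10% low-risk) and should be checked against the exact cut-offs used in the study:

```python
def who_risk_category(prevalence: float) -> str:
    """Categorise a school by S. haematobium prevalence. Thresholds follow
    commonly cited WHO guidelines (>=50% high-risk, yearly praziquantel;
    10-50% moderate-risk; <10% low-risk) -- treated here as an assumption,
    not a quotation of the study's exact cut-offs."""
    if prevalence >= 0.50:
        return "high-risk (yearly praziquantel)"
    elif prevalence >= 0.10:
        return "moderate-risk"
    return "low-risk"

# Compare a model-based prediction with an empiric survey value per school
for school, predicted, empiric in [("A", 0.32, 0.55), ("B", 0.05, 0.00)]:
    print(school, who_risk_category(predicted), "vs", who_risk_category(empiric))
```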
Prediction of Environmental Impact of High-Energy Materials with Atomistic Computer Simulations
2010-11-01
Other methods developed from a training set of compounds include Quantitative Structure-Activity Relationship (QSAR) and Quantitative Structure-Property Relationship (QSPR) models, used in contrast to boiling points and critical parameters derived from empirical correlations.
Use of empirically derived source-destination models to map regional conservation corridors
Samuel A. Cushman; Kevin S. McKelvey; Michael K. Schwartz
2008-01-01
The ability of populations to be connected across large landscapes via dispersal is critical to longterm viability for many species. One means to mitigate population isolation is the protection of movement corridors among habitat patches. Nevertheless, the utility of small, narrow, linear features as habitat corridors has been hotly debated. Here, we argue that...
ERIC Educational Resources Information Center
Hamlin, Bob; Ellinger, Andrea D.; Beattie, Rona S.
2004-01-01
The concept of managers assuming developmental roles such as coaches and learning facilitators has gained considerable attention in recent years as organizations seek to leverage learning by creating infrastructures that foster employee learning and development. Despite the increased focus on coaching, the literature base remains atheoretical.…
Comparing timber and lumber from plantation and natural stands of ponderosa pine
Eini C. Lowell; Christine L. Todoroki; Ed. Thomas
2009-01-01
Data derived from empirical studies, coupled with modeling and simulation techniques, were used to compare tree and product quality from two stands of small-diameter ponderosa pine trees growing in northern California: one plantation, the other natural. The plantation had no management following establishment, and the natural stand had no active management. Fifty trees...
The Massive Star Content of Circumnuclear Star Clusters in M83
NASA Astrophysics Data System (ADS)
Wofford, A.; Chandar, R.; Leitherer, C.
2011-06-01
The circumnuclear starburst of M83 (NGC 5236), the nearest such example (4.6 Mpc), constitutes an ideal site for studying the massive star IMF at high metallicity (12+log[O/H]=9.1±0.2, Bresolin & Kennicutt 2002). We analyzed archival HST/STIS FUV imaging and spectroscopy of 13 circumnuclear star clusters in M83. We compared the observed spectra with two types of single stellar population (SSP) models; semi-empirical models, which are based on an empirical library of Galactic O and B stars observed with IUE (Robert et al. 1993), and theoretical models, which are based on a new theoretical UV library of hot massive stars described in Leitherer et al. (2010) and computed with WM-Basic (Pauldrach et al. 2001). The models were generated with Starburst99 (Leitherer & Chen 2009). We derived the reddenings, the ages, and the masses of the clusters from model fits to the FUV spectroscopy, as well as from optical HST/WFC3 photometry.
ERIC Educational Resources Information Center
Coromaldi, Manuela; Zoli, Mariangela
2012-01-01
Theoretical and empirical studies have recently adopted a multidimensional concept of poverty. There is considerable debate about the most appropriate degree of multidimensionality to retain in the analysis. In this work we add to the received literature in two ways. First, we derive indicators of multiple deprivation by applying a particular…
Koopmeiners, Joseph S; Feng, Ziding
2011-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
Koopmeiners, Joseph S.; Feng, Ziding
2013-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves. PMID:24039313
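The (non-sequential) empirical ROC curve underlying these asymptotic results can be computed directly from case-control samples. A minimal sketch with synthetic normal biomarkers:

```python
import numpy as np

def empirical_roc(cases, controls):
    """Empirical ROC curve from case-control biomarker samples: for each
    threshold, TPR over cases and FPR over controls; AUC by trapezoid rule."""
    thresholds = np.sort(np.concatenate([cases, controls]))[::-1]
    tpr = np.array([(cases >= t).mean() for t in thresholds])
    fpr = np.array([(controls >= t).mean() for t in thresholds])
    auc = np.trapz(tpr, fpr)
    return fpr, tpr, auc

cases = np.random.normal(1.0, 1.0, 200)     # diseased group
controls = np.random.normal(0.0, 1.0, 200)  # non-diseased group
fpr, tpr, auc = empirical_roc(cases, controls)
print(f"empirical AUC ~ {auc:.2f}")  # ~0.76 for unit-separated normals
```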
A model of rotationally-sampled wind turbulence for predicting fatigue loads in wind turbines
NASA Technical Reports Server (NTRS)
Spera, David A.
1995-01-01
Empirical equations are presented with which to model rotationally-sampled (R-S) turbulence for input to structural-dynamic computer codes and the calculation of wind turbine fatigue loads. These equations are derived from R-S turbulence data which were measured at the vertical-plane array in Clayton, New Mexico. For validation, the equations are applied to the calculation of cyclic flapwise blade loads for the NASA/DOE Mod-2 2.5-MW experimental HAWTs (horizontal-axis wind turbines), and the results compared to measured cyclic loads. Good correlation is achieved, indicating that the R-S turbulence model developed in this study contains the characteristics of the wind which produce many of the fatigue loads sustained by wind turbines. Empirical factors are included which permit the prediction of load levels at specified percentiles of occurrence, which is required for the generation of fatigue load spectra and the prediction of the fatigue lifetime of structures.
Kanematsu, Nobuyuki
2009-03-07
Dose calculation for radiotherapy with protons and heavier ions deals with a large volume of path integrals involving a scattering power of body tissue. This work provides a simple model for such demanding applications. There is an approximate linearity between RMS end-point displacement and range of incident particles in water, empirically found in measurements and detailed calculations. This fact was translated into a simple linear formula, from which the scattering power that is only inversely proportional to the residual range was derived. The simplicity enabled the analytical formulation for ions stopping in water, which was designed to be equivalent with the extended Highland model and agreed with measurements within 2% or 0.02 cm in RMS displacement. The simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity.
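The linearity noted above follows directly from a scattering power inversely proportional to the residual range: with T(s) = k/(R - s), the end-point displacement variance is the integral of T(s)(R - s)^2 from 0 to R, which equals kR^2/2, so the RMS displacement grows linearly with range R. A numerical sketch, in which the constant k is an illustrative placeholder rather than the paper's fitted value:

```python
import numpy as np

def rms_endpoint_displacement(range_cm: float, k: float, n: int = 10000):
    """RMS lateral end-point displacement for a scattering power inversely
    proportional to residual range, T(s) = k / (R - s):
    sigma_y^2 = int_0^R T(s) * (R - s)^2 ds = k * R^2 / 2,
    i.e. sigma_y is linear in R. k is a hypothetical medium constant."""
    s = np.linspace(0.0, range_cm, n, endpoint=False)
    t = k / (range_cm - s)                 # scattering power
    integrand = t * (range_cm - s) ** 2    # finite everywhere: k * (R - s)
    return np.sqrt(np.trapz(integrand, s))

k = 2.0e-4  # hypothetical constant (rad^2 per unit length scaling)
for r in (10.0, 20.0, 30.0):  # ranges in water, cm
    print(r, rms_endpoint_displacement(r, k))  # doubles when R doubles
```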
Updates on Force Limiting Improvements
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Scharton, Terry
2013-01-01
The following conventional force limiting methods currently practiced in deriving force limiting specifications assume one-dimensional translational source and load apparent masses: the simple TDOF model; semi-empirical force limits; apparent mass methods; and the impedance method. Uncorrelated motion of the mounting points for components mounted on panels, and correlated but out-of-phase motions of the support structures, are important and should be considered in deriving force limiting specifications. In this presentation, "rock-n-roll" motions of components supported by panels are discussed, which lead to more realistic force limiting specifications.
Jet Aeroacoustics: Noise Generation Mechanism and Prediction
NASA Technical Reports Server (NTRS)
Tam, Christopher
1998-01-01
This report covers the third year research effort of the project. The research work focussed on the fine scale mixing noise of both subsonic and supersonic jets and the effects of nozzle geometry and tabs on subsonic jet noise. In publication 1, a new semi-empirical theory of jet mixing noise from fine scale turbulence is developed. By an analogy to gas kinetic theory, it is shown that the source of noise is related to the time fluctuations of the turbulence kinetic energy. On starting with the Reynolds Averaged Navier-Stokes equations, a formula for the radiated noise is derived. An empirical model of the space-time correlation function of the turbulence kinetic energy is adopted. The form of the model is in good agreement with the space-time two-point velocity correlation function measured by Davies and coworkers. The parameters of the correlation are related to the parameters of the k-epsilon turbulence model. Thus the theory is self-contained. Extensive comparisons between the noise spectra computed from the theory and experimental measurements have been carried out. The parameters include jet Mach number from 0.3 to 2.0 and temperature ratio from 1.0 to 4.8. Excellent agreements are found in the spectrum shape, noise intensity and directivity. It is envisaged that the theory would supersede all semi-empirical and totally empirical jet noise prediction methods in current use.
Sediment on Mars: settling faster, moving slower
NASA Astrophysics Data System (ADS)
Kuhn, N. J.
2013-12-01
Using empirical approaches developed on Earth to assess Martian hydrology based on conglomerates such as those found at Gale crater may deliver false results because Martian gravity potentially alters flow-sediment interaction compared to Earth. In this study, we report the results of our Mars Sedimentation Experiments (MarsSedEx I and II), which used settling tubes during reduced gravity flights in November 2012 (and scheduled for November 2013) on board Zero g's G-Force 1. The settling velocity data collected during the flights are compared to several models for terrestrial settling velocities. The results indicate that settling velocities on Mars are underestimated by up to 30 to 50%, depending on the selected model. As a consequence, transport distances of sediment particles increase by a similar proportion in a given flow. We suspect that the underestimation of settling velocity is caused by poor capture of flow hydraulics under reduced gravity. While MarsSedEx I (and II) results are only very preliminary, they indicate that applying empirically derived models for Earth to conglomerates such as those found at Gale crater to derive properties of surface runoff carries the risk of significantly misjudging flow depth and velocities. In the light of the potentially strong influence of topography on runoff generation on Mars, we may therefore end up looking for water in the wrong place.
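The first-order gravity dependence of settling can be illustrated with Stokes' law, though the abstract's point is precisely that such terrestrial formulations under-capture flow hydraulics at reduced gravity. A minimal sketch with illustrative grain properties:

```python
def stokes_settling_velocity(d, rho_s, g, rho_f=1000.0, mu=1.0e-3):
    """Stokes settling velocity w = (rho_s - rho_f) * g * d^2 / (18 * mu)
    for a small sphere in laminar flow -- a stand-in for the terrestrial
    empirical models discussed, valid only at low Reynolds number."""
    return (rho_s - rho_f) * g * d ** 2 / (18.0 * mu)

d = 100e-6        # 100-micron quartz grain, m
rho_s = 2650.0    # grain density, kg/m^3
w_earth = stokes_settling_velocity(d, rho_s, g=9.81)
w_mars = stokes_settling_velocity(d, rho_s, g=3.71)
print(w_earth, w_mars, w_mars / w_earth)  # Mars ~38% of the Earth value
```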
Pre-main-sequence isochrones - II. Revising star and planet formation time-scales
NASA Astrophysics Data System (ADS)
Bell, Cameron P. M.; Naylor, Tim; Mayne, N. J.; Jeffries, R. D.; Littlefair, S. P.
2013-09-01
We have derived ages for 13 young (<30 Myr) star-forming regions and find that they are up to a factor of 2 older than the ages typically adopted in the literature. This result has wide-ranging implications, including that circumstellar discs survive longer (≃ 10-12 Myr) and that the average Class I lifetime is greater (≃1 Myr) than currently believed. For each star-forming region, we derived two ages from colour-magnitude diagrams. First, we fitted models of the evolution between the zero-age main sequence and terminal-age main sequence to derive a homogeneous set of main-sequence ages, distances and reddenings with statistically meaningful uncertainties. Our second age for each star-forming region was derived by fitting pre-main-sequence stars to new semi-empirical model isochrones. For the first time (for a set of clusters younger than 50 Myr), we find broad agreement between these two ages, and since these are derived from two distinct mass regimes that rely on different aspects of stellar physics, it gives us confidence in the new age scale. This agreement is largely due to our adoption of empirical colour-Teff relations and bolometric corrections for pre-main-sequence stars cooler than 4000 K. The revised ages for the star-forming regions in our sample are: ˜2 Myr for NGC 6611 (Eagle Nebula; M 16), IC 5146 (Cocoon Nebula), NGC 6530 (Lagoon Nebula; M 8) and NGC 2244 (Rosette Nebula); ˜6 Myr for σ Ori, Cep OB3b and IC 348; ≃10 Myr for λ Ori (Collinder 69); ≃11 Myr for NGC 2169; ≃12 Myr for NGC 2362; ≃13 Myr for NGC 7160; ≃14 Myr for χ Per (NGC 884); and ≃20 Myr for NGC 1960 (M 36).
Revising Star and Planet Formation Timescales
NASA Astrophysics Data System (ADS)
Bell, Cameron P. M.; Naylor, Tim; Mayne, N. J.; Jeffries, R. D.; Littlefair, S. P.
2013-07-01
We have derived ages for 13 young (<30 Myr) star-forming regions and find that they are up to a factor of 2 older than the ages typically adopted in the literature. This result has wide-ranging implications, including that circumstellar discs survive longer (≃ 10-12 Myr) and that the average Class I lifetime is greater (≃1 Myr) than currently believed. For each star-forming region, we derived two ages from colour-magnitude diagrams. First, we fitted models of the evolution between the zero-age main sequence and terminal-age main sequence to derive a homogeneous set of main-sequence ages, distances and reddenings with statistically meaningful uncertainties. Our second age for each star-forming region was derived by fitting pre-main-sequence stars to new semi-empirical model isochrones. For the first time (for a set of clusters younger than 50 Myr), we find broad agreement between these two ages, and since these are derived from two distinct mass regimes that rely on different aspects of stellar physics, it gives us confidence in the new age scale. This agreement is largely due to our adoption of empirical colour-Teff relations and bolometric corrections for pre-main-sequence stars cooler than 4000 K. The revised ages for the star-forming regions in our sample are: 2 Myr for NGC 6611 (Eagle Nebula; M 16), IC 5146 (Cocoon Nebula), NGC 6530 (Lagoon Nebula; M 8) and NGC 2244 (Rosette Nebula); 6 Myr for σ Ori, Cep OB3b and IC 348; ≃10 Myr for λ Ori (Collinder 69); ≃11 Myr for NGC 2169; ≃12 Myr for NGC 2362; ≃13 Myr for NGC 7160; ≃14 Myr for χ Per (NGC 884); and ≃20 Myr for NGC 1960 (M 36).
NASA Astrophysics Data System (ADS)
Monteys, Xavier; Harris, Paul; Caloca, Silvia
2014-05-01
The coastal shallow water zone can be a challenging and expensive environment within which to acquire bathymetry and other oceanographic data using traditional survey methods. Dangers and limited swath coverage make some of these areas unfeasible to survey using ship-borne systems, and turbidity can preclude marine LIDAR. As a result, an extensive part of the coastline worldwide remains completely unmapped. Satellite EO multispectral data, after processing, allow timely, cost-efficient and quality-controlled information to be used for planning, monitoring, and regulating coastal environments. They have the potential to deliver repeated derivation of medium resolution bathymetry, coastal water properties and seafloor characteristics in shallow waters. Over the last 30 years, satellite passive imaging methods for bathymetry extraction, implementing analytical or empirical methods, have had limited success predicting water depths. Different wavelengths of solar light penetrate the water column to varying depths; they can provide acceptable results up to 20 m but become less accurate in deeper waters. The study area is located in the inner part of Dublin Bay, on the East coast of Ireland. The region investigated is a C-shaped inlet covering an area 10 km long and 5 km wide with water depths ranging from 0 to 10 m. The methodology employed in this research uses a ratio of reflectances from SPOT 5 satellite bands, differing from standard linear transform algorithms. High-accuracy water depths were derived using multibeam data. The final empirical model uses spatially weighted geographical tools to retrieve predicted depths. The results of this paper confirm that SPOT satellite scenes are suitable for predicting depths using empirical models in very shallow embayments. Spatial regression models show better adjustments in the predictions than non-spatial models. The spatial regression equation used provides realistic results down to 6 m below the water surface, with reliable and error-controlled depths. Bathymetric extraction approaches involving satellite imagery are regarded as a fast, successful and economically advantageous solution for automatic water depth calculation in shallow and complex environments.
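A band-ratio depth retrieval of the kind described can be sketched as follows, in the spirit of the widely used Stumpf-type ratio transform; the calibration data here are synthetic, and the paper's spatially weighted regression is not reproduced:

```python
import numpy as np

def fit_ratio_bathymetry(r_band_i, r_band_j, depths, n=1000.0):
    """Band-ratio depth model: z = m1 * ln(n * R_i) / ln(n * R_j) - m0,
    with (m0, m1) calibrated against high-accuracy depths (e.g. multibeam).
    A generic sketch, not the paper's spatially weighted regression."""
    x = np.log(n * r_band_i) / np.log(n * r_band_j)
    m1, intercept = np.polyfit(x, depths, 1)
    return m1, -intercept  # m0 = -intercept

def predict_depth(r_band_i, r_band_j, m0, m1, n=1000.0):
    x = np.log(n * r_band_i) / np.log(n * r_band_j)
    return m1 * x - m0

# Synthetic calibration: reflectance pairs and multibeam depths (illustrative)
ri = np.array([0.012, 0.010, 0.008, 0.006])
rj = np.array([0.020, 0.015, 0.010, 0.006])
z = np.array([1.0, 2.5, 4.0, 6.0])
m1, m0 = fit_ratio_bathymetry(ri, rj, z)
print(predict_depth(0.009, 0.012, m0, m1))  # predicted depth, m
```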
Increasing Functional Communication in Non-Speaking Preschool Children: Comparison of PECS and VOCA
ERIC Educational Resources Information Center
Bock, Stacey Jones; Stoner, Julia B.; Beck, Ann R.; Hanley, Laurie; Prochnow, Jessica
2005-01-01
For individuals who have complex communication needs and for the interventionists who work with them, the collection of empirically derived data that support the use of an intervention approach is critical. The purposes of this study were to continue building an empirically derived base of support for, and to compare the relative effectiveness of…
Model risk for European-style stock index options.
Gençay, Ramazan; Gibson, Rajna
2007-01-01
In empirical modeling, there have been two strands for pricing in the options literature, namely the parametric and nonparametric models. Often, the support for the nonparametric methods is based on a benchmark such as the Black-Scholes (BS) model with constant volatility. In this paper, we study the stochastic volatility (SV) and stochastic volatility random jump (SVJ) models as parametric benchmarks against feedforward neural network (FNN) models, a class of neural network models. Our choice for FNN models is due to their well-studied universal approximation properties of an unknown function and its partial derivatives. Since the partial derivatives of an option pricing formula are risk pricing tools, an accurate estimation of the unknown option pricing function is essential for pricing and hedging. Our findings indicate that FNN models offer themselves as robust option pricing tools, over their sophisticated parametric counterparts in predictive settings. There are two routes to explain the superiority of FNN models over the parametric models in forecast settings. These are nonnormality of return distributions and adaptive learning.
Shen, Kunling; Xiong, Tengbin; Tan, Seng Chuen; Wu, Jiuhong
2016-01-01
Influenza is a common viral respiratory infection that causes epidemics and pandemics in the human population. Oseltamivir is a neuraminidase inhibitor, a new class of antiviral therapy for influenza. Although its efficacy and safety have been established, there is uncertainty regarding whether influenza-like illness (ILI) in children is best managed by oseltamivir at the onset of illness, and its cost-effectiveness in children has not been studied in China. The aim was to evaluate the cost-effectiveness of post rapid influenza diagnostic test (RIDT) treatment with oseltamivir and empiric treatment with oseltamivir, compared with no antiviral therapy, against influenza for children with ILI. We developed a decision-analytic model based on previously published evidence to simulate and evaluate 1-year potential clinical and economic outcomes associated with three managing strategies for children presenting with symptoms of influenza. Model inputs were derived from the literature and expert opinion on clinical practice and research in China. Outcome measures included costs and quality-adjusted life years (QALYs). All the interventions were compared with incremental cost-effectiveness ratios (ICERs). In the base case analysis, empiric treatment with oseltamivir consistently produced the greatest gains in QALYs. When compared with no antiviral therapy, the empiric treatment with oseltamivir strategy is very cost-effective, with an ICER of RMB 4,438. When compared with post RIDT treatment with oseltamivir, the empiric treatment with oseltamivir strategy is dominant. Probabilistic sensitivity analysis projected that there is a 100% probability that empiric oseltamivir treatment would be considered a very cost-effective strategy compared to no antiviral therapy, according to the WHO recommendations for cost-effectiveness thresholds. The same was concluded with 99% probability for empiric oseltamivir treatment being a very cost-effective strategy compared to post RIDT treatment with oseltamivir. In the current Chinese health system setting, our modelling-based simulation analysis suggests empiric treatment with oseltamivir to be a cost-saving and very cost-effective strategy in managing children with ILI.
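The comparison metric used above, the incremental cost-effectiveness ratio, is direct to compute; a strategy that costs less and yields more QALYs than its comparator is "dominant". A minimal sketch with illustrative inputs, not the study's estimates:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained by
    the new strategy over the reference. A QALY gain at non-positive
    incremental cost means the new strategy dominates the reference."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_qaly > 0 and d_cost <= 0:
        return "dominant (cheaper and more effective)"
    return d_cost / d_qaly

# Illustrative inputs only -- not the study's cost and QALY estimates
print(icer(cost_new=1200.0, qaly_new=0.92, cost_ref=900.0, qaly_ref=0.85))
```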
Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, J.; Winkler, J.; Christensen, D.
2014-08-01
Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
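The least-squares step described, fitting an analytical solution to measured absorption curves, can be sketched generically. Here a single-exponential step response stands in for the EMPD analytical solution, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def absorption_step_response(t, m_total, tau):
    """Moisture absorbed after a step change in relative humidity,
    approximated as a single-exponential approach to equilibrium. This is
    a simplified stand-in for the analytical EMPD solution fitted in the
    study; m_total and tau stand for capacity and time constant."""
    return m_total * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 48.0, 49)                    # hours since RH step
measured = absorption_step_response(t, 0.8, 9.0)  # synthetic "data", kg
measured += np.random.normal(0.0, 0.01, t.size)   # measurement noise
(m_tot, tau), _ = curve_fit(absorption_step_response, t, measured,
                            p0=(1.0, 5.0))
print(f"capacity ~ {m_tot:.2f} kg, time constant ~ {tau:.1f} h")
```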
NASA Astrophysics Data System (ADS)
Venzmer, M. S.; Bothmer, V.
2018-03-01
Context. The Parker Solar Probe (PSP; formerly Solar Probe Plus) mission will be humanity's first in situ exploration of the solar corona, with closest perihelia at 9.86 solar radii (R⊙) distance to the Sun. It will help answer hitherto unresolved questions on the heating of the solar corona and the source and acceleration of the solar wind and solar energetic particles. The scope of this study is to model the solar-wind environment for PSP's unprecedented distances in its prime mission phase during the years 2018 to 2025. The study is performed within the Coronagraphic German And US SolarProbePlus Survey (CGAUSS), which is the German contribution to the PSP mission as part of the Wide-field Imager for Solar PRobe. Aim. We present an empirical solar-wind model for the inner heliosphere which is derived from OMNI and Helios data. The German-US space probes Helios 1 and Helios 2 flew in the 1970s and observed solar wind in the ecliptic within heliocentric distances of 0.29 au to 0.98 au. The OMNI database consists of multi-spacecraft intercalibrated in situ data obtained near 1 au over more than five solar cycles. The international sunspot number (SSN) and its predictions are used to derive dependencies of the major solar-wind parameters on solar activity and to forecast their properties for the PSP mission. Methods: The frequency distributions for the solar-wind key parameters, magnetic field strength, proton velocity, density, and temperature, are represented by lognormal functions. In addition, we consider the velocity distribution's bi-componental shape, consisting of a slower and a faster part. Functional relations to solar activity are compiled with use of the OMNI data by correlating and fitting the frequency distributions with the SSN. Further, based on the combined data set from both Helios probes, the parameters' frequency distributions are fitted with respect to solar distance to obtain power law dependencies. Thus an empirical solar-wind model for the inner heliosphere confined to the ecliptic region is derived, accounting for solar activity and for solar distance through adequate shifts of the lognormal distributions. Finally, the inclusion of SSN predictions and the extrapolation down to PSP's perihelion region enable us to estimate the solar-wind environment for PSP's planned trajectory during its mission duration. Results: The CGAUSS empirical solar-wind model for PSP yields dependencies on solar activity and solar distance for the solar-wind parameters' frequency distributions. The estimated solar-wind median values for PSP's first perihelion in 2018, at a solar distance of 0.16 au, are 87 nT, 340 km s-1, 214 cm-3, and 503 000 K. The estimates for PSP's first closest perihelion, occurring in 2024 at 0.046 au (9.86 R⊙), are 943 nT, 290 km s-1, 2951 cm-3, and 1 930 000 K. Since the modeled velocity and temperature values below approximately 20 R⊙ appear overestimated in comparison with existing observations, this suggests that PSP will directly measure solar-wind acceleration and heating processes below 20 R⊙, as planned.
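The model structure described, lognormal frequency distributions whose medians are shifted with distance by power laws, can be sketched as below. The exponent and log-width are illustrative placeholders, not the CGAUSS fit values:

```python
import numpy as np

def sample_parameter(median_1au, alpha, r_au, sigma_log, size=1):
    """Sample a solar-wind parameter whose frequency distribution is
    lognormal, with the median scaled to distance r by a power law
    median(r) = median_1au * r**(-alpha). alpha and sigma_log below are
    illustrative placeholders, not the CGAUSS fit values."""
    median_r = median_1au * r_au ** (-alpha)
    # median of a lognormal equals exp(mu), hence mu = log(median_r)
    return np.random.lognormal(np.log(median_r), sigma_log, size)

# Illustrative: field strength with median 5 nT at 1 au and B ~ r^-1.7
b_at_016au = sample_parameter(median_1au=5.0, alpha=1.7, r_au=0.16,
                              sigma_log=0.4, size=5)
print(b_at_016au)  # median sits near 5 * 0.16**-1.7 ~ 112 nT
```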
NASA Astrophysics Data System (ADS)
Kanki, R.; Uchiyama, Y.; Miyazaki, D.; Takano, A.; Miyazawa, Y.; Yamazaki, H.
2014-12-01
Mesoscale oceanic structure and variability are required to be reproduced as accurately as possible in realistic regional ocean modeling. Uchiyama et al. (2012) demonstrated with a submesoscale eddy-resolving JCOPE2-ROMS downscaling oceanic modeling system that the mesoscale reproducibility of the Kuroshio meandering along Japan is significantly improved by introducing a simple restoration to data, which we call "T-S nudging" (a.k.a. robust diagnosis), where the prognostic temperature and salinity fields are weakly nudged four-dimensionally towards the assimilative JCOPE2 reanalysis (Miyazawa et al., 2009). However, there is not always a reliable reanalysis for oceanic downscaling in an arbitrary region and at an arbitrary time, and therefore an alternative dataset should be prepared. Takano et al. (2009) proposed an empirical method to estimate mesoscale 3-D thermal structure from the near real-time AVISO altimetry data along with the ARGO float data, based on the two-layer model of Goni et al. (1996). In the present study, we consider the T-S data derived from this method as a candidate. We thus conduct a synoptic forward modeling of the Kuroshio using the JCOPE2-ROMS downscaling system to explore the potential utility of this empirical T-S dataset (hereinafter TUM-TS) by carrying out two runs with T-S nudging towards 1) the JCOPE2-TS and 2) TUM-TS fields. An example of the comparison between the two ROMS test runs is shown in the attached figure, showing the annually averaged surface EKE. Both TUM-TS and JCOPE2-TS are found to help reproduce the mesoscale variance of the Kuroshio and its extension as well as its mean paths, surface KE and EKE reasonably well. Therefore, the AVISO-ARGO derived empirical 3-D T-S estimation is potentially exploitable as the dataset for T-S nudging to reproduce mesoscale oceanic structure.
An interpersonal neurobiological-informed treatment model for childhood traumatic grief.
Crenshaw, David A
This article expands an earlier model of the tasks of grieving (1990, 1995, 2001) by building on science-based findings derived from research in attachment theory, neuroscience, interpersonal neurobiology, and childhood traumatic grief (CTG). The proposed treatment model is a prescriptive approach that spells out specific tasks to be undertaken by children suffering traumatic grief under the direction of a therapist who is trained in trauma-informed therapy approaches, and draws heavily on the empirically derived childhood traumatic grief treatment model developed by Cohen and Mannarino (2004; Cohen, Mannarino, & Deblinger, 2006). This model expands on their work by proposing specific tasks that are informed by attachment theory research and interpersonal neurobiological research (Schore, 2003a, 2003b; Siegel, 1999). Particular emphasis is placed on developing a coherent and meaningful narrative, since this has been found to be a crucial factor in recovery from trauma in attachment research (Siegel, 1999; Siegel & Hartzell, 2003).
2009-04-01
Shelf, and into the Gulf of Mexico, empirically derived chl a increases were observed in the Tortugas Gyre circulation feature and in adjacent waters. The hurricane interaction also influenced the Tortugas Gyre, a recognized circulation feature in the southern Gulf of Mexico induced by the flow of the
Incremental Lexical Learning in Speech Production: A Computational Model and Empirical Evaluation
ERIC Educational Resources Information Center
Oppenheim, Gary Michael
2011-01-01
Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have…
Guoyi Zhou; Ge Sun; Xu Wang; Chuanyan Zhou; Steven G. McNulty; James M. Vose; Devendra M. Amatya
2008-01-01
It is critical that evapotranspiration (ET) be quantified accurately so that scientists can evaluate the effects of land management and global change on water availability, streamflow, nutrient and sediment loading, and ecosystem productivity in watersheds. The objective of this study was to derive a new semi-empirical ET model using a dimension analysis method that...
F. Mauro; Vicente Monleon; H. Temesgen
2015-01-01
Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...
Verification of target motion effects on SAR imagery using the Gotcha GMTI challenge dataset
NASA Astrophysics Data System (ADS)
Hack, Dan E.; Saville, Michael A.
2010-04-01
This paper investigates the relationship between a ground moving target's kinematic state and its SAR image. While effects such as cross-range offset, defocus, and smearing appear well understood, their derivations in the literature typically employ simplifications of the radar/target geometry and assume point scattering targets. This study adopts a geometrical model for understanding target motion effects in SAR imagery, termed the target migration path, and focuses on experimental verification of predicted motion effects using both simulated and empirical datasets based on the Gotcha GMTI challenge dataset. Specifically, moving target imagery is generated from three data sources: first, simulated phase history for a moving point target; second, simulated phase history for a moving vehicle derived from a simulated Mazda MPV X-band signature; and third, empirical phase history from the Gotcha GMTI challenge dataset. Both simulated target trajectories match the truth GPS target position history from the Gotcha GMTI challenge dataset, allowing direct comparison between all three imagery sets and the predicted target migration path. This paper concludes with a discussion of the parallels between the target migration path and the measurement model within a Kalman filtering framework, followed by conclusions.
Semi-empirical models of the wind in cool supergiant stars
NASA Technical Reports Server (NTRS)
Kuin, N. P. M.; Ahmad, Imad A.
1988-01-01
A self-consistent semi-empirical model for the wind of the supergiant in zeta Aurigae type systems is proposed. The damping of the Alfven waves which are assumed to drive the wind is derived from the observed velocity profile. Solution of the ionization balance and energy equation gives the temperature structure for given stellar magnetic field and wave flux. Physically acceptable solutions of the temperature structure place limits on the stellar magnetic field. A crude formula for a critical mass loss rate is derived. For a mass loss rate below the critical value the wind cannot be cool. Comparison between the observed and the critical mass loss rate suggests that the proposed theory may provide an explanation for the coronal dividing line in the Hertzsprung-Russell diagram. The physical explanation may be that the atmosphere has a cool wind, unless it is physically impossible to have one. Stars which cannot have a cool wind release their nonthermal energy in an outer atmosphere at coronal temperatures. It is possible that in the absence of a substantial stellar wind the magnetic field has less incentive to extend radially outward, and coronal loop structures may become more dominant.
A better sequence-read simulator program for metagenomics.
Johnson, Stephen; Trost, Brett; Long, Jeffrey R; Pittet, Vanessa; Kusalik, Anthony
2014-01-01
There are many programs available for generating simulated whole-genome shotgun sequence reads. The data generated by many of these programs follow predefined models, which limits their use to the authors' original intentions. For example, many models assume that read lengths follow a uniform or normal distribution. Other programs generate models from actual sequencing data, but are limited to reads from single-genome studies. To our knowledge, there are no programs that allow a user to generate simulated data following non-parametric read-length distributions and quality profiles based on empirically-derived information from metagenomics sequencing data. We present BEAR (Better Emulation for Artificial Reads), a program that uses a machine-learning approach to generate reads with lengths and quality values that closely match empirically-derived distributions. BEAR can emulate reads from various sequencing platforms, including Illumina, 454, and Ion Torrent. BEAR requires minimal user input, as it automatically determines appropriate parameter settings from user-supplied data. BEAR also uses a unique method for deriving run-specific error rates, and extracts useful statistics from the metagenomic data itself, such as quality-error models. Many existing simulators are specific to a particular sequencing technology; however, BEAR is not restricted in this way. Because of its flexibility, BEAR is particularly useful for emulating the behaviour of technologies like Ion Torrent, for which no dedicated sequencing simulators are currently available. BEAR is also the first metagenomic sequencing simulator program that automates the process of generating abundances, which can be an arduous task. BEAR is useful for evaluating data processing tools in genomics. It has many advantages over existing comparable software, such as generating more realistic reads and being independent of sequencing technology, and has features particularly useful for metagenomics work.
NASA Astrophysics Data System (ADS)
Montes-Hugo, M.; Bouakba, H.; Arnone, R.
2014-06-01
The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (the NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical algorithm, QAA, and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May of 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations by High Pressure Liquid Chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., aph*(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased up to 29%.
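The chl retrieval from aph(443) uses the fixed optical cross section quoted in the abstract; a one-line sketch:

```python
def chl_from_aph443(aph443, a_star=0.056):
    """Convert phytoplankton absorption at 443 nm to chlorophyll
    concentration via the optical cross section a* = aph(443)/chl
    (SeaDAS default 0.056 m^2 mg^-1, as quoted in the abstract)."""
    return aph443 / a_star  # mg m^-3

print(chl_from_aph443(0.028))  # -> 0.5 mg m^-3
```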
NASA Astrophysics Data System (ADS)
Burnham, Christian J.; Futera, Zdenek; English, Niall J.
2018-03-01
The force-matching method has been applied to parameterise an empirical potential model for water-water and water-hydrogen intermolecular interactions for use in clathrate-hydrate simulations containing hydrogen guest molecules. The underlying reference data consisted of ab initio molecular dynamics (AIMD) simulations of clathrate hydrates with various occupations of hydrogen-molecule guests. It is shown that the resultant model is able to reproduce AIMD-derived free-energy curves for the movement of a tagged hydrogen molecule between the water cages that make up the clathrate, giving us confidence in the model. Furthermore, with the aid of an umbrella-sampling algorithm, we calculate the free-energy barrier for a tagged molecule to move between cages in the force-matched model. The barrier heights are reasonably large, on the order of 30 kJ/mol, and are consistent with our previous studies with empirical models [C. J. Burnham and N. J. English, J. Phys. Chem. C 120, 16561 (2016) and C. J. Burnham et al., Phys. Chem. Chem. Phys. 19, 717 (2017)]. Our results are in opposition to the literature, which claims that this system may have very low barrier heights. We also compare results to those using the more ad hoc empirical model of Alavi et al. [J. Chem. Phys. 123, 024507 (2005)] and find that this model does very well when judged against the force-matched and ab initio simulation data.
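A minimal statement of the force-matching objective (standard form, notation ours; the study's exact weighting scheme may differ): the parameters θ of the empirical potential are chosen to minimise the mismatch with the AIMD reference forces over sampled configurations n and atoms i,

$$\theta^{*} = \arg\min_{\theta}\ \sum_{n}\sum_{i}\ \bigl\| \mathbf{F}_{i}^{\mathrm{model}}(\mathbf{R}_{n};\theta) - \mathbf{F}_{i}^{\mathrm{AIMD}}(\mathbf{R}_{n}) \bigr\|^{2}.$$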
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Liang, Cui
2007-01-01
The industry standard for pricing an interest-rate caplet is Black's formula. A distinct price for the same caplet can be derived using a quantum field theory model of the forward interest rates. An empirical study is carried out to compare the two caplet pricing formulae. Historical volatility and correlation of forward interest rates are used to generate the field theory caplet price; another approach is to fit a parametric formula for the effective volatility using market caplet prices. The study shows that the field theory model prices caplets and caps fairly accurately. Black's formula for a caplet is compared with the field theory pricing formula, and it is seen that the field theory formula for the caplet price has many advantages over Black's formula.
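For context, the Black (1976) caplet price referenced above has the standard closed form (F the forward rate for the period [T, T+δ], K the cap rate, σ the Black volatility, P(0, T+δ) the discount bond, N the standard normal CDF):

$$\text{Caplet} = \delta\, P(0, T+\delta)\,\bigl[F\,N(d_{1}) - K\,N(d_{2})\bigr], \qquad d_{1,2} = \frac{\ln(F/K) \pm \sigma^{2}T/2}{\sigma\sqrt{T}}.$$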
Empirical conversion of the vertical profile of reflectivity from Ku-band to S-band frequency
NASA Astrophysics Data System (ADS)
Cao, Qing; Hong, Yang; Qi, Youcun; Wen, Yixin; Zhang, Jian; Gourley, Jonathan J.; Liao, Liang
2013-02-01
This paper presents an empirical method for converting reflectivity from Ku-band (13.8 GHz) to S-band (2.8 GHz) for several hydrometeor species, which facilitates the incorporation of Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) measurements into quantitative precipitation estimation (QPE) products from the U.S. Next-Generation Radar (NEXRAD). The development of empirical dual-frequency relations is based on theoretical simulations, which have assumed appropriate scattering and microphysical models for liquid and solid hydrometeors (raindrops, snow, and ice/hail). Particle phase, shape, orientation, and density (especially for snow particles) have been considered in applying the T-matrix method to compute the scattering amplitudes. Gamma particle size distribution (PSD) is utilized to model the microphysical properties in the ice region, melting layer, and raining region of precipitating clouds. The variability of PSD parameters is considered to study the characteristics of dual-frequency reflectivity, especially the variations in radar dual-frequency ratio (DFR). The empirical relations between DFR and Ku-band reflectivity have been derived for particles in different regions within the vertical structure of precipitating clouds. The reflectivity conversion using the proposed empirical relations has been tested using real data collected by TRMM-PR and a prototype polarimetric WSR-88D (Weather Surveillance Radar 88 Doppler) radar, KOUN. The processing and analysis of collocated data demonstrate the validity of the proposed empirical relations and substantiate their practical significance for reflectivity conversion, which is essential to the TRMM-based vertical profile of reflectivity correction approach in improving NEXRAD-based QPE.
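A sketch of how such an empirical conversion is typically applied (the polynomial form and coefficients below are illustrative placeholders, not the paper's fitted values):

```python
import numpy as np

# Hypothetical DFR(Z_Ku) polynomial for one hydrometeor region (e.g., rain).
# Coefficients are illustrative placeholders, NOT the paper's fitted values.
RAIN_COEFFS = [0.0, 0.02, 0.001]  # a0 + a1*Z + a2*Z^2, in dB

def ku_to_s_band(z_ku_dbz: np.ndarray, coeffs=RAIN_COEFFS) -> np.ndarray:
    """Convert Ku-band reflectivity (dBZ) to S-band using an empirical
    dual-frequency-ratio relation of the form Z_S = Z_Ku + DFR(Z_Ku)."""
    dfr = np.polynomial.polynomial.polyval(z_ku_dbz, coeffs)
    return z_ku_dbz + dfr

print(ku_to_s_band(np.array([20.0, 35.0, 50.0])))
```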
Haynos, Ann F.; Pearson, Carolyn M.; Utzinger, Linsey M.; Wonderlich, Stephen A.; Crosby, Ross D.; Mitchell, James E.; Crow, Scott J.; Peterson, Carol B.
2016-01-01
Objective Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Methods Using variables from the Dimensional Assessment of Personality Pathology-Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). Results There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = .03) and purging (p = .01) frequency at EOT and binge eating frequency at follow-up (p = .045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment-by-subtype interaction for purging at follow-up (p = .04), which indicated superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Discussion Empirically derived personality subtyping appears to be a valid classification system with the potential to guide eating disorder treatment decisions. PMID:27611235
Westen, Drew; Shedler, Jonathan; Bradley, Bekh; DeFife, Jared A.
2013-01-01
Objective The authors describe a system for diagnosing personality pathology that is empirically derived, clinically relevant, and practical for day-to-day use. Method A random national sample of psychiatrists and clinical psychologists (N=1,201) described a randomly selected current patient with any degree of personality dysfunction (from minimal to severe) using the descriptors in the Shedler-Westen Assessment Procedure–II and completed additional research forms. Results The authors applied factor analysis to identify naturally occurring diagnostic groupings within the patient sample. The analysis yielded 10 clinically coherent personality diagnoses organized into three higher-order clusters: internalizing, externalizing, and borderline-dysregulated. The authors selected the most highly rated descriptors to construct a diagnostic prototype for each personality syndrome. In a second, independent sample, research interviewers and patients’ treating clinicians were able to diagnose the personality syndromes with high agreement and minimal comorbidity among diagnoses. Conclusions The empirically derived personality prototypes described here provide a framework for personality diagnosis that is both empirically based and clinically relevant. PMID:22193534
Galactic and solar radiation exposure to aircrew during a solar cycle.
Lewis, B J; Bennett, L G I; Green, A R; McCall, M J; Ellaschuk, B; Butler, A; Pierre, M
2002-01-01
An ongoing investigation using a tissue-equivalent proportional counter (TEPC) has been carried out to measure the ambient dose equivalent rate of the cosmic radiation exposure of aircrew during a solar cycle. A semi-empirical model has been derived from these data to allow for the interpolation of the dose rate for any global position. The model has been extended to altitudes of up to 32 km with further measurements made on board aircraft and several balloon flights. The effects of changing solar modulation during the solar cycle are characterised by correlating the dose rate data to different solar potential models. Through integration of the dose-rate function over a great circle flight path or between given waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAIRE) has been further developed for estimation of the route dose from galactic cosmic radiation exposure. This estimate is provided in units of ambient dose equivalent as well as effective dose, based on E/H*(10) scaling functions determined from transport code calculations with LUIN and FLUKA. This experimentally based treatment has also been compared with the CARI-6 and EPCARD codes, which are derived solely from theoretical transport calculations. Using TEPC measurements taken aboard the International Space Station, ground-based neutron monitoring, GOES satellite data and transport code analysis, an empirical model has been further proposed for estimation of aircrew exposure during solar particle events. This model has been compared to results obtained during recent solar flare events.
EPIC-Simulated and MODIS-Derived Leaf Area Index (LAI) ...
Leaf Area Index (LAI) is an important parameter in assessing vegetation structure for characterizing forest canopies over large areas at broad spatial scales using satellite remote sensing data. However, satellite-derived LAI products can be limited by obstructed atmospheric conditions yielding sub-optimal values, or complete non-returns. The United States Environmental Protection Agency's Exposure Methods and Measurements and Computational Exposure Divisions are investigating the viability of supplemental modelled LAI inputs into satellite-derived data streams to support various regional and local scale air quality models for retrospective and future climate assessments. In this study, one year (2002) of plot-level stand characteristics at four study sites located in Virginia and North Carolina are used to calibrate species-specific plant parameters in a semi-empirical biogeochemical model. The Environmental Policy Integrated Climate (EPIC) model was designed primarily for managed agricultural field crop ecosystems, but also includes managed woody species that span both xeric and mesic sites (e.g., mesquite, pine, oak, etc.). LAI was simulated using EPIC on 4 km² and 12 km² grids coincident with the regional Community Multiscale Air Quality Model (CMAQ) grid. LAI comparisons were made between model-simulated and MODIS-derived LAI. Field/satellite-upscaled LAI was also compared to the corresponding MODIS LAI value. Preliminary results show field/satel
ERIC Educational Resources Information Center
Peterson, Carol B.; Crow, Scott J.; Swanson, Sonja A.; Crosby, Ross D.; Wonderlich, Stephen A.; Mitchell, James E.; Agras, W. Stewart; Halmi, Katherine A.
2011-01-01
Objective: The purpose of this investigation was to derive an empirical classification of eating disorder symptoms in a heterogeneous eating disorder sample using latent class analysis (LCA) and to examine the longitudinal stability of these latent classes (LCs) and the stability of DSM-IV eating disorder (ED) diagnoses. Method: A total of 429…
ERIC Educational Resources Information Center
Beitzel, Brian D.
2013-01-01
The Student Response to Faculty Instruction (SRFI) is an instrument designed to measure the student perspective on courses in higher education. The SRFI was derived from decades of empirical studies of student evaluations of teaching. This article describes the development of the SRFI and its psychometric attributes demonstrated in two pilot study…
Interplanetary density models as inferred from solar Type III bursts
NASA Astrophysics Data System (ADS)
Oppeneiger, Lucas; Boudjada, Mohammed Y.; Lammer, Helmut; Lichtenegger, Herbert
2016-04-01
We report on the density models derived from spectral features of solar Type III bursts. These bursts are generated by beams of electrons travelling outward from the Sun along open magnetic field lines. The electrons generate Langmuir waves at the plasma frequency along their ray paths through the corona and the interplanetary medium. Type III bursts cover a large frequency band, from several MHz down to a few kHz. In this analysis, we consider previous empirical density models proposed to describe the electron density in the interplanetary medium. We show that those models are mainly based on the analysis of Type III bursts generated in the interplanetary medium and observed by satellites (e.g. RAE, HELIOS, VOYAGER, ULYSSES, WIND). Those models are confronted with stereoscopic observations of Type III bursts recorded by the WIND, ULYSSES and CASSINI spacecraft. We discuss the spatial evolution of the electron beam through the interplanetary medium, where the trajectory is an Archimedean spiral. We show that the inferred electron beams and source locations depend on the choice of empirical density model.
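The mapping from observed burst frequency to ambient density rests on the standard electron plasma-frequency relation (emission occurs near f_pe or its harmonic 2f_pe):

$$f_{pe}\,[\mathrm{kHz}] \simeq 8.98\,\sqrt{n_{e}\,[\mathrm{cm^{-3}}]},$$

so the frequency drift of a Type III burst traces the electron-density profile n_e(r) along the beam path.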
Correction of Single Frequency Altimeter Measurements for Ionosphere Delay
NASA Technical Reports Server (NTRS)
Schreiner, William S.; Markin, Robert E.; Born, George H.
1997-01-01
This study is a preliminary analysis of the accuracy of various ionosphere models for correcting single-frequency altimeter height measurements for ionospheric path delay. In particular, research focused on adjusting the empirical and parameterized ionosphere models in the parameterized real-time ionospheric specification model (PRISM) 1.2 using total electron content (TEC) data from the Global Positioning System (GPS). The types of GPS data used to adjust PRISM included GPS line-of-sight (LOS) TEC data mapped to the vertical, and a grid of GPS-derived TEC data in a sun-fixed longitude frame. The adjusted PRISM TEC values, as well as predictions by IRI-90, a climatological model, were compared to TOPEX/Poseidon (T/P) TEC measurements from the dual-frequency altimeter for a number of T/P tracks. When adjusted with GPS LOS data, the PRISM empirical model predicted TEC over 24 one-hour data sets for a given local time to within a global error of 8.60 TECU rms during a midnight-centered ionosphere and 9.74 TECU rms during a noon-centered ionosphere. Using GPS-derived sun-fixed TEC data, the PRISM parameterized model predicted TEC within an error of 8.47 TECU rms centered at midnight and 12.83 TECU rms centered at noon. From these best results, it is clear that the proposed requirement of 3-4 TECU global rms for TOPEX/Poseidon Follow-On will be very difficult to meet, even with a substantial increase in the number of GPS ground stations, with any realizable combination of the aforementioned models or data assimilation schemes.
Thomas, Jennifer J; Eddy, Kamryn T; Ruscio, John; Ng, King Lam; Casale, Kristen E; Becker, Anne E; Lee, Sing
2015-05-01
We examined whether empirically derived eating disorder (ED) categories in Hong Kong Chinese patients (N = 454) would be consistent with recognizable lifetime ED phenotypes derived from latent structure models of European and American samples. We performed latent profile analysis (LPA) using indicator variables from data collected during routine assessment, and then applied taxometric analysis to determine whether latent classes were qualitatively versus quantitatively distinct. Latent profile analysis identified four classes: (i) binge/purge (47%); (ii) non-fat-phobic low-weight (34%); (iii) fat-phobic low-weight (12%); and (iv) overweight disordered eating (6%). Taxometric analysis identified qualitative (categorical) distinctions between the binge/purge and non-fat-phobic low-weight classes, and also between the fat-phobic and non-fat-phobic low-weight classes. Distinctions between the fat-phobic low-weight and binge/purge classes were indeterminate. The empirically derived categories in Hong Kong thus corresponded with recognizable lifetime ED phenotypes. Although the taxometric findings support two distinct classes of low-weight EDs, the LPA findings also support heterogeneity among non-fat-phobic individuals. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.
Measuring Community Resilience to Coastal Hazards along the Northern Gulf of Mexico
Lam, Nina S. N.; Reams, Margaret; Li, Kenan; Li, Chi; Mata, Lillian P.
2016-01-01
The abundant research examining aspects of social-ecological resilience, vulnerability, and hazards and risk assessment has yielded insights into these concepts and suggested the importance of quantifying them. Quantifying resilience is complicated by several factors including the varying definitions of the term applied in the research, difficulties involved in selecting and aggregating indicators of resilience, and the lack of empirical validation for the indices derived. This paper applies a new model, called the resilience inference measurement (RIM) model, to quantify resilience to climate-related hazards for 52 U.S. counties along the northern Gulf of Mexico. The RIM model uses three elements (exposure, damage, and recovery indicators) to denote two relationships (vulnerability and adaptability), and employs both K-means clustering and discriminant analysis to derive the resilience rankings, thus enabling validation and inference. The results yielded a classification accuracy of 94.2% with 28 predictor variables. The approach is theoretically sound and can be applied to derive resilience indices for other study areas at different spatial and temporal scales. PMID:27499707
Measuring Community Resilience to Coastal Hazards along the Northern Gulf of Mexico.
Lam, Nina S N; Reams, Margaret; Li, Kenan; Li, Chi; Mata, Lillian P
2016-02-01
The abundant research examining aspects of social-ecological resilience, vulnerability, and hazards and risk assessment has yielded insights into these concepts and suggested the importance of quantifying them. Quantifying resilience is complicated by several factors including the varying definitions of the term applied in the research, difficulties involved in selecting and aggregating indicators of resilience, and the lack of empirical validation for the indices derived. This paper applies a new model, called the resilience inference measurement (RIM) model, to quantify resilience to climate-related hazards for 52 U.S. counties along the northern Gulf of Mexico. The RIM model uses three elements (exposure, damage, and recovery indicators) to denote two relationships (vulnerability and adaptability), and employs both K-means clustering and discriminant analysis to derive the resilience rankings, thus enabling validation and inference. The results yielded a classification accuracy of 94.2% with 28 predictor variables. The approach is theoretically sound and can be applied to derive resilience indices for other study areas at different spatial and temporal scales.
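A minimal sketch of the RIM-style two-step procedure (K-means to derive candidate resilience groupings, then discriminant analysis to validate them); the data, indicator count, and group count below are synthetic placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Placeholder matrix: 52 counties x 3 indicators (exposure, damage, recovery).
X = rng.normal(size=(52, 3))

# Step 1: derive candidate resilience groups with K-means.
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Step 2: validate the grouping with discriminant analysis; the
# resubstitution accuracy plays the role of the paper's 94.2% figure.
lda = LinearDiscriminantAnalysis().fit(X, groups)
print("classification accuracy:", lda.score(X, groups))
```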
1993-12-21
Latent (lower solid), net infrared (dashed), and net heat loss (upper solid; the other three summed) are plotted, with positive values indicating ... gained from solar insolation, Qs, and the heat lost from the surface due to latent, Qe, sensible, Qh, and net infrared radiation, Qb, is positive ... five empirically derived dimensionless constants in the model. With the introduction of two new unknowns, <E> and <ww2>, the prediction of the upper ...
Structure of a randomly grown 2-d network.
Ajazi, Fioralba; Napolitano, George M; Turova, Tatyana; Zaurbek, Izbassar
2015-10-01
We introduce a growing random network on a plane as a model of a growing neuronal network. The properties of the structure of the induced graph are derived. We compare our results with available data. In particular, it is shown that, depending on the parameters of the model, the system passes through different structural phases over time. We conclude with a possible explanation of some empirical data on the connections between neurons. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1983-01-01
Computer-modelled atmospheric transmittance and path radiance values were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates, as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code. The aircraft data were calibrated and utilized to generate analogous measurements. The results of the analysis indicate a tendency for the LOWTRAN model to underestimate atmospheric path radiance and overestimate atmospheric transmittance.
Effects of heat conduction on artificial viscosity methods for shock capturing
Cook, Andrew W.
2013-12-01
Here we investigate the efficacy of artificial thermal conductivity for shock capturing. The conductivity model is derived from the artificial bulk and shear viscosities, such that stagnation enthalpy remains constant across shocks. By thus fixing the Prandtl number, more physical shock profiles are obtained, albeit on a larger scale. The conductivity model does not contain any empirical constants. It increases the net dissipation of a computational algorithm but is found to better preserve symmetry and produce more robust solutions for strong-shock problems.
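A minimal sketch of the construction (notation ours; the paper's precise definitions may differ): given artificial shear and bulk viscosities μ* and β*, the artificial conductivity is set by fixing an effective Prandtl number so that the stagnation enthalpy is preserved across the shock,

$$\kappa^{*} = \frac{c_{p}\,(\mu^{*} + \beta^{*})}{Pr^{*}}, \qquad H = h + \tfrac{1}{2}u^{2} \approx \text{const across the shock}.$$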
Validation of Slosh Modeling Approach Using STAR-CCM+
NASA Technical Reports Server (NTRS)
Benson, David J.; Ng, Wanyi
2018-01-01
Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft, or there may be an unexpected loss of science observation time due to higher slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right-cylinder tank and a right cylinder with a single ring baffle.
Two-population dynamics in a growing network model
NASA Astrophysics Data System (ADS)
Ivanova, Kristinka; Iordanov, Ivan
2012-02-01
We introduce a growing network evolution model with nodal attributes. The model describes the interactions between potentially violent (V) and non-violent (N) agents who have different affinities for establishing connections within their own population versus between the populations. The model is able to generate all stable triads observed in real social systems. In the framework of rate-equation theory, we employ the mean-field approximation to derive analytical expressions for the degree distribution and the local clustering coefficient for each type of node. The analytical derivations agree well with numerical simulation results. The assortativity of the potentially violent network qualitatively resembles the connectivity pattern recently reported in terrorist networks. The assortativity of the network driven by aggression shows clearly different behavior from the assortativity of networks with connections of a non-aggressive nature, in agreement with recent empirical results from an online social system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersson, Anders D.; Tonks, Michael R.; Casillas, Luis
2014-10-31
In light water reactor fuel, gaseous fission products segregate to grain boundaries, resulting in the nucleation and growth of large intergranular fission gas bubbles. Based on the mechanisms established from density functional theory (DFT) and empirical potential calculations [1], continuum models for diffusion of xenon (Xe), uranium (U) vacancies, and U interstitials in UO2 have been derived for both intrinsic conditions and under irradiation. Segregation of Xe to grain boundaries is described by combining the bulk diffusion model with a model for the interaction between Xe atoms and three different grain boundaries in UO2 (Σ5 tilt, Σ5 twist, and a high-angle random boundary), as derived from atomistic calculations. All models are implemented in the MARMOT phase field code, which is used to calculate effective Xe and U diffusivities as well as redistribution for a few simple microstructures.
Absolute, SI-traceable lunar irradiance tie-points for the USGS Lunar Model
NASA Astrophysics Data System (ADS)
Brown, Steven W.; Eplee, Robert E.; Xiong, Xiaoxiong J.
2017-10-01
The United States Geological Survey (USGS) has developed an empirical model, known as the Robotic Lunar Observatory (ROLO) Model, that predicts the reflectance of the Moon for any Sun-sensor-Moon configuration over the spectral range from 350 nm to 2500 nm. The lunar irradiance can be predicted from the modeled lunar reflectance using a spectrum of the incident solar irradiance. While extremely successful as a relative exo-atmospheric calibration target, the ROLO Model is not SI-traceable and has estimated uncertainties too large for the Moon to be used as an absolute celestial calibration target. In this work, two recent absolute, low-uncertainty, SI-traceable top-of-the-atmosphere (TOA) lunar irradiances, measured over the spectral range from 380 nm to 1040 nm at lunar phase angles of 6.6° and 16.9°, are used as tie-points to the output of the ROLO Model. Combined with empirically derived phase and libration corrections to the output of the ROLO Model and uncertainty estimates in those corrections, the measurements enable development of a corrected TOA lunar irradiance model and its uncertainty budget for phase angles between ±80° and libration angles from 7° to 51°. The uncertainties in the empirically corrected output from the ROLO Model are approximately 1% from 440 nm to 865 nm and increase to almost 3% at 412 nm. The dominant components in the uncertainty budget are the uncertainty in the absolute TOA lunar irradiance and the uncertainty in the fit to the phase correction from the output of the ROLO Model.
Empirical complexities in the genetic foundations of lethal mutagenesis.
Bull, James J; Joyce, Paul; Gladstone, Eric; Molineux, Ian J
2013-10-01
From population genetics theory, elevating the mutation rate of a large population should progressively reduce average fitness. If the fitness decline is large enough, the population will go extinct in a process known as lethal mutagenesis. Lethal mutagenesis has been endorsed in the virology literature as a promising approach to viral treatment, and several in vitro studies have forced viral extinction with high doses of mutagenic drugs. Yet only one empirical study has tested the genetic models underlying lethal mutagenesis, and the theory failed on even a qualitative level. Here we provide a new level of analysis of lethal mutagenesis by developing and evaluating models specifically tailored to empirical systems that may be used to test the theory. We first quantify a bias in the estimation of a critical parameter and consider whether that bias underlies the previously observed lack of concordance between theory and experiment. We then consider a seemingly ideal protocol that avoids this bias (mutagenesis of virions) but find that it is hampered by other problems. Finally, we derive results that reveal difficulties in the mere interpretation of mutations assayed from double-stranded genomes. Our analyses expose unanticipated complexities in testing the theory. Nevertheless, the previous failure of the theory to predict experimental outcomes appears to reside in evolutionary mechanisms neglected by the theory (e.g., beneficial mutations) rather than in a mismatch between the empirical setup and model assumptions. This interpretation raises the specter that naive attempts at lethal mutagenesis may augment adaptation rather than retard it.
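For orientation, the baseline deterministic criterion in standard lethal-mutagenesis theory (notation ours) requires the mutation-free reproductive output to fall below replacement:

$$R_{\max}\, e^{-U} < 1,$$

where U is the genomic deleterious mutation rate and R_max is the maximum per-capita reproductive output.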
ERIC Educational Resources Information Center
Oetting, Janna B.; Cleveland, Lesli H.; Cope, Robert F., III
2008-01-01
Purpose: Using a sample of culturally/linguistically diverse children, we present data to illustrate the value of empirically derived combinations of tools and cutoffs for determining eligibility in child language impairment. Method: Data were from 95 4- and 6-year-olds (40 African American, 55 White; 18 with language impairment, 77 without) who…
ERIC Educational Resources Information Center
Bihagen, Erik; Ohls, Marita
2007-01-01
It has been claimed that women experience fewer career opportunities than men do mainly because they are over-represented in "Dead-end Jobs" (DEJs). Using Swedish panel data covering 1.1 million employees with the same employer in 1999 and 2003, measures of DEJ are empirically derived from analyses of wage mobility. The results indicate…
ERIC Educational Resources Information Center
Eddy, Kamryn T.; Le Grange, Daniel; Crosby, Ross D.; Hoste, Renee Rienecke; Doyle, Angela Celio; Smyth, Angela; Herzog, David B.
2010-01-01
Objective: The purpose of this study was to empirically derive eating disorder phenotypes in a clinical sample of children and adolescents using latent profile analysis (LPA), and to compare these latent profile (LP) groups to the DSM-IV-TR eating disorder categories. Method: Eating disorder symptom data collected from 401 youth (aged 7 through 19…
Backward jump continuous-time random walk: An application to market trading
NASA Astrophysics Data System (ADS)
Gubiec, Tomasz; Kutner, Ryszard
2010-10-01
The backward-jump modification of the continuous-time random walk model, i.e., the version of the model driven by negative feedback, is derived here for a spatiotemporal continuum in the context of share price evolution on a stock exchange. Within this framework, we describe the stochastic evolution of a typical share price of moderate liquidity on a high-frequency time scale. The model was validated by the satisfactory agreement of the theoretical velocity autocorrelation function with its empirical counterpart obtained from the continuous quotation. This agreement is mainly a result of the sharp backward correlation found and considered in this article. This correlation is reminiscent of the bid-ask bounce phenomenon, in which a backward price jump has the same or almost the same length as the preceding jump. We suggest that this correlation dominates the dynamics of stock markets with moderate liquidity. Although the assumptions of the model were inspired by high-frequency market data, its potential applications extend beyond the financial market, for instance to the field covered by the Le Chatelier-Braun principle of contrariness.
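An illustrative toy simulation of the backward-jump mechanism described above (a sketch in the spirit of the model, not the authors' exact formulation; all parameters are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy backward-jump CTRW: with probability p_back the next price jump
# (nearly) cancels the previous one, mimicking bid-ask bounce.
n_steps, p_back, jump_scale = 10_000, 0.6, 0.01

waits = rng.exponential(scale=1.0, size=n_steps)   # waiting times
jumps = np.empty(n_steps)
jumps[0] = jump_scale * rng.standard_normal()
for t in range(1, n_steps):
    if rng.random() < p_back:
        jumps[t] = -jumps[t - 1]                   # backward (cancelling) jump
    else:
        jumps[t] = jump_scale * rng.standard_normal()

price = 100.0 + np.cumsum(jumps)
# The sharp negative lag-1 autocorrelation of jumps is the model's signature.
print("lag-1 jump correlation:", np.corrcoef(jumps[:-1], jumps[1:])[0, 1])
print("final price:", price[-1])
```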
Backward jump continuous-time random walk: an application to market trading.
Gubiec, Tomasz; Kutner, Ryszard
2010-10-01
The backward-jump modification of the continuous-time random walk model, i.e., the version of the model driven by negative feedback, is derived here for a spatiotemporal continuum in the context of share price evolution on a stock exchange. Within this framework, we describe the stochastic evolution of a typical share price of moderate liquidity on a high-frequency time scale. The model was validated by the satisfactory agreement of the theoretical velocity autocorrelation function with its empirical counterpart obtained from the continuous quotation. This agreement is mainly a result of the sharp backward correlation found and considered in this article. This correlation is reminiscent of the bid-ask bounce phenomenon, in which a backward price jump has the same or almost the same length as the preceding jump. We suggest that this correlation dominates the dynamics of stock markets with moderate liquidity. Although the assumptions of the model were inspired by high-frequency market data, its potential applications extend beyond the financial market, for instance to the field covered by the Le Chatelier-Braun principle of contrariness.
NASA Astrophysics Data System (ADS)
Lechtenberg, Travis; McLaughlin, Craig A.; Locke, Travis; Krishna, Dhaval Mysore
2013-01-01
This paper examines atmospheric density estimated using precision orbit ephemerides (POE) from the CHAMP and GRACE satellites during short periods of greater atmospheric density variability. The results of calibrating CHAMP densities derived using POEs against those derived using accelerometers are examined for three different types of density perturbation (traveling atmospheric disturbances (TADs), geomagnetic cusp phenomena, and midnight density maxima) in order to determine the temporal resolution of POE solutions. In addition, the densities are compared to High-Accuracy Satellite Drag Model (HASDM) densities to compare the temporal resolution of both types of corrections. The resolution of these models of thermospheric density was found to be inadequate to sufficiently characterize the short-term density variations examined here. Also examined in this paper is the effect of differing density estimation schemes, assessed by propagating an initial orbit state forward in time and examining the induced errors. The propagated POE-derived densities incurred errors of a smaller magnitude than the empirical models, and errors on the same scale as or better than those incurred using the HASDM model.
On the relationship between tumour growth rate and survival in non-small cell lung cancer.
Mistry, Hitesh B
2017-01-01
A recurrent question within oncology drug development is predicting phase III outcome for a new treatment using early clinical data. One approach to this problem has been to derive metrics from mathematical models that describe tumour size dynamics, termed re-growth rate and time to tumour re-growth. These have been shown to be strong predictors of overall survival in numerous studies, but there is debate about how these metrics are derived and whether they are more predictive than empirical end-points. This work explores the issues raised in using model-derived metrics as predictors in survival analyses. Re-growth rate and time to tumour re-growth were calculated for three large clinical studies by forward and reverse alignment. The latter involves re-aligning patients to their time of progression. Hence, it accounts for the time taken to estimate re-growth rate and time to tumour re-growth, but also assesses whether these predictors correlate with survival from the time of progression. I found that neither re-growth rate nor time to tumour re-growth correlated with survival using reverse alignment. This suggests that the dynamics of tumours up until disease progression have no relationship to survival post progression. For prediction of a phase III trial, I found the metrics performed no better than empirical end-points. These results highlight that care must be taken when relating the dynamics of tumour imaging to survival, and that benchmarking new approaches against existing ones is essential.
Conventional intensive logging promotes loss of organic carbon from the mineral soil.
Dean, Christopher; Kirkpatrick, James B; Friedland, Andrew J
2017-01-01
There are few data, but diametrically opposed opinions, about the impacts of forest logging on soil organic carbon (SOC). Reviews and research articles either conclude that there is no effect or show contradictory effects. Given that SOC is a substantial store of potential greenhouse gases and that forest logging and harvesting are routine, resolution is important. We review forest logging SOC studies and provide an overarching conceptual explanation for their findings. The literature can be separated into short-term empirical studies, longer-term empirical studies, and long-term modelling. All modelling that includes the major aboveground and belowground biomass pools shows a long-term (i.e. ≥300 years) decrease in SOC when a primary forest is logged and then subjected to harvesting cycles. The empirical longer-term studies indicate likewise. With successive harvests the net emission accumulates but is only statistically perceptible after centuries. Short-term SOC flux varies around zero. The long-term drop in SOC in the mineral soil is driven by the drop in biomass from the primary-forest level but takes time to adjust to the new temporal-average biomass. We show agreement between secondary-forest SOC stocks derived purely from biomass information and stocks derived from complex forest harvest modelling. Thus, conclusions that conventional harvests do not deplete SOC in the mineral soil have been a function of their short time frames. Forest managers, climate change modellers and environmental policymakers need to assume a long-term net transfer of SOC from the mineral soil to the atmosphere when primary forests are logged and then undergo harvest cycles. However, from a greenhouse accounting perspective, forest SOC is not the entire story. Forest wood products that ultimately reach landfill, some portion of which produces soil-like material there rather than in the forest, could possibly help attenuate the forest SOC emission by adding to a carbon pool in landfill. © 2016 John Wiley & Sons Ltd.
Eddy, Kamryn T.; le Grange, Daniel; Crosby, Ross D.; Hoste, Renee Rienecke; Doyle, Angela Celio; Smyth, Angela; Herzog, David B.
2009-01-01
Objective The purpose of this study was to empirically derive eating disorder phenotypes in a clinical sample of children and adolescents using latent profile analysis (LPA) and compare these latent profile (LP) groups to the DSM-IV-TR eating disorder categories. Method Eating disorder symptom data collected from 401 youth (ages 7–19; mean 15.14 ± 2.35 y) seeking eating disorder treatment were included in LPA; general linear models were used to compare LP groups to DSM-IV-TR eating disorder categories on pre-treatment and outcome indices. Results Three LP groups were identified: LP1 (n=144), characterized by binge eating and purging ("Binge/purge"); LP2 (n=126), characterized by excessive exercise and extreme eating disorder cognitions ("Exercise-extreme cognitions"); and LP3 (n=131), characterized by minimal eating disorder behaviors and cognitions ("Minimal behaviors/cognitions"). The identified LPs imperfectly resembled DSM-IV-TR eating disorders. LP1 resembled bulimia nervosa; LP2 and LP3 broadly resembled anorexia nervosa with a relaxed weight criterion, differentiated by excessive exercise and severity of eating disorder cognitions. LP groups were more differentiated than the DSM-IV-TR categories across pre-treatment eating disorder and general psychopathology indices, as well as weight change at follow-up. Neither LP nor DSM-IV-TR categories predicted change in binge/purge behaviors. Validation analyses suggest these empirically derived groups improve upon the current DSM-IV-TR categories. Conclusions In children and adolescents, revisions for DSM-V should consider recognition of patients with minimal cognitive eating disorder symptoms. PMID:20410717
NASA Astrophysics Data System (ADS)
Ramirez, N.; Afshari, Afshin; Norford, L.
2018-07-01
A steady-state Reynolds-averaged Navier-Stokes computational fluid dynamics (CFD) investigation of boundary-layer flow over a major portion of downtown Abu Dhabi is conducted. The results are used to derive the shear stress and characterize the logarithmic region for eight sub-domains, which overlap and are overlaid in the streamwise direction. The sub-domains are characterized by a high frontal area index initially, which decreases significantly beyond the fifth sub-domain; the plan area index is relatively stable throughout the domain. For each sub-domain, the estimated local roughness length and displacement height derived from the CFD results are compared to prevalent empirical formulations. We further validate and tune a mixing-length model proposed by Coceal and Belcher (Q J R Meteorol Soc 130:1349-1372, 2004). Finally, the in-canopy wind-speed attenuation is analysed as a function of fetch. It is shown that, while there is some room for improvement in Macdonald's empirical formulations (Boundary-Layer Meteorol 97:25-45, 2000), Coceal and Belcher's mixing model in combination with the resolution method of Di Sabatino et al. (Boundary-Layer Meteorol 127:131-151, 2008) can provide a robust estimation of the average wind speed in the logarithmic region. Within the roughness sublayer, a properly parametrized Cionco exponential model is shown to be quite accurate.
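The two surface parameters being estimated enter through the standard logarithmic wind profile (κ ≈ 0.4 is the von Kármán constant):

$$U(z) = \frac{u_{*}}{\kappa}\,\ln\!\left(\frac{z - d}{z_{0}}\right),$$

where u_* is the friction velocity (obtained here from the CFD-derived shear stress), d the displacement height, and z_0 the roughness length.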
Quantification of Neutral Wind Variability in the Upper Thermosphere
NASA Technical Reports Server (NTRS)
Richards, Philip G.
2000-01-01
The overall objective of this grant was to: 1) quantify thermospheric neutral wind behavior in the ionosphere, to be achieved by developing an improved empirical wind model; 2) validate the procedure for obtaining winds from the height of the peak density; and 3) improve the model capabilities and make updated versions of the model available to other scientists. The approach is to use neutral winds derived from ionosonde measurements of the height of the peak electron density (hmF2). One of the proposed first-year tasks was to perform validation studies on the method. Substantial progress has been made with regard to both the empirical model and the validation study. Funding from this grant has also enabled a number of fruitful collaborations with other researchers, one of the stated aims of the proposal. Graduate student Mayra Martinez has developed the mathematical formulation for the empirical wind model as part of her dissertation. As proposed, the authors continued validation studies of the technique for determining winds from hmF2, and submitted a paper to the Journal of Geophysical Research in December 1996 entitled "Thermospheric neutral winds at southern mid-latitudes: comparison of optical and ionosonde hmF2 methods." A second paper, entitled "Ionospheric behavior at a southern mid-latitude in March 1995," came out of the March 1995 data set and was published in the Journal of Geophysical Research. A new algorithm was developed, and the ionosphere has also been modeled.
Development and evaluation of an empirical diurnal sea surface temperature model
NASA Astrophysics Data System (ADS)
Weihs, R. R.; Bourassa, M. A.
2013-12-01
An innovative method is developed to determine the diurnal heating amplitude of sea surface temperatures (SSTs) using high-quality satellite SST measurements and NWP atmospheric meteorological data. The diurnal cycle results from heating that develops at the surface of the ocean under low mechanical or shear-produced turbulence and large solar radiation absorption. During these typically calm weather conditions, the absorption of solar radiation heats the upper few meters of the ocean, which become buoyantly stable; this heating causes a temperature differential between the surface and the mixed (or bulk) layer on the order of a few degrees. Capturing the diurnal cycle has been shown to be important for a variety of applications, including surface heat flux estimates, which are underestimated when diurnal warming is neglected, and satellite and buoy calibrations, which can be complicated by the heating differential. An empirical algorithm using a pre-dawn sea surface temperature, peak solar radiation, and accumulated wind stress is used to estimate the cycle. The empirical algorithm is derived from a multistep process in which SSTs from MSG's experimental hourly SEVIRI SST data set are combined with hourly wind stress fields derived from a bulk flux algorithm. Inputs for the flux model are taken from NASA's MERRA reanalysis product. NWP inputs are necessary because the inputs need to incorporate diurnal and air-sea interactive processes, which are vital to ocean surface dynamics, with a high enough temporal resolution. The MERRA winds are adjusted with CCMP winds to obtain more realistic spatial and variance characteristics, and the other atmospheric inputs (air temperature, specific humidity) are further corrected on the basis of in situ comparisons. The SSTs are fitted to a Gaussian curve (using one or two peaks), yielding a set of fit coefficients. The coefficient data are combined with accumulated wind stress and peak solar radiation to create an empirical relationship that approximates physical processes such as turbulence and the heating memory (capacity) of the ocean. Weaknesses and strengths of the model, including potential spatial biases, will be discussed.
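A minimal sketch of an empirical model of this shape (a Gaussian warming curve whose amplitude grows with peak solar radiation and is damped by accumulated wind stress; the functional form and coefficients are illustrative placeholders, not the study's fitted values):

```python
import numpy as np

# Illustrative stand-in for the empirical diurnal-warming model described
# above. Coefficients a, b, c, and width are placeholders, NOT fitted values.
def diurnal_warming(hours, q_peak, tau_accum, a=2e-3, b=1.5, c=12.0, width=3.0):
    """Diurnal SST anomaly (K) versus local solar time (hours)."""
    amplitude = a * q_peak / (1.0 + b * tau_accum)   # warming amplitude, K
    return amplitude * np.exp(-((hours - c) ** 2) / (2.0 * width ** 2))

t = np.arange(0, 24.0, 1.0)
print(diurnal_warming(t, q_peak=900.0, tau_accum=0.05).round(3))
```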
On the use of integrating FLUXNET eddy covariance and remote sensing data for model evaluation
NASA Astrophysics Data System (ADS)
Reichstein, Markus; Jung, Martin; Beer, Christian; Carvalhais, Nuno; Tomelleri, Enrico; Lasslop, Gitta; Baldocchi, Dennis; Papale, Dario
2010-05-01
The current FLUXNET database (www.fluxdata.org) of CO2, water and energy exchange between the terrestrial biosphere and the atmosphere contains almost 1000 site-years of data from more than 250 sites, encompassing all major biomes of the world and processed in a standardized way (1-3). In this presentation we show that the information in the data is sufficient to derive generalized empirical relationships between vegetation/respective remote sensing information, climate and the biosphere-atmosphere exchanges across global biomes. These empirical patterns are used to generate global grids of the respective fluxes and derived properties (e.g. radiation and water-use efficiencies or climate sensitivities in general, Bowen ratio, AET/PET ratio). For example, we revisit global 'text-book' numbers such as global Gross Primary Productivity (GPP), estimated since the 70's as ca. 120 PgC (4), or global evapotranspiration (ET), estimated at 65 × 10³ km³ yr⁻¹ (5), for the first time with a more solid and direct empirical basis. Evaluation against independent data at regional to global scale (e.g. atmospheric CO2 inversions, runoff data) lends support to the validity of our almost purely empirical up-scaling approaches. Moreover, climate factors such as radiation, temperature and water balance are identified as driving factors for variations and trends of carbon and water fluxes, with distinctly different sensitivities between different vegetation types. Hence, these global fields of biosphere-atmosphere exchange and the inferred relations between climate, vegetation type and fluxes should be used for evaluation or benchmarking of climate models or their land-surface components, while overcoming scale issues with classical point-to-grid-cell comparisons. 1. M. Reichstein et al., Global Change Biology 11, 1424 (2005). 2. D. Baldocchi, Australian Journal of Botany 56, 1 (2008). 3. D. Papale et al., Biogeosciences 3, 571 (2006). 4. D. E. Alexander, R. W. Fairbridge, Encyclopedia of Environmental Science (Springer, Heidelberg, 1999), pp. 741. 5. T. Oki, S. Kanae, Science 313, 1068 (2006).
Estimation of Boreal Forest Biomass Using Spaceborne SAR Systems
NASA Technical Reports Server (NTRS)
Saatchi, Sassan; Moghaddam, Mahta
1995-01-01
In this paper, we report on the use of a semi-empirical algorithm derived from a two-layer radar backscatter model for forest canopies. The model stratifies the forest canopy into crown and stem layers, and separates the structural and biometric attributes of the canopy. The structural parameters are estimated by training the model with polarimetric SAR (synthetic aperture radar) data acquired over homogeneous stands with known above-ground biomass. Given the structural parameters, the semi-empirical algorithm has four remaining parameters (crown biomass, stem biomass, surface soil moisture, and surface rms height) that can be estimated from at least four independent SAR measurements. The algorithm has been used to generate biomass maps over entire images acquired by the JPL AIRSAR and SIR-C SAR systems. The semi-empirical algorithms are then modified for use with single-frequency radar systems such as ERS-1, JERS-1, and Radarsat. The accuracy of biomass estimation from single-channel radars is compared with the case when the channels are used together in synergism or in a polarimetric system.
Revisiting competition in a classic model system using formal links between theory and data.
Hart, Simon P; Burgin, Jacqueline R; Marshall, Dustin J
2012-09-01
Formal links between theory and data are a critical goal for ecology. However, while our current understanding of competition provides the foundation for solving many derived ecological problems, this understanding is fractured because competition theory and data are rarely unified. Conclusions from seminal studies in space-limited benthic marine systems, in particular, have been very influential for our general understanding of competition, but rely on traditional empirical methods with limited inferential power and compatibility with theory. Here we explicitly link mathematical theory with experimental field data to provide a more sophisticated understanding of competition in this classic model system. In contrast to predictions from conceptual models, our estimates of competition coefficients show that a dominant space competitor can be equally affected by interspecific competition with a poor competitor (traditionally defined) as it is by intraspecific competition. More generally, the often-invoked competitive hierarchies and intransitivities in this system might be usefully revisited using more sophisticated empirical and analytical approaches.
Modeling, simulation, and estimation of optical turbulence
NASA Astrophysics Data System (ADS)
Formwalt, Byron Paul
This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn² ≈ 6.01 × 10⁻⁹ m⁻²ᐟ³, l₀ ≈ 17.9 mm, and L₀ ≈ 15.5 m.
Bridging the Knowledge Gaps between Richards' Equation and Budyko Equation
NASA Astrophysics Data System (ADS)
Wang, D.
2017-12-01
The empirical Budyko equation represents the partitioning of mean annual precipitation into evaporation and runoff. Richards' equation, based on Darcy's law, represents the movement of water in unsaturated soils. The linkage between Richards' equation and the Budyko equation is presented by invoking the empirical Soil Conservation Service curve number (SCS-CN) model for computing surface runoff at the event scale. The basis of the SCS-CN method is the proportionality relationship, i.e., the ratio of continuing abstraction to its potential is equal to the ratio of surface runoff to its potential value. The proportionality relationship can be derived from Richards' equation for infiltration-excess and saturation-excess runoff models at the catchment scale. Meanwhile, the generalized proportionality relationship is demonstrated to be the common basis of the SCS-CN method, the monthly "abcd" model, and the Budyko equation. Therefore, the linkage between Darcy's law and the emergent pattern of mean annual water balance at the catchment scale is presented through the proportionality relationship.
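The proportionality relationship and the runoff equation it implies can be written explicitly (standard SCS-CN notation: P precipitation, Ia initial abstraction, F continuing abstraction, S its potential maximum, Q surface runoff; the second equation follows from the water balance F = P − Ia − Q):

$$\frac{F}{S} = \frac{Q}{P - I_{a}} \quad\Longrightarrow\quad Q = \frac{(P - I_{a})^{2}}{P - I_{a} + S}.$$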
WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING
Saegusa, Takumi; Wellner, Jon A.
2013-01-01
We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools is developed, including a Glivenko-Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
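A minimal sketch of the central object (standard notation; stratification details suppressed): the inverse probability weighted empirical measure replaces each phase-two observation by its weighted contribution,

$$\mathbb{P}_{N}^{\pi} f = \frac{1}{N}\sum_{i=1}^{N} \frac{\xi_{i}}{\pi_{i}}\, f(X_{i}),$$

where ξ_i indicates selection into the second phase and π_i is the corresponding sampling probability; the weighted Donsker theorem concerns the process √N(ℙ_N^π − P).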
Semi-empirical fragmentation model of meteoroid motion and radiation during atmospheric penetration
NASA Astrophysics Data System (ADS)
Revelle, D. O.; Ceplecha, Z.
2002-11-01
A semi-empirical fragmentation model (FM) of meteoroid motion, ablation, and radiation, including two types of fragmentation, is outlined. The FM was applied to observational data (height as a function of time and the light curve) of the Lost City, Innisfree, and Benešov bolides. For the Lost City bolide we were able to fit the FM to the observed height as a function of time to within ±13 m and to the observed light curve to within ±0.17 magnitude. The corresponding numbers for Innisfree are ±25 m and ±0.14 magnitude, and for Benešov ±46 m and ±0.19 magnitude. We also define apparent and intrinsic values of σ, K, and τ. Using older results and our fit of the FM to the Lost City bolide, we derived corrections to intrinsic luminous efficiencies expressed as functions of velocity, mass, and normalized air density.
A Grounded Theory of Sexual Minority Women and Transgender Individuals' Social Justice Activism.
Hagen, Whitney B; Hoover, Stephanie M; Morrow, Susan L
2018-01-01
Psychosocial benefits of activism include increased empowerment, social connectedness, and resilience. Yet sexual minority women (SMW) and transgender individuals with multiple oppressed statuses and identities are especially prone to oppression-based experiences, even within minority activist communities. This study sought to develop an empirical model to explain the diverse meanings of social justice activism situated in SMW and transgender individuals' social identities, values, and experiences of oppression and privilege. Using a grounded theory design, 20 SMW and transgender individuals participated in initial, follow-up, and feedback interviews. The most frequent demographic identities were queer or bisexual, White, middle-class women with advanced degrees. The results indicated that social justice activism was intensely relational, replete with multiple benefits, yet rife with experiences of oppression from within and outside of activist communities. The empirically derived model shows the complexity of SMW and transgender individuals' experiences, meanings, and benefits of social justice activism.
New Physical Algorithms for Downscaling SMAP Soil Moisture
NASA Astrophysics Data System (ADS)
Sadeghi, M.; Ghafari, E.; Babaeian, E.; Davary, K.; Farid, A.; Jones, S. B.; Tuller, M.
2017-12-01
The NASA Soil Moisture Active Passive (SMAP) mission provides new means for estimating surface soil moisture at the global scale. However, for many hydrological and agricultural applications the spatial resolution of SMAP is too low. To address this scale issue, we fused SMAP data with MODIS observations to generate soil moisture maps at 1-km spatial resolution. In the course of this study we improved several existing empirical algorithms and introduced a new physical approach for downscaling SMAP data. The universal triangle/trapezoid model was applied to relate soil moisture to optical/thermal observations such as NDVI, land surface temperature, and surface reflectance. These algorithms were evaluated against in situ data measured at 5-cm depth. Our results demonstrate that downscaling SMAP soil moisture based on physical indicators of soil moisture derived from the MODIS satellite leads to higher accuracy than that achievable with empirical downscaling algorithms. Keywords: soil moisture, microwave data, downscaling, MODIS, triangle/trapezoid model.
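A minimal sketch of an empirical triangle-style downscaling step (a simple polynomial regression of soil moisture on optical/thermal predictors, fitted at the coarse scale and applied at 1 km; the form, predictors, and data here are illustrative, not the study's exact algorithm):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)

# Coarse-scale training data: SMAP soil moisture vs. MODIS NDVI and LST
# aggregated to the SMAP grid (synthetic placeholders here).
ndvi_lst_coarse = rng.uniform([0.1, 290.0], [0.8, 320.0], size=(200, 2))
sm_smap = rng.uniform(0.05, 0.35, size=200)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(ndvi_lst_coarse, sm_smap)

# Apply the fitted relation at 1-km MODIS pixels to get downscaled moisture.
ndvi_lst_fine = rng.uniform([0.1, 290.0], [0.8, 320.0], size=(5, 2))
print(model.predict(ndvi_lst_fine).round(3))
```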
Bayesian methods to estimate urban growth potential
Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.
2017-01-01
Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.
Calculation of Host-Guest Binding Affinities Using a Quantum-Mechanical Energy Model.
Muddana, Hari S; Gilson, Michael K
2012-06-12
The prediction of protein-ligand binding affinities is of central interest in computer-aided drug discovery, but it is still difficult to achieve a high degree of accuracy. Recent studies suggesting that available force fields may be a key source of error motivate the present study, which reports the first mining minima (M2) binding affinity calculations based on a quantum mechanical energy model, rather than an empirical force field. We apply a semi-empirical quantum-mechanical energy function, PM6-DH+, coupled with the COSMO solvation model, to 29 host-guest systems with a wide range of measured binding affinities. After correction for a systematic error, which appears to derive from the treatment of polar solvation, the computed absolute binding affinities agree well with experimental measurements, with a mean error of 1.6 kcal/mol and a correlation coefficient of 0.91. These calculations also delineate the contributions of various energy components, including solute energy, configurational entropy, and solvation free energy, to the binding free energies of these host-guest complexes. Comparison with our previous calculations, which used empirical force fields, points to significant differences in both the energetic and entropic components of the binding free energy. The present study demonstrates the successful combination of a quantum mechanical Hamiltonian with the M2 affinity method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrinec, S.M.; Russell, C.T.
1995-06-01
The shape of the dayside magnetopause has been studied from both a theoretical and an empirical perspective for several decades. Early theoretical studies of the magnetopause shape assumed an inviscid interaction and normal pressure balance along the entire boundary, with the interior magnetic field and magnetopause currents being solved self-consistently and iteratively, using the Biot-Savart Law. The derived shapes are complicated, due to asymmetries caused by the nature of the dipole field and the direction of flow of the solar wind. These models contain a weak field region or cusp through which the solar wind has direct access to the ionosphere. More recent MHD model results have indicated that the closed magnetic field lines of the dayside magnetosphere can be dragged tailward of the terminator plane, so that there is no direct access of the magnetosheath to the ionosphere. Most empirical studies have assumed that the magnetopause can be approximated by a simple conic section with a specified number of coefficients, which are determined by least squares fits to spacecraft crossing positions. Thus most empirical models resemble more the MHD models than the more complex shape of the Biot-Savart models. In this work, the authors examine empirically the effect of the cusp regions on the shape of the dayside magnetopause, and they test the accuracy of these models. They find that during periods of northward IMF, crossings of the magnetopause that are close to one of the cusp regions are observed at distances closer to Earth than crossings in the equatorial plane. This result is consistent with the results of the inviscid Biot-Savart models and suggests that the magnetopause is less viscous than is assumed in many MHD models. 28 refs., 4 figs., 1 tab.
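Empirical magnetopause models of the kind tested here are typically built by least-squares fitting a conic section to observed crossing positions. A minimal sketch with synthetic crossings follows; the parameterization and values are illustrative assumptions, not the authors' fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def conic(theta, r0, eps):
    """Conic-section magnetopause r(theta) with subsolar standoff r0 and
    eccentricity-like flaring parameter eps (one common form; empirical
    models differ in the exact parameterization)."""
    return r0 * (1.0 + eps) / (1.0 + eps * np.cos(theta))

# synthetic 'crossings': solar zenith angle [rad] and radial distance [Re]
rng = np.random.default_rng(1)
theta_obs = np.linspace(0.0, 2.0, 40)
r_obs = conic(theta_obs, 10.5, 0.6) + rng.normal(0.0, 0.3, 40)

(r0, eps), _ = curve_fit(conic, theta_obs, r_obs, p0=(10.0, 0.5))
print(f"standoff r0 = {r0:.2f} Re, flaring eps = {eps:.2f}")
# a cusp indentation would show up as systematically negative residuals
# for crossings near the cusps relative to this equatorial fit
```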
NASA Technical Reports Server (NTRS)
Lamers, H. J. G. L. M.; Gathier, R.; Snow, T. P.
1980-01-01
From a study of the UV lines in the spectra of 25 stars from O4 to B1, the empirical relations between the mean density in the wind and the ionization fractions of O VI, N V, Si IV, and the excited C III (2p 3P0) level were derived. Using these empirical relations, a simple relation was derived between the mass-loss rate and the column density of any of these four ions. This relation can be used for a simple determination of the mass-loss rate from O4 to B1 stars.
Structural Patterns in Empirical Research Articles: A Cross-Disciplinary Study
ERIC Educational Resources Information Center
Lin, Ling; Evans, Stephen
2012-01-01
This paper presents an analysis of the major generic structures of empirical research articles (RAs), with a particular focus on disciplinary variation and the relationship between the adjacent sections in the introductory and concluding parts. The findings were derived from a close "manual" analysis of 433 recent empirical RAs from high-impact…
NASA Technical Reports Server (NTRS)
Volponi, Al; Simon, Donald L. (Technical Monitor)
2008-01-01
A key technological concept for producing reliable engine diagnostics and prognostics exploits the benefits of fusing sensor data, information, and/or processing algorithms. This report describes the development of a hybrid engine model for a propulsion gas turbine engine, which is the result of fusing two diverse modeling methodologies: a physics-based model approach and an empirical model approach. The report describes the process and methods involved in deriving and implementing a hybrid model configuration for a commercial turbofan engine. Among the intended uses for such a model are real-time, on-board tracking of engine module performance changes and engine parameter synthesis for fault detection and accommodation.
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. Researchers should therefore compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
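How the component coefficients of variation and correlations drive ratio reliability can be illustrated with a small Monte Carlo experiment. The sketch below assumes each component has a test-retest reliability of 0.9 across two trials; all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def ratio_reliability(cv_x, cv_y, r_xy, rel=0.9, n=20000):
    """Test-retest correlation of the simple ratio X/Y when X and Y each
    have reliability `rel` across two trials and correlate r_xy within a
    trial. All parameter values are illustrative assumptions."""
    mx, my = 100.0, 50.0
    sd = np.diag([cv_x * mx, cv_y * my])
    within = sd @ np.array([[1.0, r_xy], [r_xy, 1.0]]) @ sd
    cov = np.kron(np.array([[1.0, rel], [rel, 1.0]]), within)  # two trials
    x1, y1, x2, y2 = rng.multivariate_normal([mx, my, mx, my], cov, n).T
    return np.corrcoef(x1 / y1, x2 / y2)[0, 1]

for r in (0.0, 0.4, 0.8):
    # shows how the within-trial numerator-denominator correlation shifts
    # ratio reliability even with component reliabilities fixed at 0.9
    print(r, round(ratio_reliability(0.10, 0.10, r), 3))
```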
Pauluhn, Jürgen
2014-12-20
Convincing evidence suggests that poorly soluble low-toxicity particles (PSP) exert two unifying major modes of action (MoA): one appears to be deposition-related and acute, whilst the other is retention-related and occurs with particle accumulation in the lung and associated persistent inflammation. Each MoA has its own study-specific and cumulative-dose-specific adverse outcome and metric. Modeling procedures were applied to better understand to what extent protocol variables may predetermine any specific study outcome. The results from modeled and empirical studies served as the basis for deriving OELs from modeled and empirically confirmed directions. This analysis demonstrates that the accumulated retained particle displacement volume was the most prominent unifying denominator linking the pulmonary retained volumetric particle dose to inflammogenicity and toxicity. However, conventional study design may not always be appropriate to unequivocally discriminate the surface thermodynamics-related acute adversity from the cumulative retention volume-related chronic adversity. Thus, in the absence of kinetically designed studies, it may become increasingly challenging to differentiate substance-specific deposition-related acute effects from the more chronic retained cumulative dose-related effects. It is concluded that the degree of dissolution of particles in the pulmonary environment is generally underestimated, and that dissolution may contribute to toxicity through decreased particle size and the associated changes in thermodynamics and kinetics of dissolution. Accordingly, acute deposition-related outcomes become an important secondary variable within the pulmonary microenvironment. In turn, lung-overload-related chronic adversities seem to be better described by the particle volume metric. This analysis supports the concept that 'self-validating', hypothesis-based computational study design delivers the highest level of unifying information required for the risk characterization of PSP. In demonstrating that the PSP under consideration is truly following the generic PSP paradigm, this higher level of mechanistic information reduces the potential uncertainty involved with OEL derivation.
Visual aftereffects and sensory nonlinearities from a single statistical framework
Laparra, Valero; Malo, Jesús
2015-01-01
When adapted to a particular scenery our senses may fool us: colors are misinterpreted, certain spatial patterns seem to fade out, and static objects appear to move in reverse. A mere empirical description of the mechanisms tuned to color, texture, and motion may tell us where these visual illusions come from. However, such empirical models of gain control do not explain why these mechanisms work in this apparently dysfunctional manner. Current normative explanations of aftereffects based on scene statistics derive gain changes by (1) invoking decorrelation and linear manifold matching/equalization, or (2) using nonlinear divisive normalization obtained from parametric scene models. These principled approaches have different drawbacks: the first is not compatible with the known saturation nonlinearities in the sensors and it cannot fully accomplish information maximization due to its linear nature. In the second, gain change is almost determined a priori by the assumed parametric image model linked to divisive normalization. In this study we show that both the response changes that lead to aftereffects and the nonlinear behavior can be simultaneously derived from a single statistical framework: the Sequential Principal Curves Analysis (SPCA). As opposed to mechanistic models, SPCA is not intended to describe how physiological sensors work, but it is focused on explaining why they behave as they do. Nonparametric SPCA has two key advantages as a normative model of adaptation: (i) it is better than linear techniques as it is a flexible equalization that can be tuned for more sensible criteria other than plain decorrelation (either full information maximization or error minimization); and (ii) it makes no a priori functional assumption regarding the nonlinearity, so the saturations emerge directly from the scene data and the goal (and not from the assumed function). It turns out that the optimal responses derived from these more sensible criteria and SPCA are consistent with dysfunctional behaviors such as aftereffects. PMID:26528165
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Melkamu; Ye, Sheng; Li, Hongyi
2014-07-19
Subsurface stormflow is an important component of the rainfall-runoff response, especially in steep forested regions. However, its contribution is poorly represented in the current generation of land surface hydrological models (LSMs) and catchment-scale rainfall-runoff models. The lack of physical basis of common parameterizations precludes a priori estimation (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global models. This paper is aimed at deriving physically based parameterizations of the storage-discharge relationship relating to subsurface flow. These parameterizations are derived through a two-step up-scaling procedure: firstly, through simulations with a physically based (Darcian) subsurface flow model for idealized three-dimensional rectangular hillslopes, accounting for within-hillslope random heterogeneity of soil hydraulic properties, and secondly, through subsequent up-scaling to the catchment scale by accounting for between-hillslope and within-catchment heterogeneity of topographic features (e.g., slope). These theoretical simulation results produced parameterizations of the storage-discharge relationship in terms of soil hydraulic properties, topographic slope and their heterogeneities, which were consistent with results of previous studies. Yet, regionalization of the resulting storage-discharge relations across 50 actual catchments in the eastern United States, and a comparison of the regionalized results with equivalent empirical results obtained on the basis of analysis of observed streamflow recession curves, revealed a systematic inconsistency. It was found that the difference between the theoretical and empirically derived results could be explained, to first order, by climate in the form of the climatic aridity index. This suggests a possible codependence of climate, soils, vegetation and topographic properties, and suggests that subsurface flow parameterization needed for ungauged locations must account for both the physics of flow in heterogeneous landscapes, and the co-dependence of soil and topographic properties with climate, including possibly the mediating role of vegetation.
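The empirical side of this comparison, extracting storage-discharge parameters from observed streamflow recession curves, is commonly done with Brutsaert-Nieber analysis. A minimal sketch, assuming daily data and a log-log least-squares fit:

```python
import numpy as np

def recession_parameters(q, dt=1.0):
    """Fit -dQ/dt = a * Q**b to the falling limbs of a streamflow series
    (Brutsaert-Nieber recession analysis, a standard empirical
    counterpart to storage-discharge parameterizations; the paper's own
    fitting procedure may differ)."""
    dq = np.diff(q) / dt
    qm = 0.5 * (q[1:] + q[:-1])
    rec = (dq < 0) & (qm > 0)                    # recession points only
    b, loga = np.polyfit(np.log(qm[rec]), np.log(-dq[rec]), 1)
    return np.exp(loga), b

# synthetic recession: Q(t) = (1 + 0.05 t)^-2 implies -dQ/dt = 0.1 Q^1.5
t = np.arange(60.0)
q = (1.0 + 0.05 * t) ** -2.0
a, b = recession_parameters(q)
print(f"a = {a:.3f} (expect 0.1), b = {b:.2f} (expect 1.5)")
```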
Haynos, Ann F; Pearson, Carolyn M; Utzinger, Linsey M; Wonderlich, Stephen A; Crosby, Ross D; Mitchell, James E; Crow, Scott J; Peterson, Carol B
2017-05-01
Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Using variables from the Dimensional Assessment of Personality Pathology-Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = 0.03) and purging (p = 0.01) frequency at EOT and binge eating frequency at follow-up (p = 0.045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = 0.04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Empirically derived personality subtyping appears to be a valid classification system with potential to guide eating disorder treatment decisions. (Int J Eat Disord 2017; 50:506-514). © 2016 Wiley Periodicals, Inc.
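As a sketch of the clustering step only (not the study's data or variables), K-means on standardized trait scores with k = 3 might look like this; the trait dimensions and group means are hypothetical stand-ins for DAPP-BQ scales.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# hypothetical stand-ins for DAPP-BQ trait scores (patients x traits)
traits = np.vstack([
    rng.normal([70, 65, 40], 8, (30, 3)),   # under-controlled-like profile
    rng.normal([40, 45, 75], 8, (30, 3)),   # over-controlled-like profile
    rng.normal([45, 45, 45], 8, (20, 3)),   # low-psychopathology-like profile
])

z = StandardScaler().fit_transform(traits)
subtype = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
print(np.bincount(subtype))
# the subtype labels would then enter generalized linear models as a
# categorical predictor of end-of-treatment binge/purge frequencies
```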
Stellar Parameters for Trappist-1
NASA Astrophysics Data System (ADS)
Van Grootel, Valérie; Fernandes, Catarina S.; Gillon, Michael; Jehin, Emmanuel; Manfroid, Jean; Scuflaire, Richard; Burgasser, Adam J.; Barkaoui, Khalid; Benkhaldoun, Zouhair; Burdanov, Artem; Delrez, Laetitia; Demory, Brice-Olivier; de Wit, Julien; Queloz, Didier; Triaud, Amaury H. M. J.
2018-01-01
TRAPPIST-1 is an ultracool dwarf star transited by seven Earth-sized planets, for which thorough characterization of atmospheric properties, surface conditions encompassing habitability, and internal compositions is possible with current and next-generation telescopes. Accurate modeling of the star is essential to achieve this goal. We aim to obtain updated stellar parameters for TRAPPIST-1 based on new measurements and evolutionary models, compared to those used in discovery studies. We present a new measurement for the parallax of TRAPPIST-1, 82.4 ± 0.8 mas, based on 188 epochs of observations with the TRAPPIST and Liverpool Telescopes from 2013 to 2016. This revised parallax yields an updated luminosity of L* = (5.22 ± 0.19) × 10⁻⁴ L☉, which is very close to the previous estimate but almost two times more precise. We next present an updated estimate for the TRAPPIST-1 stellar mass, based on two approaches: mass from stellar evolution modeling, and empirical mass derived from dynamical masses of equivalently classified ultracool dwarfs in astrometric binaries. We combine them using a Monte Carlo approach to derive a semi-empirical estimate for the mass of TRAPPIST-1. We also derive an estimate for the radius by combining this mass with the stellar density inferred from transits, as well as an estimate for the effective temperature from our revised luminosity and radius. Our final results are M* = 0.089 ± 0.006 M☉, R* = 0.121 ± 0.003 R☉, and Teff = 2516 ± 41 K. Considering the degree to which the TRAPPIST-1 system will be scrutinized in coming years, these revised and more precise stellar parameters should be considered when assessing the properties of TRAPPIST-1 planets.
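The final two steps, radius from the semi-empirical mass plus the transit-derived stellar density and effective temperature from the revised luminosity and radius, follow from R ∝ (M/ρ)^(1/3) and the Stefan-Boltzmann law. A minimal Monte Carlo propagation sketch using the abstract's values; the density uncertainty and the solar reference values are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

L = rng.normal(5.22e-4, 0.19e-4, N)    # L/Lsun, from the abstract
M = rng.normal(0.089, 0.006, N)        # M/Msun, semi-empirical estimate
rho = rng.normal(50.2, 2.0, N)         # rho/rho_sun from transits; the
                                       # value and error are assumed here

R = (M / rho) ** (1.0 / 3.0)           # R/Rsun, since rho ~ M / R^3
Teff = 5772.0 * L**0.25 / np.sqrt(R)   # Stefan-Boltzmann, Tsun = 5772 K

print(f"R = {R.mean():.3f} +/- {R.std():.3f} Rsun")
print(f"Teff = {Teff.mean():.0f} +/- {Teff.std():.0f} K")
```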
NASA Astrophysics Data System (ADS)
Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.
2014-09-01
Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.
Stochastic modelling of non-stationary financial assets
NASA Astrophysics Data System (ADS)
Estevens, Joana; Rocha, Paulo; Boto, João P.; Lind, Pedro G.
2017-11-01
We model non-stationary volume-price distributions with a log-normal distribution and collect the time series of its two parameters. The time series of the two parameters are shown to be stationary and Markov-like and consequently can be modelled with Langevin equations, which are derived directly from their series of values. Having the evolution equations of the log-normal parameters, we reconstruct the statistics of the first moments of volume-price distributions, which fit the empirical data well. Finally, the proposed framework is general enough to study other non-stationary stochastic variables in other research fields, namely, biology, medicine, and geology.
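Deriving Langevin equations "directly from their series of values" is commonly done by estimating the first two conditional moments of the increments (Kramers-Moyal coefficients). A minimal sketch under that assumption, checked on a simulated Ornstein-Uhlenbeck process:

```python
import numpy as np

def kramers_moyal(x, dt, bins=40):
    """Estimate drift D1(x) and diffusion D2(x) of a Markov series from
    conditional moments of its increments, a standard way to read a
    Langevin equation off the data (the authors' estimator may differ)."""
    dx = np.diff(x)
    centers = np.linspace(x.min(), x.max(), bins)
    idx = np.digitize(x[:-1], centers)
    d1 = np.array([dx[idx == i].mean() / dt if np.any(idx == i) else np.nan
                   for i in range(bins)])
    d2 = np.array([(dx[idx == i] ** 2).mean() / (2 * dt)
                   if np.any(idx == i) else np.nan for i in range(bins)])
    return centers, d1, d2

# check on a simulated Ornstein-Uhlenbeck process dx = -x dt + dW
rng = np.random.default_rng(3)
dt, n = 0.01, 100_000
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] - x[k] * dt + np.sqrt(dt) * rng.normal()

c, d1, d2 = kramers_moyal(x, dt)
# d1 should track -c (linear drift) and d2 should be ~0.5 (diffusion)
```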
NASA Technical Reports Server (NTRS)
Blum, P. W.; Harris, I.
1973-01-01
The equations of horizontal motion of the neutral atmosphere between 120 and 500 km are integrated with the inclusion of all the nonlinear terms of the convective derivative and the viscous forces due to vertical and horizontal velocity gradients. Empirical models of the distribution of neutral and charged particles are assumed to be known. The model of velocities developed is a steady state model. In part 1 the mathematical method used in the integration of the Navier-Stokes equations is described and the various forces are analysed.
Science and Technology Investment Strategy for Squadron Level Training
1993-05-01
be derived from empirically sound and theory-based instructional models. Comment: The automation of instructional design could favorably impact the...require a significant amount of time to develop and where the underlying theory and/or applications hardware and software is in flux. Long-term efforts...training or training courses. It does not refer to the initial evaluation of individuals entering Upgrade Training (UGT). It does refer to the evaluation of
Integration of GRACE and GNET GPS in modeling the deglaciation of Greenland
NASA Astrophysics Data System (ADS)
Knudsen, P.; Madsen, F. B.; Khan, S. A.; Bevis, M. G.; van Dam, T. M.
2017-12-01
The use of the monthly gravity fields from the Gravity Recovery and Climate Experiment (GRACE) has become essential when assessing and modeling the mass changes of the ice sheets. The recent degradation of the current mission, however, has hampered the continuous monitoring of ice sheet masses, at least until the GRACE Follow-On mission becomes operational. In recent years it has been demonstrated that mass changes can be observed by GPS receivers mounted on the adjacent bedrock. In particular, the Greenland GPS Network (GNET) has proven that GPS is a valuable technique for detecting mass changes through the Earth's elastic response. An integration of GNET with other observations of the Greenland ice sheet, e.g. satellite altimetry and GRACE, has advanced studies of GIA significantly. In this study, we aim at improving the monitoring of the ice sheet mass by utilizing the redundancy to reduce the influence of errors, to fill in data voids and, not least, to bridge the gap between GRACE and GRACE FO. Initial analyses are carried out to link GRACE and GNET time series empirically. EOF analyses are carried out to extract the main part of the variability and to isolate errors. Subsequently, empirical covariance functions are derived and used in the integration. Preliminary results are derived and inter-compared.
Markov switching of the electricity supply curve and power prices dynamics
NASA Astrophysics Data System (ADS)
Mari, Carlo; Cananà, Lucianna
2012-02-01
Regime-switching models seem to well capture the main features of power prices behavior in deregulated markets. In a recent paper, we have proposed an equilibrium methodology to derive electricity prices dynamics from the interplay between supply and demand in a stochastic environment. In particular, assuming that the supply function is described by a power law where the exponent is a two-state strictly positive Markov process, we derived a regime switching dynamics of power prices in which regime switches are induced by transitions between Markov states. In this paper, we provide a dynamical model to describe the random behavior of power prices where the only non-Brownian component of the motion is endogenously introduced by Markov transitions in the exponent of the electricity supply curve. In this context, the stochastic process driving the switching mechanism becomes observable, and we will show that the non-Brownian component of the dynamics induced by transitions from Markov states is responsible for jumps and spikes of very high magnitude. The empirical analysis performed on three Australian markets confirms that the proposed approach seems quite flexible and capable of incorporating the main features of power prices time-series, thus reproducing the first four moments of log-returns empirical distributions in a satisfactory way.
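A minimal simulation sketch of the mechanism described: an equilibrium log-price equal to a mean-reverting log-demand multiplied by a supply-curve exponent that follows a two-state Markov chain. All parameter values are assumptions for illustration; the point is that Markov transitions in the exponent alone generate the jumps and spikes.

```python
import numpy as np

rng = np.random.default_rng(11)
n, dt = 2000, 1.0 / 250

gamma = np.array([1.0, 4.0])      # supply-curve exponent per regime (assumed)
p_exit = np.array([0.01, 0.20])   # per-step regime exit probabilities (assumed)

states = np.empty(n, dtype=int)
s = 0
for k in range(n):
    if rng.random() < p_exit[s]:
        s = 1 - s
    states[k] = s

d = np.zeros(n)                   # mean-reverting log-demand (OU, assumed)
for k in range(n - 1):
    d[k + 1] = d[k] - 2.0 * d[k] * dt + 0.3 * np.sqrt(dt) * rng.normal()

log_price = gamma[states] * d     # equilibrium price from supply vs. demand
# transitions of `states` are the only non-Brownian ingredient, yet they
# produce the jumps and spikes characteristic of power-price series
```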
Zheng, Wendong; Zeng, Pingping
2016-01-01
Most of the empirical studies on stochastic volatility dynamics favour the 3/2 specification over the square-root (CIR) process in the Heston model. In the context of option pricing, the 3/2 stochastic volatility model (SVM) is reported to be able to capture the volatility skew evolution better than the Heston model. In this article, we make a thorough investigation of the analytic tractability of the 3/2 SVM by proposing a closed-form formula for the partial transform of the triple joint transition density of the log asset price, the quadratic variation (continuous realized variance), and the instantaneous variance. Two distinct formulations are provided for deriving the main result. The closed-form partial transform enables us to deduce a variety of marginal partial transforms and characteristic functions, and it plays a crucial role in pricing discretely sampled variance derivatives and exotic options that depend on both the asset price and quadratic variation. Various applications and numerical examples on pricing moment swaps and timer options with discrete monitoring feature are given to demonstrate the versatility of the partial transform under the 3/2 model. PMID:28706460
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mercer, D.E.
The objectives are threefold: (1) to perform an analytical survey of household production theory as it relates to natural-resource problems in less-developed countries, (2) to develop a household production model of fuelwood decision making, (3) to derive a theoretical framework for travel-cost demand studies of international nature tourism. The model of household fuelwood decision making provides a rich array of implications and predictions for empirical analysis. For example, it is shown that fuelwood and modern fuels may be either substitutes or complements depending on the interaction of the gross-substitution and income-expansion effects. Therefore, empirical analysis should precede adoption of any inter-fuel substitution policies such as subsidizing kerosene. The fuelwood model also provides a framework for analyzing the conditions and factors determining entry and exit by households into the wood-burning subpopulation, a key for designing optimal household energy policies in the Third World. The international nature tourism travel cost model predicts that the demand for nature tourism is an aggregate of the demand for the individual activities undertaken during the trip.
Modeling the characteristics of wheel/rail rolling noise
NASA Astrophysics Data System (ADS)
Lui, Wai Keung; Li, Kai Ming; Frommer, Glenn H.
2005-04-01
To study the sound radiation characteristics of a passing train, four sets of noise measurements for different train operational conditions have been conducted at three different sites, including ballast tracks at grade and railway on a concrete viaduct. The time histories computed by the horizontal radiation models were compared with the measured noise profiles. The measured sound exposure levels are used to deduce the vertical directivity pattern for different railway systems. It is found that the vertical directivity of different railway systems shows a rather similar pattern. The vertical directivity of train noise is shown to increase up to about 30° before reducing to a minimum at 90°. A multipole expansion model is proposed to account for the vertical radiation directivity of the train noise. An empirical formula, which has been derived, compares well with the experimental data. The empirical model is found to be applicable to different train/rail systems at train speeds ranging up to 120 km/h in this study. [Work supported by MTR Corporation Ltd., Innovation Technology Commission of the HKSAR Government and The Hong Kong Polytechnic University.]
Quality and price--impact on patient satisfaction.
Pantouvakis, Angelos; Bouranta, Nancy
2014-01-01
The purpose of this paper is to synthesize existing quality-measurement models and apply them to healthcare by combining a Nordic service-quality model with an American service performance model. Results are based on a questionnaire survey of 1,298 respondents. Service quality dimensions were derived and related to satisfaction by employing a multinomial logistic model, which allows prediction and service improvement. Qualitative and empirical evidence indicates that customer satisfaction and service quality are multi-dimensional constructs, whose quality components, together with convenience and cost, influence the customer's overall satisfaction. The proposed model identifies important quality and satisfaction issues. It also enables transitions between different responses in different studies to be compared.
Modeling of outgassing and matrix decomposition in carbon-phenolic composites
NASA Technical Reports Server (NTRS)
Mcmanus, Hugh L.
1993-01-01
A new release rate equation to model the phase change of water to steam in composite materials was derived from the theory of molecular diffusion and equilibrium moisture concentration. The new model is dependent on internal pressure, the microstructure of the voids and channels in the composite materials, and the diffusion properties of the matrix material. Hence, it is more fundamental and accurate than the empirical Arrhenius rate equation currently in use. The model was mathematically formalized and integrated into the thermostructural analysis code CHAR. Parametric studies on variation of several parameters have been done. Comparisons to Arrhenius and straight-line models show that the new model produces physically realistic results under all conditions.
Toward a Model-Based Approach to the Clinical Assessment of Personality Psychopathology
Eaton, Nicholas R.; Krueger, Robert F.; Docherty, Anna R.; Sponheim, Scott R.
2015-01-01
Recent years have witnessed tremendous growth in the scope and sophistication of statistical methods available to explore the latent structure of psychopathology, involving continuous, discrete, and hybrid latent variables. The availability of such methods has fostered optimism that they can facilitate movement from classification primarily crafted through expert consensus to classification derived from empirically-based models of psychopathological variation. The explication of diagnostic constructs with empirically supported structures can then facilitate the development of assessment tools that appropriately characterize these constructs. Our goal in this paper is to illustrate how new statistical methods can inform conceptualization of personality psychopathology and therefore its assessment. We use magical thinking as an example, because both theory and earlier empirical work suggested the possibility of discrete aspects to the latent structure of personality psychopathology, particularly forms of psychopathology involving distortions of reality testing, yet other data suggest that personality psychopathology is generally continuous in nature. We directly compared the fit of a variety of latent variable models to magical thinking data from a sample enriched with clinically significant variation in psychotic symptomatology for explanatory purposes. Findings generally suggested a continuous latent variable model best represented magical thinking, but results varied somewhat depending on different indices of model fit. We discuss the implications of the findings for classification and applied personality assessment. We also highlight some limitations of this type of approach that are illustrated by these data, including the importance of substantive interpretation, in addition to use of model fit indices, when evaluating competing structural models. PMID:24007309
PROPERTIES OF 42 SOLAR-TYPE KEPLER TARGETS FROM THE ASTEROSEISMIC MODELING PORTAL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metcalfe, T. S.; Mathur, S.; Creevey, O. L.
2014-10-01
Recently the number of main-sequence and subgiant stars exhibiting solar-like oscillations that are resolved into individual mode frequencies has increased dramatically. While only a few such data sets were available for detailed modeling just a decade ago, the Kepler mission has produced suitable observations for hundreds of new targets. This rapid expansion in observational capacity has been accompanied by a shift in analysis and modeling strategies to yield uniform sets of derived stellar properties more quickly and easily. We use previously published asteroseismic and spectroscopic data sets to provide a uniform analysis of 42 solar-type Kepler targets from the Asteroseismic Modeling Portal. We find that fitting the individual frequencies typically doubles the precision of the asteroseismic radius, mass, and age compared to grid-based modeling of the global oscillation properties, and improves the precision of the radius and mass by about a factor of three over empirical scaling relations. We demonstrate the utility of the derived properties with several applications.
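For context, the empirical scaling relations cited as the least precise tier have a closed form. A sketch with commonly used solar reference values; these constants are assumptions, not necessarily the calibration used by the Asteroseismic Modeling Portal.

```python
def scaling_mass_radius(nu_max, delta_nu, teff,
                        nu_max_sun=3090.0, delta_nu_sun=135.1,
                        teff_sun=5772.0):
    """Empirical asteroseismic scaling relations in solar units. The
    solar reference values are common choices, not necessarily those
    used by the Asteroseismic Modeling Portal."""
    r = (nu_max / nu_max_sun) * (delta_nu / delta_nu_sun) ** -2 \
        * (teff / teff_sun) ** 0.5
    m = (nu_max / nu_max_sun) ** 3 * (delta_nu / delta_nu_sun) ** -4 \
        * (teff / teff_sun) ** 1.5
    return m, r

# a roughly Sun-like Kepler target (illustrative inputs, in uHz and K)
m, r = scaling_mass_radius(3100.0, 135.5, 5750.0)
print(f"M = {m:.2f} Msun, R = {r:.2f} Rsun")
```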
An accurate behavioral model for single-photon avalanche diode statistical performance simulation
NASA Astrophysics Data System (ADS)
Xu, Yue; Zhao, Tingchen; Li, Ding
2018-01-01
An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and this behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating high simulation accuracy.
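The analytical after-pulsing expression itself is not reproduced in the abstract, but the trapping and de-trapping mechanism it summarizes can be illustrated with a toy Monte Carlo sketch; every parameter value below is an assumption, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def afterpulse(n_carriers, p_trap, tau, t_dead, p_trigger):
    """One avalanche: each carrier may be trapped (p_trap); a trapped
    carrier released after the dead time t_dead (exponential lifetime
    tau) retriggers with probability p_trigger. All numbers used below
    are assumptions for illustration, not values from the paper."""
    n_trapped = rng.binomial(n_carriers, p_trap)
    release = rng.exponential(tau, n_trapped)
    hits = (release > t_dead) & (rng.random(n_trapped) < p_trigger)
    return hits.any()

trials = 20_000
p_ap = np.mean([afterpulse(5_000, 1e-4, 50e-9, 40e-9, 0.3)
                for _ in range(trials)])
print(f"after-pulsing probability ~ {p_ap:.3f}")
```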
Inner Magnetospheric Electric Fields Derived from IMAGE EUV
NASA Technical Reports Server (NTRS)
Gallagher, D. L.; Adrian, M. L.
2007-01-01
The local and global patterns of plasmaspheric plasma transport reflect the influence of electric fields imposed by all sources in the inner magnetosphere. Image sequences of the thermal plasma distribution obtained from the IMAGE Mission Extreme Ultraviolet Imager can be used to derive plasma motions and, using a magnetic field model, the corresponding electric fields. These motions and fields directly reflect the dynamic coupling of injected plasmasheet plasma and the ionosphere, in addition to solar wind and atmospheric drivers. What is being learned about the morphology of inner magnetospheric electric fields during storm and quiet conditions from this new empirical tool will be presented and discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busuioc, A.; Storch, H. von; Schnur, R.
Empirical downscaling procedures relate large-scale atmospheric features with local features such as station rainfall in order to facilitate local scenarios of climate change. The purpose of the present paper is twofold: first, a downscaling technique is used as a diagnostic tool to verify the performance of climate models on the regional scale; second, a technique is proposed for verifying the validity of empirical downscaling procedures in climate change applications. The case considered is regional seasonal precipitation in Romania. The downscaling model is a regression based on canonical correlation analysis between observed station precipitation and European-scale sea level pressure (SLP). The climate models considered here are the T21 and T42 versions of the Hamburg ECHAM3 atmospheric GCM run in time-slice mode. The climate change scenario refers to the expected time of doubled carbon dioxide concentrations around the year 2050. Generally, applications of statistical downscaling to climate change scenarios have been based on the assumption that the empirical link between the large-scale and regional parameters remains valid under a changed climate. In this study, a rationale is proposed for this assumption by showing the consistency of the 2 × CO2 GCM scenarios in winter, derived directly from the gridpoint data, with the regional scenarios obtained through empirical downscaling. Since the skill of the GCMs in regional terms is already established, it is concluded that the downscaling technique is adequate for describing climatically changing regional and local conditions, at least for precipitation in Romania during winter.
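A minimal sketch of the CCA-based downscaling step: fit a map from large-scale SLP fields to station precipitation on historical data, then apply it to scenario SLP. The data here are entirely synthetic stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
# synthetic training data: winters x SLP grid points, winters x stations
n, p_slp, p_sta = 80, 200, 14
slp = rng.normal(size=(n, p_slp))
precip = slp[:, :3] @ rng.normal(size=(3, p_sta)) \
    + 0.5 * rng.normal(size=(n, p_sta))

cca = CCA(n_components=3).fit(slp, precip)   # downscaling regression
# apply the fitted SLP -> precipitation map to scenario SLP fields
precip_scenario = cca.predict(rng.normal(size=(10, p_slp)))
print(precip_scenario.shape)                 # (10, 14) station series
```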
NASA Astrophysics Data System (ADS)
Kelly, Angela
2017-01-01
Sociopsychological theories and empirical research provide a framework for exploring causal pathways and targeted interventions to increase the representation of women in post-secondary physics. Women earned only 19.7 percent of physics undergraduate degrees in 2012 (APS, 2015). This disparity has been attributed to a variety of factors, including chilly classroom climates, gender-based stereotypes, persistent self-doubt, and a lack of role models in physics departments. The theoretical framework for this research synthesis is based upon several psychological theories of sociocognitive behavior and is derived from three general constructs: 1) self-efficacy and self-concept; 2) expectancy value and planned behavior; and 3) motivation and self-determination. Recent studies have suggested that the gender discrepancy in physics participation may be alleviated by applying interventions derived from social cognitive research. These interventions include social and familial support, welcoming and collaborative classroom environments, critical feedback, and identification with a malleable view of intelligence. This research provides empirically supported mechanisms for university stakeholders to implement reforms that will increase women's participation in physics.
Two-state model based on the block-localized wave function method
NASA Astrophysics Data System (ADS)
Mo, Yirong
2007-06-01
The block-localized wave function (BLW) method is a variant of ab initio valence bond method but retains the efficiency of molecular orbital methods. It can derive the wave function for a diabatic (resonance) state self-consistently and is available at the Hartree-Fock (HF) and density functional theory (DFT) levels. In this work we present a two-state model based on the BLW method. Although numerous empirical and semiempirical two-state models, such as the Marcus-Hush two-state model, have been proposed to describe a chemical reaction process, the advantage of this BLW-based two-state model is that no empirical parameter is required. Important quantities such as the electronic coupling energy, structural weights of two diabatic states, and excitation energy can be uniquely derived from the energies of two diabatic states and the adiabatic state at the same HF or DFT level. Two simple examples of formamide and thioformamide in the gas phase and aqueous solution were presented and discussed. The solvation of formamide and thioformamide was studied with the combined ab initio quantum mechanical and molecular mechanical Monte Carlo simulations, together with the BLW-DFT calculations and analyses. Due to the favorable solute-solvent electrostatic interaction, the contribution of the ionic resonance structure to the ground state of formamide and thioformamide significantly increases, and for thioformamide the ionic form is even more stable than the covalent form. Thus, thioformamide in aqueous solution is essentially ionic rather than covalent. Although our two-state model in general underestimates the electronic excitation energies, it can predict relative solvatochromic shifts well. For instance, the intense π → π* transition for formamide upon solvation undergoes a redshift of 0.3 eV, compared with the experimental data (0.4-0.5 eV).
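Since the two-state quantities are said to follow uniquely from the diabatic and adiabatic energies, here is a small sketch of that step: given BLW diabatic energies and the adiabatic ground-state energy, the coupling, diabatic weights, and excitation energy come out of a 2x2 secular problem. Overlap between resonance structures is neglected here for simplicity (the BLW formalism itself retains it), and the energies are hypothetical.

```python
import numpy as np

def two_state(e1, e2, eg):
    """Solve the 2x2 secular problem given diabatic energies e1, e2 and
    the adiabatic ground-state energy eg (hartree). Overlap between the
    resonance structures is neglected, a simplification relative to the
    full BLW treatment."""
    h12 = np.sqrt((e1 - eg) * (e2 - eg))       # electronic coupling
    evals, evecs = np.linalg.eigh(np.array([[e1, h12], [h12, e2]]))
    weights = evecs[:, 0] ** 2                 # diabatic weights, ground state
    return h12, weights, evals[1] - evals[0]   # coupling, weights, excitation

# hypothetical energies: covalent state, ionic state, adiabatic ground state
h12, w, de = two_state(-169.900, -169.850, -169.930)
print(f"H12 = {h12:.4f} Eh, weights = {w.round(3)}, dE = {de:.4f} Eh")
```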
NASA Astrophysics Data System (ADS)
Zhang, Y.; Guanter, L.; Van der Tol, C.; Joiner, J.; Berry, J. A.
2015-12-01
Global sun-induced chlorophyll fluorescence (SIF) retrievals are currently available from several satellites. SIF is intrinsically linked to photosynthesis, so the new data sets make it possible to link remotely sensed vegetation parameters to the actual photosynthetic activity of plants. In this study, we used space measurements of SIF together with the Soil-Canopy Observation of Photosynthesis and Energy (SCOPE) balance model to simulate the regional photosynthetic uptake of croplands in the US corn belt. SCOPE couples fluorescence and photosynthesis at the leaf and canopy levels. To do this, we first retrieved a key parameter of the photosynthesis model, the maximum rate of carboxylation (Vcmax), from field measurements of CO2 and water flux during 2007-2012 at crop eddy covariance flux sites in the Midwestern US. We then empirically calibrated Vcmax against the apparent fluorescence yield, i.e., SIF divided by PAR. The SIF retrievals are from the European GOME-2 instrument onboard the MetOp-A platform. The resulting apparent fluorescence yield shows a stronger relationship with Vcmax during the growing season than the widely used vegetation indices EVI and NDVI. New seasonal and regional Vcmax maps were derived from the calibration model for the cropland of the corn belt. The uncertainties of Vcmax were also estimated through Gaussian error propagation. With the newly derived Vcmax maps, we modeled regional cropland GPP during the growing season for the Midwestern USA, with meteorological data from the MERRA reanalysis and LAI from the MODIS product (MCD15A2). The results show improvement in the seasonal and spatial patterns of cropland productivity in comparison with both flux tower and agricultural inventory data.
How good are indirect tests at detecting recombination in human mtDNA?
White, Daniel James; Bryant, David; Gemmell, Neil John
2013-07-08
Empirical proof of human mitochondrial DNA (mtDNA) recombination in somatic tissues was obtained in 2004; however, a lack of irrefutable evidence exists for recombination in human mtDNA at the population level. Our inability to demonstrate convincingly a signal of recombination in population data sets of human mtDNA sequence may be due, in part, to the ineffectiveness of current indirect tests. Previously, we tested some well-established indirect tests of recombination (linkage disequilibrium vs. distance using D' and r(2), Homoplasy Test, Pairwise Homoplasy Index, Neighborhood Similarity Score, and Max χ(2)) on sequence data derived from the only empirically confirmed case of human mtDNA recombination thus far and demonstrated that some methods were unable to detect recombination. Here, we assess the performance of these six well-established tests and explore what characteristics specific to human mtDNA sequence may affect their efficacy by simulating sequence under various parameters with levels of recombination (ρ) that vary around an empirically derived estimate for human mtDNA (population parameter ρ = 5.492). No test performed infallibly under any of our scenarios, and error rates varied across tests, whereas detection rates increased substantially with ρ values > 5.492. Under a model of evolution that incorporates parameters specific to human mtDNA, including rate heterogeneity, population expansion, and ρ = 5.492, successful detection rates are limited to a range of 7-70% across tests with an acceptable level of false-positive results: the neighborhood similarity score incompatibility test performed best overall under these parameters. Population growth seems to have the greatest impact on recombination detection probabilities across all models tested, likely due to its impact on sequence diversity. The implications of our findings on our current understanding of mtDNA recombination in humans are discussed.
An analytical model of iceberg drift
NASA Astrophysics Data System (ADS)
Eisenman, I.; Wagner, T. J. W.; Dell, R.
2017-12-01
Icebergs transport freshwater from glaciers and ice shelves, releasing the freshwater into the upper ocean thousands of kilometers from the source. This influences ocean circulation through its effect on seawater density. A standard empirical rule-of-thumb for estimating iceberg trajectories is that they drift at the ocean surface current velocity plus 2% of the atmospheric surface wind velocity. This relationship has been observed in empirical studies for decades, but it has never previously been physically derived or justified. In this presentation, we consider the momentum balance for an individual iceberg, which includes nonlinear drag terms. Applying a series of approximations, we derive an analytical solution for the iceberg velocity as a function of time. In order to validate the model, we force it with surface velocity and temperature data from an observational state estimate and compare the results with iceberg observations in both hemispheres. We show that the analytical solution reduces to the empirical 2% relationship in the asymptotic limit of small icebergs (or strong winds), which approximately applies for typical Arctic icebergs. We find that the 2% value arises due to a term involving the drag coefficients for water and air and the densities of the iceberg, ocean, and air. In the opposite limit of large icebergs (or weak winds), which approximately applies for typical Antarctic icebergs with horizontal length scales greater than about 12 km, we find that the 2% relationship is not applicable and that icebergs instead move with the ocean current, unaffected by the wind. The two asymptotic regimes can be understood by considering how iceberg size influences the relative importance of the wind and ocean current drag terms compared with the Coriolis and pressure gradient force terms in the iceberg momentum balance.
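In the small-iceberg limit discussed above, the analytical solution collapses to drift at the ocean velocity plus a fixed fraction gamma of the wind, with gamma set by the drag coefficients and fluid densities. A sketch follows; the drag coefficients are illustrative assumptions chosen to land near the empirical 2%, not the values used in the presentation.

```python
import numpy as np

# Small-iceberg (strong-wind) limit: drift = ocean velocity + gamma * wind,
# with gamma = sqrt(rho_air * C_air / (rho_water * C_water)).
rho_air, rho_water = 1.2, 1027.0     # kg/m^3
c_air, c_water = 0.5, 1.8            # bulk drag coefficients (assumed)

gamma = np.sqrt(rho_air * c_air / (rho_water * c_water))
print(f"gamma = {gamma:.3f}")        # ~0.02: the 2% rule-of-thumb

def small_iceberg_drift(v_ocean, v_wind):
    """Asymptotic drift velocity; complex numbers serve as 2-D vectors."""
    return v_ocean + gamma * v_wind

print(small_iceberg_drift(0.1 + 0.05j, 8.0 + 2.0j))  # m/s
```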
How Good Are Indirect Tests at Detecting Recombination in Human mtDNA?
White, Daniel James; Bryant, David; Gemmell, Neil John
2013-01-01
Empirical proof of human mitochondrial DNA (mtDNA) recombination in somatic tissues was obtained in 2004; however, a lack of irrefutable evidence exists for recombination in human mtDNA at the population level. Our inability to demonstrate convincingly a signal of recombination in population data sets of human mtDNA sequence may be due, in part, to the ineffectiveness of current indirect tests. Previously, we tested some well-established indirect tests of recombination (linkage disequilibrium vs. distance using D′ and r2, Homoplasy Test, Pairwise Homoplasy Index, Neighborhood Similarity Score, and Max χ2) on sequence data derived from the only empirically confirmed case of human mtDNA recombination thus far and demonstrated that some methods were unable to detect recombination. Here, we assess the performance of these six well-established tests and explore what characteristics specific to human mtDNA sequence may affect their efficacy by simulating sequence under various parameters with levels of recombination (ρ) that vary around an empirically derived estimate for human mtDNA (population parameter ρ = 5.492). No test performed infallibly under any of our scenarios, and error rates varied across tests, whereas detection rates increased substantially with ρ values > 5.492. Under a model of evolution that incorporates parameters specific to human mtDNA, including rate heterogeneity, population expansion, and ρ = 5.492, successful detection rates are limited to a range of 7−70% across tests with an acceptable level of false-positive results: the neighborhood similarity score incompatibility test performed best overall under these parameters. Population growth seems to have the greatest impact on recombination detection probabilities across all models tested, likely due to its impact on sequence diversity. The implications of our findings on our current understanding of mtDNA recombination in humans are discussed. PMID:23665874
Impact of orbit modeling on DORIS station position and Earth rotation estimates
NASA Astrophysics Data System (ADS)
Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav
2014-04-01
The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting SRP scaling parameter or fixing it on pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, results for most of the monitored station parameters in comparable accuracy as the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for x-pole and 12% for y-pole. The experiments show that adjusting atmospheric drag scaling parameters each 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was however not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series as well as its mitigation by fixing the SRP parameters on pre-defined values.
Influence of the Atmospheric Model on Hanle Diagnostics
NASA Astrophysics Data System (ADS)
Ishikawa, Ryohko; Uitenbroek, Han; Goto, Motoshi; Iida, Yusuke; Tsuneta, Saku
2018-05-01
We clarify the uncertainty in the inferred magnetic field vector via the Hanle diagnostics of the hydrogen Lyman-α line when the stratification of the underlying atmosphere is unknown. We calculate the anisotropy of the radiation field with plane-parallel semi-empirical models under the nonlocal thermal equilibrium condition and derive linear polarization signals for all possible parameters of magnetic field vectors based on an analytical solution of the atomic polarization and Hanle effect. We find that the semi-empirical models of the inter-network region (FAL-A) and network region (FAL-F) show similar degrees of anisotropy in the radiation field, and this similarity results in an acceptable inversion error (e.g., ~40 G instead of 50 G in field strength and ~100° instead of 90° in inclination) when FAL-A and FAL-F are swapped. However, the semi-empirical models of FAL-C (averaged quiet-Sun model including both inter-network and network regions) and FAL-P (plage regions) yield an atomic polarization that deviates from all other models, which makes it difficult to precisely determine the magnetic field vector if the correct atmospheric model is not known (e.g., the inversion error is much larger than 40% of the field strength; > 70 G instead of 50 G). These results clearly demonstrate that the choice of model atmosphere is important for Hanle diagnostics. As is well known, one way to constrain the average atmospheric stratification is to measure the center-to-limb variation of the linear polarization signals. The dependence of the center-to-limb variations on the atmospheric model is also presented in this paper.
NASA Astrophysics Data System (ADS)
Ng, T. Y.; Yeak, S. H.; Liew, K. M.
2008-02-01
A multiscale technique is developed that couples empirical molecular dynamics (MD) and ab initio density functional theory (DFT). An overlap handshaking region between the empirical MD and ab initio DFT regions is formulated and the interaction forces between the carbon atoms are calculated based on the second-generation reactive empirical bond order potential, the long-range Lennard-Jones potential, as well as the quantum-mechanical DFT-derived forces. A density of point algorithm is also developed to track all interatomic distances in the system, and to activate and establish the DFT and handshaking regions. Through parallel computing, this multiscale method is used here to study the dynamic behavior of single-walled carbon nanotubes (SWCNTs) under asymmetrical axial compression. The detection of sideways buckling due to the asymmetrical axial compression is reported and discussed. It is noted from this study on SWCNTs that the MD results may be stiffer compared to those with electron density considerations, i.e. first-principles ab initio methods.
NASA Astrophysics Data System (ADS)
Moon, Joon-Young; Kim, Junhyeok; Ko, Tae-Wook; Kim, Minkyung; Iturria-Medina, Yasser; Choi, Jee-Hyun; Lee, Joseph; Mashour, George A.; Lee, Uncheol
2017-04-01
Identifying how spatially distributed information becomes integrated in the brain is essential to understanding higher cognitive functions. Previous computational and empirical studies suggest a significant influence of brain network structure on brain network function. However, there have been few analytical approaches to explain the role of network structure in shaping regional activities and directionality patterns. In this study, analytical methods are applied to a coupled oscillator model implemented in inhomogeneous networks. We first derive a mathematical principle that explains the emergence of directionality from the underlying brain network structure. We then apply the analytical methods to the anatomical brain networks of human, macaque, and mouse, successfully predicting simulation and empirical electroencephalographic data. The results demonstrate that the global directionality patterns in resting state brain networks can be predicted solely by their unique network structures. This study forms a foundation for a more comprehensive understanding of how neural information is directed and integrated in complex brain networks.
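The role of network structure in shaping directionality can be sketched numerically with Kuramoto-type phase oscillators on an inhomogeneous random network, using each node's time-averaged phase-lead relative to the mean field as a crude directionality measure. The network, frequencies, and coupling below are assumptions; the published analysis uses its own oscillator model and anatomical connectomes.

```python
import numpy as np

rng = np.random.default_rng(9)
n, k_mean, coupling, dt, steps = 50, 6, 1.0, 0.01, 20000

# random symmetric network with inhomogeneous degree
adj = (rng.random((n, n)) < k_mean / n).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T
omega = 2 * np.pi * rng.normal(10.0, 0.5, n)   # ~10 Hz intrinsic frequencies

theta = rng.uniform(0, 2 * np.pi, n)
leads = []
for step in range(steps):
    pull = (adj * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + coupling * pull)
    if step > steps // 2:                      # discard the transient
        z = np.exp(1j * theta)
        leads.append(np.angle(z * np.conj(z.mean())))

lead = np.mean(leads, axis=0)                  # phase-lead vs. mean field
degree = adj.sum(axis=1)
print(np.corrcoef(degree, lead)[0, 1])
# in such simulations hub nodes tend to phase-lag, which sets the
# dominant direction of interaction across the network
```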
Modeling the effect of topical oxygen therapy on wound healing
NASA Astrophysics Data System (ADS)
Agyingi, Ephraim; Ross, David; Maggelakis, Sophia
2011-11-01
Oxygen supply is a critical element for the healing of wounds. Clinical investigations have shown that topical oxygen therapy (TOT) increases the healing rate of wounds. The reason behind TOT increasing the healing rate of a wound remains unclear and hence current protocols are empirical. In this paper we present a mathematical model of wound healing that we use to simulate the application of TOT in the treatment of cutaneous wounds. At the core of our model is an account of the initiation of angiogenesis by macrophage-derived growth factors. The model is expressed as a system of reaction-diffusion equations. We present results of simulations for a version of the model with one spatial dimension.
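The abstract gives only the general form of the model (a system of reaction-diffusion equations), so the following is a minimal one-species, one-dimensional sketch of that class of model; the coefficients, the consumption term, and the representation of TOT as an elevated boundary oxygen level are all illustrative assumptions:

```python
import numpy as np

# Explicit finite-difference solution of u_t = D u_xx - k u on a 1-D wound
# domain; all coefficients are illustrative assumptions, not the paper's.
L, nx, dt, nt = 1.0, 101, 1e-4, 50000
dx = L / (nx - 1)
D, k = 1e-2, 1.0            # diffusion and consumption rates (assumed)
u = np.zeros(nx)            # oxygen concentration, initially depleted wound
u_boundary = 2.0            # elevated boundary value mimicking topical oxygen
for step in range(nt):
    lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2      # interior Laplacian
    u[1:-1] += dt * (D * lap - k * u[1:-1])
    u[0] = u[-1] = u_boundary                         # TOT at the wound edges
    # explicit-scheme stability needs dt <= dx^2/(2D); here 1e-4 < 5e-3
print(u.min(), u.max())     # oxygen profile across the wound after 5 s
```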
The new Kuznets cycle: a test of the Easterlin-Wachter-Wachter hypothesis.
Ahlburg, D A
1982-01-01
The aim of this paper is to evaluate the Easterlin-Wachter-Wachter model of the effect of the size of one generation on the size of the succeeding generation. An attempt is made "to identify and test empirically each component of the Easterlin-Wachter-Wachter model..., to show how the components collapse to give a closed demographic model of generation size, and to investigate the impacts of relative cohort size on the economic performance of a cohort." The models derived are then used to generate forecasts of the U.S. birth rate to the year 2050. The results provide support for the major components of the original model. (excerpt)
MacDonald, Donald D.; Dipinto, Lisa M.; Field, Jay; Ingersoll, Christopher G.; Long, Edward R.; Swartz, Richard C.
2000-01-01
Sediment-quality guidelines (SQGs) have been published for polychlorinated biphenyls (PCBs) using both empirical and theoretical approaches. Empirically based guidelines have been developed using the screening-level concentration, effects range, effects level, and apparent effects threshold approaches. Theoretically based guidelines have been developed using the equilibrium-partitioning approach. Empirically-based guidelines were classified into three general categories, in accordance with their original narrative intents, and used to develop three consensus-based sediment effect concentrations (SECs) for total PCBs (tPCBs), including a threshold effect concentration, a midrange effect concentration, and an extreme effect concentration. Consensus-based SECs were derived because they estimate the central tendency of the published SQGs and, thus, reconcile the guidance values that have been derived using various approaches. Initially, consensus-based SECs for tPCBs were developed separately for freshwater sediments and for marine and estuarine sediments. Because the respective SECs were statistically similar, the underlying SQGs were subsequently merged and used to formulate more generally applicable SECs. The three consensus-based SECs were then evaluated for reliability using matching sediment chemistry and toxicity data from field studies, dose-response data from spiked-sediment toxicity tests, and SQGs derived from the equilibrium-partitioning approach. The results of this evaluation demonstrated that the consensus-based SECs can accurately predict both the presence and absence of toxicity in field-collected sediments. Importantly, the incidence of toxicity increases incrementally with increasing concentrations of tPCBs. Moreover, the consensus-based SECs are comparable to the chronic toxicity thresholds that have been estimated from dose-response data and equilibrium-partitioning models. Therefore, consensus-based SECs provide a unifying synthesis of existing SQGs, reflect causal rather than correlative effects, and accurately predict sediment toxicity in PCB-contaminated sediments.
Multi-scale predictive modeling of nano-material and realistic electron devices
NASA Astrophysics Data System (ADS)
Palaria, Amritanshu
Among the challenges faced in the further miniaturization of electronic devices, the heavy influence of the detailed atomic configuration of the material(s) involved, which often differs significantly from that of the bulk material(s), is prominent. Device design has therefore become highly interrelated with material engineering at the atomic level. This thesis aims at outlining, with examples, a multi-scale simulation procedure that allows one to integrate material and device aspects of nano-electronic design to predict the behavior of novel devices with novel materials. This is carried out in four parts: (1) An approach that combines a higher-time-scale reactive force field analysis with density functional theory to predict the structure of new materials is demonstrated for the first time for nanowires. Novel stable structures for very small diameter silicon nanowires are predicted. (2) Density functional theory is used to show that the new nanowire structures derived in (1) above have properties different from diamond-core wires even though the surface bonds in some may be similar to the surface of bulk silicon. (3) The electronic structure of relatively large-scale germanium sections of realistically strained Si/strained Ge/strained Si nanowire heterostructures is computed using empirical tight binding, and it is shown that the average non-homogeneous strain in these structures drives their interesting non-conventional electronic characteristics, such as hole effective masses which decrease as the wire cross-section is reduced. (4) It is shown that tight binding, though empirical in nature, is not necessarily limited to the material and atomic structure for which the parameters have been empirically derived, but that simple changes may adapt the derived parameters to new bond environments. The Si (100) surface electronic structure is obtained from bulk Si parameters.
NASA Technical Reports Server (NTRS)
Lautenschlager, L.; Perry, C. R., Jr. (Principal Investigator)
1981-01-01
The development of formulae for the reduction of multispectral scanner measurements to a single value (vegetation index) for predicting and assessing vegetative characteristics is addressed. The origin, motivation, and derivation of some four dozen vegetation indices are summarized. Empirical, graphical, and analytical techniques are used to investigate the relationships among the various indices. It is concluded that many vegetative indices are very similar, some being simple algebraic transforms of others.
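The conclusion that many vegetation indices are algebraic transforms of one another can be checked directly; for example, NDVI is a monotone transform of the simple ratio (SR). A small numerical check (the band reflectances are made up):

```python
import numpy as np

nir = np.array([0.45, 0.50, 0.62])   # near-infrared reflectance (illustrative)
red = np.array([0.08, 0.10, 0.05])   # red reflectance (illustrative)

sr = nir / red                        # simple ratio
ndvi = (nir - red) / (nir + red)      # normalized difference vegetation index

# NDVI is an algebraic transform of SR: NDVI = (SR - 1) / (SR + 1)
assert np.allclose(ndvi, (sr - 1) / (sr + 1))
```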
Modelling vertical error in LiDAR-derived digital elevation models
NASA Astrophysics Data System (ADS)
Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.
2010-01-01
A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R2 = 0.9856; p < 0.001). In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost) in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
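The gridding step described, IDW with the local support of the five closest neighbours, can be sketched as follows; this is a generic implementation, not the authors' code, and the sample terrain is synthetic:

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, n_neighbors=5, power=2.0):
    """Inverse-distance-weighted interpolation using the k closest points."""
    z_out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.hypot(*(xy_known - q).T)          # distances to all known points
        idx = np.argsort(d)[:n_neighbors]        # five closest neighbours
        if d[idx[0]] == 0.0:                     # query coincides with a sample
            z_out[i] = z_known[idx[0]]
            continue
        w = 1.0 / d[idx] ** power                # inverse-distance weights
        z_out[i] = np.sum(w * z_known[idx]) / np.sum(w)
    return z_out

# Example: elevations sampled at random points in a 200 m x 200 m site,
# interpolated to a single query location.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 200, (500, 2))
elev = 50 + 0.1 * pts[:, 0] + rng.normal(0, 0.1, 500)
print(idw_interpolate(pts, elev, np.array([[100.0, 100.0]])))
```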
Towards a universal model for carbon dioxide uptake by plants
Wang, Han; Prentice, I. Colin; Keenan, Trevor F.; ...
2017-09-04
Gross primary production (GPP) - the uptake of carbon dioxide (CO2) by leaves, and its conversion to sugars by photosynthesis - is the basis for life on land. Earth System Models (ESMs) incorporating the interactions of land ecosystems and climate are used to predict the future of the terrestrial sink for anthropogenic CO2. ESMs require accurate representation of GPP. However, current ESMs disagree on how GPP responds to environmental variations, suggesting a need for a more robust theoretical framework for modelling. Here we focus on a key quantity for GPP, the ratio of leaf internal to external CO2 (χ). χ is tightly regulated and depends on environmental conditions, but is represented empirically and incompletely in today's models. We show that a simple evolutionary optimality hypothesis predicts specific quantitative dependencies of χ on temperature, vapour pressure deficit and elevation; and that these same dependencies emerge from an independent analysis of empirical χ values, derived from a worldwide dataset of >3,500 leaf stable carbon isotope measurements. A single global equation embodying these relationships then unifies the empirical light-use efficiency model with the standard model of C3 photosynthesis, and successfully predicts GPP measured at eddy-covariance flux sites. This success is notable given the equation's simplicity and broad applicability across biomes and plant functional types. Finally, it provides a theoretical underpinning for the analysis of plant functional coordination across species and emergent properties of ecosystems, and a potential basis for the reformulation of the controls of GPP in next-generation ESMs.
Examining spring and autumn phenology in a temperate deciduous urban woodlot
NASA Astrophysics Data System (ADS)
Yu, Rong
This dissertation is an intensive phenological study in a temperate deciduous urban woodlot over six consecutive years (2007-2012). It explores three important topics related to spring and autumn phenology, as well as ground and remote sensing phenology. First, it examines key climatic factors influencing spring and autumn phenology by conducting phenological observations four days a week and recording daily microclimate measurements. Second, it investigates the differences in phenological responses between an urban woodlot and a rural forest by employing comparative basswood phenological data. Finally, it bridges ground visual phenology and remote sensing derived phenological changes by using the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). The primary outcomes are as follows: 1) empirical spatial regression models for two dominant tree species - basswood and white ash - have been built and analyzed to detect spatial patterns and possible causes of phenological change; the results show that local urban settings significantly affect phenology; 2) empirical phenological progression models have been built for each species and the community as a whole to examine how phenology develops in spring and autumn; the results indicate that the critical factor influencing spring phenology is AGDD (accumulated growing degree-days) and for autumn phenology, ACDD (accumulated chilling degree-days) and day length; and 3) satellite derived phenological changes have been compared with ground visual community phenology in both spring and autumn seasons, and the results confirm that both NDVI and EVI depict vegetation dynamics well and therefore have corresponding phenological meanings.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
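The paper's estimator is a smooth parametric one; as a hedged illustration of the same idea (borrowing strength across units to beat the conventional estimator in mean-squared error), here is the classical Robbins nonparametric empirical Bayes estimator for Poisson counts, with a simulated comparison against the raw-count (maximum likelihood) estimator:

```python
import numpy as np
from collections import Counter

def robbins_eb(counts):
    """Robbins' nonparametric empirical Bayes estimate of Poisson rates.

    lambda_hat(x) = (x + 1) * f(x + 1) / f(x), with f the empirical frequency.
    """
    freq = Counter(counts)
    return np.array([(x + 1) * freq.get(x + 1, 0) / freq[x] for x in counts])

# Simulated life-test data: true rates drawn from a gamma prior, one count each.
rng = np.random.default_rng(42)
true_rates = rng.gamma(shape=2.0, scale=1.5, size=5000)
counts = rng.poisson(true_rates)
eb = robbins_eb(counts)
mle = counts.astype(float)   # conventional estimator: the raw count
print("EB  MSE:", np.mean((eb - true_rates) ** 2))
print("MLE MSE:", np.mean((mle - true_rates) ** 2))   # EB typically wins
```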
C.W. Woodall; G.M. Domke; J. Coulston; M.B. Russell; J.A. Smith; C.H. Perry; S.M. Ogle; S. Healey; A. Gray
2015-01-01
The FIA program does not directly measure forest C stocks. Instead, a combination of empirically derived C estimates (e.g., standing live and dead trees) and models (e.g., understory C stocks related to stand age and forest type) are used to estimate forest C stocks. A series of recent refinements in FIA estimation procedures have sought to reduce the uncertainty...
Study of the post-flare loops on 29 July 1973. I - Dynamics of the X-ray loops
NASA Technical Reports Server (NTRS)
Nolte, J. T.; Gerassimenko, M.; Krieger, A. S.; Petrasso, R. D.; Svestka, Z.
1979-01-01
We derive an empirical model of the X-ray emitting post-flare loops observed during the decay phase of the 29 July 1973 flare. We find that the loops are elliptical, with the brightest emitting region at the tops. We determine the height, velocity of growth, and ratio of height to width of the loops at times from 3 to 12 hr after the flare onset.
A DEIM Induced CUR Factorization
2015-09-18
We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a low-rank representation in terms of selected columns (C) and rows (R) of A, offering an alternative to CUR approximations based on leverage scores.
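A sketch of how a DEIM-induced CUR factorization is typically assembled — my reading of the standard DEIM index-selection procedure applied to the singular vectors, not necessarily the report's exact algorithm:

```python
import numpy as np

def deim_indices(U):
    """DEIM point selection: one row index per basis vector, chosen greedily."""
    k = U.shape[1]
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        c = np.linalg.solve(U[p, :j], U[p, j])   # interpolation coefficients
        r = U[:, j] - U[:, :j] @ c               # residual at all rows
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

def deim_cur(A, k):
    """Rank-k CUR approximation with DEIM-selected rows and columns."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    rows = deim_indices(U[:, :k])       # DEIM on the left singular vectors
    cols = deim_indices(Vt[:k, :].T)    # DEIM on the right singular vectors
    C, R = A[:, cols], A[rows, :]
    Umid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, Umid, R

# Test on a matrix with decaying singular values.
A = np.random.default_rng(0).standard_normal((200, 100)) @ np.diag(1.0 / np.arange(1, 101))
C, Umid, R = deim_cur(A, 10)
print(np.linalg.norm(A - C @ Umid @ R) / np.linalg.norm(A))  # relative error
```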
Landis, G.P.; Hofstra, A.H.
1991-01-01
Recent advances in instrumentation now permit quantitative analysis of gas species from individual fluid inclusions. Fluid inclusion gas data can be applied to minerals exploration empirically to establish chemical (gas composition) signatures of the ore fluids, and conceptually through the development of genetic models of ore formation from a framework of integrated geologic, geochemical, and isotopic investigations. Case studies of fluid inclusion gas chemistry from ore deposits representing a spectrum of ore-forming processes and environments are presented to illustrate both the empirical and conceptual approaches. We consider epithermal silver-gold deposits of Creede, Colorado, Carlin-type sediment-hosted disseminated gold deposits of Jerritt Canyon, Nevada, metamorphic silver-base-metal veins of the Coeur d'Alene district, Idaho and Montana, gold-quartz veins in accreted terranes of southern Alaska, and the mid-continent base-metal sulfide deposits of Mississippi Valley Type (MVTs). Variations in gas chemistry determine the redox state of the ore fluids, provide compositional input for gas geothermometers, characterize ore fluid chemistry (e.g., CH4/CO2, H2S/SO2, CO2/H2S, organic-rich fluids, gas-rich and gas-poor fluids), identify magmatic, meteoric, metamorphic, shallow and deep basin fluids in ore systems, and locate upwelling plumes of magmatic-derived volatiles, zones of boiling and volatile separation, interfaces between contrasting fluids, and important zones of fluid mixing. Present techniques are immediately applicable to exploration programs as empirical studies that monitor fluid inclusion gas threshold concentration levels, presence or absence of certain gases, or changes in gas ratios. We suggest that the greater contribution of fluid inclusion gas analysis is in the integrated and comprehensive chemical dimension that gas data impart to genetic models, and in the exploration concepts based on processes and environments of ore formation derived from these genetic models.
Henriques, D. A.; Ladbury, J. E.; Jackson, R. M.
2000-01-01
The prediction of binding energies from the three-dimensional (3D) structure of a protein-ligand complex is an important goal of biophysics and structural biology. Here, we critically assess the use of empirical, solvent-accessible surface-area-based calculations for the prediction of the binding of the Src-SH2 domain with a series of tyrosyl phosphopeptides based on the high-affinity ligand from the hamster middle T antigen (hmT), where the residue in the pY+3 position has been changed. Two other peptides based on the C-terminal regulatory site of the Src protein and the platelet-derived growth factor receptor (PDGFR) are also investigated. Here, we take into account the effects of proton linkage on binding, and test five different surface-area-based models that include different treatments for the contributions to conformational change and protein solvation. These differences relate to the treatment of conformational flexibility in the peptide ligand and the inclusion of proximal ordered solvent molecules in the surface area calculations. This allowed the calculation of a range of thermodynamic state functions (ΔCp, ΔS, ΔH, and ΔG) directly from structure. Comparison with the experimentally derived data shows little agreement for the interaction of the Src-SH2 domain with the range of tyrosyl phosphopeptides. Furthermore, the adoption of the different models to treat conformational change and solvation has a dramatic effect on the calculated thermodynamic functions, making the predicted binding energies highly model dependent. While empirical, solvent-accessible surface-area-based calculations are becoming widely adopted to interpret thermodynamic data, this study highlights potential problems with the application and interpretation of this type of approach. There is undoubtedly some agreement between predicted and experimentally determined thermodynamic parameters; however, the tolerance of this approach is not sufficient to make it ubiquitously applicable. PMID:11106171
Empirical Relationships from Regional Infrasound Signals
NASA Astrophysics Data System (ADS)
Negraru, P. T.; Golden, P.
2011-12-01
Two yearlong sets of infrasound observations were collected at two arrays located within the so-called "Zone of Silence" or "Shadow Zone" from well-controlled explosive sources, to investigate the long-term atmospheric effects on signal propagation. The first array (FNIAR) is located north of Fallon, NV, 154 km from the munitions disposal facility outside of Hawthorne, NV, while the second array (DNIAR) is located near Mercury, NV, approximately 293 km southeast of the detonation site. Based on celerity values, approximately 80% of the observed arrivals at FNIAR are considered stratospheric (celerities below 300 m/s), while 20% of them propagated as tropospheric waveguides with celerities of 330-345 m/s. Although there is considerable scatter in the celerity values, two seasonal effects were observed for both years: (1) a gradual decrease in celerity from summer to winter (July/January period) and (2) an increase in celerity values that starts in April. In the winter months celerity values can be extremely variable, and we have observed signals with celerities as low as 240 m/s. In contrast, at DNIAR we observe much stronger seasonal variations. In winter months we have observed tropospheric, stratospheric and thermospheric arrivals, while in the summer mostly tropospheric and slower thermospheric arrivals dominate. This interpretation is consistent with the current seasonal variation of the stratospheric winds and was confirmed by ray tracing with G2S models. In addition, we discuss how the observed infrasound arrivals can be used to improve ground truth estimation methods (location, origin times and yield). For instance, an empirical wind parameter derived from G2S models suggests that the differences in celerity values observed for both arrays can be explained by changes in the wind conditions. We have also started working on improving location algorithms that take into account empirical celerity models derived from celerity/wind plots.
NASA Technical Reports Server (NTRS)
Bettadpur, Srinivas V.; Eanes, Richard J.
1994-01-01
In analogy to the geographical representation of the zeroth-order radial orbit perturbations due to the static geopotential, similar relationships have been derived for radial orbit perturbations due to the ocean tides. At each location these perturbations are seen to be coherent with the tide height variations. The study of this singularity is of obvious importance to the estimation of ocean tides from satellite altimeter data. We derive analytical expressions for the sensitivity of altimeter-derived ocean tide models to the ocean tide force model induced errors in the orbits of the altimeter satellite. In particular, we focus on characterizing and quantifying the nonresonant tidal orbit perturbations, which cannot be adjusted into the empirical accelerations or radial perturbation adjustments commonly used during orbit determination and in altimeter data processing. As an illustration of the utility of this technique, we study the differences between a TOPEX/POSEIDON-derived ocean tide model and the Cartwright and Ray 1991 Geosat model. This analysis shows that nearly 60% of the variance of this difference for M2 can be explained by the Geosat radial orbit error due to the omission of coefficients from the GEM-T2 background ocean tide model. For O1, K1, S2, and K2 the orbital effects account for approximately 10 to 40% of the variances of these differences. The utility of this technique for assessment of the ocean tide induced errors in the TOPEX/POSEIDON-derived tide models is also discussed.
Intermittency in small-scale turbulence: a velocity gradient approach
NASA Astrophysics Data System (ADS)
Meneveau, Charles; Johnson, Perry
2017-11-01
Intermittency of small-scale motions is a ubiquitous facet of turbulent flows, and predicting this phenomenon based on reduced models derived from first principles remains an important open problem. Here, a multiple-time-scale stochastic model is introduced for the Lagrangian evolution of the full velocity gradient tensor in fluid turbulence at arbitrarily high Reynolds numbers. This low-dimensional model differs fundamentally from prior shell models and other empirically motivated models of intermittency because the nonlinear gradient self-stretching and rotation term (A²), vital to the energy cascade and intermittency development, is represented exactly from the Navier-Stokes equations. With only one adjustable parameter needed to determine the model's effective Reynolds number, numerical solutions of the resulting set of stochastic differential equations show that the model predicts anomalous scaling for moments of the velocity gradient components and negative derivative skewness. It also predicts signature topological features of the velocity gradient tensor, such as vorticity alignment trends with the eigendirections of the strain rate. This research was made possible by a graduate Fellowship from the National Science Foundation and by a Grant from The Gulf of Mexico Research Initiative.
Profiles of equilibrium constants for self-association of aromatic molecules
NASA Astrophysics Data System (ADS)
Beshnova, Daria A.; Lantushenko, Anastasia O.; Davies, David B.; Evstigneev, Maxim P.
2009-04-01
Analysis of the noncovalent, noncooperative self-association of identical aromatic molecules assumes that the equilibrium self-association constants are either independent of the number of molecules (the EK-model) or change progressively with increasing aggregation (the AK-model). The dependence of the self-association constant on the number of molecules in the aggregate (i.e., the profile of the equilibrium constant) was empirically derived in the AK-model but, in order to provide some physical understanding of the profile, it is proposed that the sources for attenuation of the equilibrium constant are the loss of translational and rotational degrees of freedom, the ordering of molecules in the aggregates and the electrostatic contribution (for charged units). Expressions are derived for the profiles of the equilibrium constants for both neutral and charged molecules. Although the EK-model has been widely used in the analysis of experimental data, it is shown in this work that the derived equilibrium constant, KEK, depends on the concentration range used and hence, on the experimental method employed. The relationship has also been demonstrated between the equilibrium constant KEK and the real dimerization constant, KD, which shows that the value of KEK is always lower than KD.
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sideris, Michael G.
2016-04-01
This study investigates the possibilities of local hydrology signal extraction using GRACE data and conventional filtering techniques. The impact of the basin shape has also been studied in order to derive empirical rules for tuning the GRACE filter parameters. GRACE CSR Release 05 monthly solutions were used from April 2002 to August 2015 (161 monthly solutions in total). SLR data were also used to replace the GRACE C2,0 coefficient, and a de-correlation filter with optimal parameters for CSR Release 05 data was applied to attenuate the correlation errors of monthly mass differences. For basins located at higher latitudes, the effect of Glacial Isostatic Adjustment (GIA) was taken into account using the ICE-6G model. The study focuses on three geometric properties, i.e., the area, the convexity and the width in the longitudinal direction, of 100 basins with global distribution. Two experiments have been performed. The first one deals with the determination of the Gaussian smoothing radius that minimizes the deviation from Gaussianity of GRACE equivalent water height (EWH) over the selected basins; the EWH kurtosis was selected as the metric of Gaussianity. The second experiment focuses on the derivation of the Gaussian smoothing radius that minimizes the RMS difference between GRACE data and a hydrology model. The GLDAS 1.0 Noah hydrology model was chosen, which shows good agreement with GRACE data according to previous studies. Early results show that there is an apparent relation between the geometric attributes of the basins examined and the Gaussian radius derived from the two experiments. The kurtosis analysis experiment tends to underestimate the optimal Gaussian radius, which is close to 200-300 km in many cases. Empirical rules for the selection of the Gaussian radius have also been developed for sub-regional-scale basins.
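For context on the smoothing step: the Gaussian filter is conventionally applied to the spherical-harmonic coefficients through degree-dependent weights computed with a Jekeli-type recursion. The sketch below is that standard recursion, stated as an assumption rather than taken from this abstract:

```python
import numpy as np

def gaussian_weights(radius_km, n_max, R_km=6371.0):
    """Degree-dependent Gaussian averaging weights W_n (Jekeli-type recursion).

    Spherical-harmonic coefficients C_nm, S_nm are damped as W_n * C_nm
    before synthesizing the smoothed equivalent-water-height field.
    """
    b = np.log(2.0) / (1.0 - np.cos(radius_km / R_km))
    W = np.empty(n_max + 1)
    W[0] = 1.0
    W[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
    for n in range(1, n_max):
        # The recursion is numerically unstable at high degree; clamp at zero.
        W[n + 1] = -(2 * n + 1) / b * W[n] + W[n - 1]
        if W[n + 1] < 0.0:
            W[n + 1:] = 0.0
            break
    return W

print(gaussian_weights(300.0, 60)[:8])  # weights for a 300 km smoothing radius
```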
Fluid mechanics of Windkessel effect.
Mei, C C; Zhang, J; Jing, H X
2018-01-08
We describe a mechanistic model of the Windkessel phenomenon based on the linear dynamics of fluid-structure interactions. The phenomenon has its origin in an old-fashioned fire-fighting apparatus, in which an air chamber serves to transform the intermittent influx from a pump into a steadier stream out of the hose. A similar mechanism exists in the cardiovascular system, where blood injected intermittently from the heart becomes rather smooth after passing through an elastic aorta. In the existing haemodynamics literature, this mechanism is explained on the basis of an electric circuit analogy with empirical impedances. We present a mechanistic theory based on the principles of fluid-structure interactions. Using a simple one-dimensional model, wave motion in the elastic aorta is coupled to the viscous flow in the rigid peripheral artery. Explicit formulas are derived that exhibit the role of material properties such as the blood density, viscosity, wall elasticity, and the radii and lengths of the vessels. The current two-element model in haemodynamics is shown to be the limit of a short aorta and low injection frequency, and the impedance coefficients are derived theoretically. Numerical results for different aorta lengths and radii are discussed to demonstrate their effects on the time variations of blood pressure, wall shear stress, and discharge. Graphical Abstract: A mechanistic analysis of the Windkessel effect is described which confirms theoretically the well-known feature that intermittent influx becomes continuous outflow. The theory depends only on the density and viscosity of the blood and the elasticity and dimensions of the vessel. Empirical impedance parameters are avoided.
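The two-element limit that the paper recovers is easy to simulate directly; the following sketch uses illustrative parameter values (not the paper's) to show the intermittent-inflow-to-smooth-outflow behavior:

```python
import numpy as np

# Two-element Windkessel: C dP/dt = Q(t) - P/R, with intermittent inflow Q(t)
# modeled as a half-sine ejection for 0.3 s of each 0.8 s beat. All values
# (R, C, amplitudes) are illustrative assumptions.
R, C = 1.0, 1.5
T_beat, T_eject = 0.8, 0.3
dt, n = 1e-4, 80000            # 8 s of simulated time
P = np.empty(n); P[0] = 80.0
for i in range(1, n):
    phase = (i * dt) % T_beat
    Q = 100.0 * np.sin(np.pi * phase / T_eject) if phase < T_eject else 0.0
    P[i] = P[i - 1] + dt * (Q - P[i - 1] / R) / C
# The outflow P/R varies far less than the intermittent inflow Q -- the
# Windkessel effect described in the text.
print(P[n // 2:].min(), P[n // 2:].max())
```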
Micrometeoroid Impacts and Optical Scatter in Space Environment
NASA Technical Reports Server (NTRS)
Heaney, James B.; Wang, Liqin L.; He, Charles C.
2010-01-01
This paper discusses the results of an attempt to use laboratory test data and empirically derived models to quantify the degree of surface damage and associated light scattering that might be expected from hypervelocity particle impacts in the space environment. Published descriptions of the interplanetary dust environment were used as the sources of particle mass, size, and velocity estimates. Micrometeoroid sizes are predicted to be predominantly in the mass range 10^-5 g or less, with most having diameters near 1 micrometer, but some larger than 120 micrometers, with velocities near 20 kilometers per second. In a laboratory test, latex (ρ = 1.1 grams per cubic centimeter) and iron (7.9 grams per cubic centimeter) particles with diameters ranging from 0.75 micrometers to 1.60 micrometers and with velocities ranging from 2.0 kilometers per second to 18.5 kilometers per second were shot at a Be substrate mirror that had a dielectric-coated gold reflecting surface. Scanning electron and atomic force microscopy were used to measure crater dimensions that were then associated with particle impact energies. These data were then fitted to empirical models derived from solar cells and other spacecraft surface components returned from orbit, as well as studies of impact craters on glassy materials returned from the lunar surface, to establish a link between particle energy and impact crater dimension. From these data, an estimate of the total expected damaged area was computed, and this result produced an estimate of expected surface scatter from the modeled environment.
Pan, Yuanjin; Shen, Wen-Bin; Ding, Hao; Hwang, Cheinway; Li, Jin; Zhang, Tengxu
2015-10-14
Modeling nonlinear vertical components of a GPS time series is critical to separating sources contributing to mass displacements. Improved vertical precision in GPS positioning at stations for velocity fields is key to resolving the mechanism of certain geophysical phenomena. In this paper, we use ensemble empirical mode decomposition (EEMD) to analyze the daily GPS time series at 89 continuous GPS stations, spanning from 2002 to 2013. EEMD decomposes a GPS time series into different intrinsic mode functions (IMFs), which are used to identify different kinds of signals and secular terms. Our study suggests that the GPS records contain not only the well-known signals (such as semi-annual and annual signals) but also the seldom-noted quasi-biennial oscillations (QBS). The quasi-biennial signals are explained by modeled loadings of atmosphere, non-tidal and hydrology that deform the surface around the GPS stations. In addition, the loadings derived from GRACE gravity changes are also consistent with the quasi-biennial deformations derived from the GPS observations. By removing the modeled components, the weighted root-mean-square (WRMS) variation of the GPS time series is reduced by 7.1% to 42.3%, and especially, after removing the seasonal and QBO signals, the average improvement percentages for seasonal and QBO signals are 25.6% and 7.5%, respectively, suggesting that it is significant to consider the QBS signals in the GPS records to improve the observed vertical deformations.
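As a hedged illustration of the decomposition step: assuming the open-source PyEMD package (an assumption on my part; the authors do not name their implementation), extracting IMFs from a synthetic daily vertical series with annual, semi-annual, and quasi-biennial terms might look like this:

```python
import numpy as np
from PyEMD import EEMD   # pip install EMD-signal; an assumed tool, not the authors'

# Synthetic daily "vertical displacement" (mm); all amplitudes are illustrative.
t = np.arange(4000) / 365.25                 # time in years, ~11 yr span
rng = np.random.default_rng(0)
z = (3.0 * np.sin(2 * np.pi * t)             # annual signal
     + 1.0 * np.sin(4 * np.pi * t)           # semi-annual signal
     + 1.5 * np.sin(2 * np.pi * t / 2.3)     # quasi-biennial (~2.3 yr) signal
     + rng.normal(0, 0.5, t.size))           # noise

eemd = EEMD(trials=100)        # ensemble of noise-assisted decompositions
imfs = eemd.eemd(z, t)         # intrinsic mode functions, fast to slow
print(imfs.shape)              # (n_imfs, n_samples); slow IMFs hold the QBS
```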
NASA Astrophysics Data System (ADS)
Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.
2014-12-01
In a probabilistic seismic hazard assessment (PSHA) framework, it remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach consists of an empirical FAS (Fourier Amplitude Spectrum) model and a duration model for ground motion, which are combined within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS corresponding to individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion as described in Edwards & Faeh (2013). To that end, an empirical duration model, tuned to optimize the fit between RVT-based and observed response spectral ordinates at each oscillator frequency, is derived. Although the main motivation of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of median predicted response spectra with other regional models indicates that the presented approach can also be used as a stand-alone model. Moreover, a significantly lower aleatory variability (σ<0.5 in log units) than other regional models at shorter periods makes it a potentially viable alternative to classical regression-based GMPEs (fitted to response spectral ordinates) for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012 across Europe, the Middle East and the Mediterranean region.
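The RVT combination of a Fourier amplitude spectrum and a duration into a response spectral ordinate can be sketched as follows, using the textbook Cartwright-Longuet-Higgins peak factor; this is illustrative, not the authors' implementation, and the input FAS shape is invented:

```python
import numpy as np

def rvt_sa(freqs, fas, duration, f0, damping=0.05):
    """Peak SDOF oscillator response from a FAS via random vibration theory."""
    # Squared SDOF (pseudo-acceleration) transfer function times squared FAS.
    h2 = f0**4 / ((f0**2 - freqs**2) ** 2 + (2.0 * damping * f0 * freqs) ** 2)
    y2 = h2 * fas**2
    m0 = 2.0 * np.trapz(y2, freqs)                        # spectral moment 0
    m2 = 2.0 * np.trapz((2 * np.pi * freqs) ** 2 * y2, freqs)
    rms = np.sqrt(m0 / duration)                          # RMS response
    n_z = max(duration * np.sqrt(m2 / m0) / np.pi, 1.33)  # zero crossings
    pf = np.sqrt(2 * np.log(n_z)) + 0.5772 / np.sqrt(2 * np.log(n_z))
    return pf * rms                                       # peak = factor * RMS

freqs = np.linspace(0.1, 50, 2000)
fas = 0.05 * freqs / (1 + (freqs / 5.0) ** 2)   # toy FAS shape, not real data
print(rvt_sa(freqs, fas, duration=10.0, f0=2.0))
```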
NASA Astrophysics Data System (ADS)
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2014-12-01
We developed a scaling-based, simple empirical model for spatio-temporally robust prediction of the diurnal cycles of wetland net ecosystem exchange (NEE) by using an extended stochastic harmonic algorithm (ESHA). A reference-time observation from each diurnal cycle was utilized as the scaling parameter to normalize and collapse hourly observed NEE of different days into a single, dimensionless diurnal curve. The modeling concept was tested by parameterizing the unique diurnal curve and predicting hourly NEE of May to October (summer growing and fall seasons) between 2002-12 for diverse wetland ecosystems, as available in the U.S. AmeriFLUX network. As an example, the Taylor Slough short hydroperiod marsh site in the Florida Everglades had data for four consecutive growing seasons from 2009-12; results showed impressive modeling efficiency (coefficient of determination, R2 = 0.66) and accuracy (ratio of root-mean-square-error to the standard deviation of observations, RSR = 0.58). Model validation was performed with an independent year of NEE data, indicating equally impressive performance (R2 = 0.68, RSR = 0.57). The model included a parsimonious set of estimated parameters, which exhibited spatio-temporal robustness by collapsing onto narrow ranges. Model robustness was further investigated by analytically deriving and quantifying parameter sensitivity coefficients and a first-order uncertainty measure. The relatively robust, empirical NEE model can be applied for simulating continuous (e.g., hourly) NEE time-series from a single reference observation (or a set of limited observations) at different wetland sites of comparable hydro-climatology, biogeochemistry, and ecology. The method can also be used for a robust gap-filling of missing data in observed time-series of periodic ecohydrological variables for wetland or other ecosystems.
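The scaling idea, normalizing each day's hourly NEE by a reference-time observation and fitting one dimensionless diurnal curve, can be sketched as below. The harmonic parameterization and all numbers are assumptions; the actual ESHA formulation is not given in the abstract:

```python
import numpy as np

def harmonic_design(n_harmonics=2):
    """Design matrix of 1, cos(k w h), sin(k w h) over 24 hours."""
    hours, w = np.arange(24), 2 * np.pi / 24
    cols = [np.ones(24)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * hours), np.sin(k * w * hours)]
    return np.column_stack(cols)

def fit_diurnal_curve(nee, ref_hour=12, n_harmonics=2):
    """Collapse daily cycles by reference-hour scaling, then fit harmonics.

    nee: array of shape (n_days, 24) of hourly net ecosystem exchange.
    """
    scaled = nee / nee[:, ref_hour:ref_hour + 1]     # dimensionless cycles
    X = harmonic_design(n_harmonics)
    coef, *_ = np.linalg.lstsq(X, scaled.mean(axis=0), rcond=None)
    return coef

def predict_nee(coef, ref_value, n_harmonics=2):
    """Reconstruct a full hourly NEE cycle from one reference observation."""
    return ref_value * (harmonic_design(n_harmonics) @ coef)

# Synthetic month of hourly NEE (daytime uptake negative, nighttime release).
rng = np.random.default_rng(0)
base = -8 * np.sin(np.pi * (np.arange(24) - 6) / 12).clip(0) + 2
nee = base * rng.uniform(0.8, 1.2, (30, 1)) + rng.normal(0, 0.3, (30, 24))
coef = fit_diurnal_curve(nee)
print(predict_nee(coef, ref_value=nee[0, 12]))   # hourly cycle from one value
```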
NASA Technical Reports Server (NTRS)
Schonberg, William P.; Mohamed, Essam
1997-01-01
This report presents the results of a study whose objective was to develop first-principles-based models of hole size and maximum tip-to-tip crack length for a spacecraft module pressure wall that has been perforated in an orbital debris particle impact. The hole size and crack length models are developed by sequentially characterizing the phenomena comprising the orbital debris impact event, including the initial impact, the creation and motion of a debris cloud within the dual-wall system, the impact of the debris cloud on the pressure wall, the deformation of the pressure wall due to debris cloud impact loading prior to crack formation, pressure wall crack initiation, propagation, and arrest, and finally pressure wall deformation following crack initiation and growth. The model development has been accomplished through the application of elementary shock physics and thermodynamic theory, as well as the principles of mass, momentum, and energy conservation. The predictions of the model developed herein are compared against the predictions of empirically-based equations for hole diameters and maximum tip-to-tip crack length for three International Space Station wall configurations. The ISS wall systems considered are the baseline U.S. Lab Cylinder, the enhanced U.S. Lab Cylinder, and the U.S. Lab Endcone. The empirical predictor equations were derived from experimentally obtained hole diameters and crack length data. The original model predictions did not compare favorably with the experimental data, especially for cases in which pressure wall petalling did not occur. Several modifications were made to the original model to bring its predictions closer in line with the experimental results. Following the adjustment of several empirical constants, the predictions of the modified analytical model were in much closer agreement with the experimental results.
Protein model discrimination using mutational sensitivity derived from deep sequencing.
Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan
2012-02-08
A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout.
Derivation of the Freundlich Adsorption Isotherm from Kinetics
ERIC Educational Resources Information Center
Skopp, Joseph
2009-01-01
The Freundlich adsorption isotherm is a useful description of adsorption phenomena. It is frequently presented as an empirical equation with little theoretical basis. In fact, a variety of derivations exist. Here a new derivation is presented using the concepts of fractal reaction kinetics. This derivation provides an alternative basis for…
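For reference, the isotherm's standard form and the log-linearization used to fit it are shown below, together with one generic kinetic route to a power-law isotherm (a rate-balance sketch under assumed power-law kinetics, not Skopp's fractal-kinetics derivation):

```latex
% Freundlich isotherm and its log-linear fitting form
q = K_F\, C^{1/n}, \qquad \ln q = \ln K_F + \tfrac{1}{n}\,\ln C .
% One generic kinetic route (illustrative): balancing power-law
% adsorption and desorption rates at steady state gives a power law,
k_a\, C^{a} = k_d\, q^{b}
\;\Longrightarrow\;
q = \left(\tfrac{k_a}{k_d}\right)^{1/b} C^{a/b} \equiv K_F\, C^{1/n},
```

where q is the adsorbed amount, C the equilibrium solution concentration, and K_F and n the empirical Freundlich constants.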
Time-series Oxygen-18 Precipitation Isoscapes for Canada and the Northern United States
NASA Astrophysics Data System (ADS)
Delavau, Carly J.; Chun, Kwok P.; Stadnyk, Tricia A.; Birks, S. Jean; Welker, Jeffrey M.
2014-05-01
Understanding of the present and past hydrological cycle from the watershed to the regional scale can be greatly enhanced using water isotopes (δ18O and δ2H), displayed today as isoscapes. The development of water isoscapes has both hydrological and ecological applications, such as ground water recharge and food web ecology, and can provide critical information when observations are not available due to spatial and temporal gaps in sampling and data networks. This study focuses on the creation of δ18O precipitation (δ18Oppt) isoscapes at a monthly temporal frequency across Canada and the northern United States (US) utilizing CNIP (Canadian Network for Isotopes in Precipitation) and USNIP (United States Network for Isotopes in Precipitation) measurements. Multiple linear stepwise regressions of CNIP and USNIP observations alongside NARR (North American Regional Reanalysis) climatological variables, teleconnection indices, and geographic indicators are utilized to create empirical models that predict the δ18O of monthly precipitation across Canada and the northern US. Pooling information from nearby locations within a region can be useful due to the similarity of processes and mechanisms controlling the variability of δ18O. We expect similarity in the controls on isotopic composition to strengthen the correlation between δ18Oppt and predictor variables, resulting in model simulation improvements. For this reason, three different regionalization approaches are used to separate the study domain into 'isotope zones' to explore the effect of regionalization on model performance. This methodology results in 15 empirical models, five within each regionalization. A split sample calibration and validation approach is employed for model development, and parameter selection is based on demonstrated improvement of the Akaike Information Criterion (AIC). Simulation results indicate the empirical models are generally able to capture the overall monthly variability in δ18Oppt. For the three regionalizations, average adjusted-R2 and RMSE (weighted to number of observations within each isotope zone) range from 0.70 - 0.72 and 2.76 - 2.91, respectively, indicating that on average the different spatial groupings perform comparably. Validation weighted R2 and RMSE show a larger spread between models and poorer performance, ranging from 0.45 - 0.59 and 3.28 - 3.39, respectively. Additional evaluation of simulated δ18Oppt at each station and inter/intra-annually is conducted to evaluate model performance over various space and time scales. Stepwise-regression-derived parameterizations indicate the significance of precipitable water content and latitude as predictor variables for all regionalizations. Long-term (1981-2010) annual average δ18Oppt isoscapes are produced for Canada and the northern US, highlighting the differences between regionalization approaches. 95% confidence interval maps are generated to provide an estimate of the uncertainty associated with long-term δ18Oppt simulations. This is the first-ever time-series empirical modelling of δ18Oppt for Canada utilizing CNIP data, as well as the first modelling collaboration between the CNIP and USNIP networks. This study is the initial step towards empirically derived time-series δ18Oppt for use in iso-hydrological modelling studies.
Methods and results from this research are equally applicable to ecology and forensics as the simulated δ18Oppt isoscapes provide the primary oxygen source for many plants and foodwebs at refined temporal and spatial scales across Canada and the northern US.
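A forward stepwise selection by AIC of the kind described can be sketched with statsmodels; the predictor names below are placeholders, not the study's NARR variables, and the data are synthetic:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise_aic(X, y):
    """Greedy forward selection: add the predictor that lowers AIC most."""
    selected, remaining = [], list(X.columns)
    best_aic = sm.OLS(y, np.ones(len(y))).fit().aic   # intercept-only model
    while remaining:
        scores = []
        for cand in remaining:
            design = sm.add_constant(X[selected + [cand]])
            scores.append((sm.OLS(y, design).fit().aic, cand))
        aic, cand = min(scores)
        if aic >= best_aic:          # no candidate improves AIC; stop
            break
        best_aic = aic
        selected.append(cand); remaining.remove(cand)
    return selected, best_aic

# Synthetic monthly data with placeholder predictor names.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 4)),
                 columns=["precip_water", "latitude", "temp", "pna_index"])
y = -8 + 1.5 * X["precip_water"] - 0.4 * X["latitude"] + rng.normal(0, 1, 300)
print(forward_stepwise_aic(X, y))    # recovers the two informative predictors
```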
Knudsen, Erik S; Balaji, Uthra; Mannakee, Brian; Vail, Paris; Eslinger, Cody; Moxom, Christopher; Mansour, John; Witkiewicz, Agnieszka K
2018-03-01
Pancreatic ductal adenocarcinoma (PDAC) is a therapy-recalcitrant disease with the worst survival rate of common solid tumours. Preclinical models that accurately reflect the genetic and biological diversity of PDAC will be important for delineating features of tumour biology and therapeutic vulnerabilities. 27 primary PDAC tumours were employed for genetic analysis and development of tumour models. Tumour tissue was used for derivation of xenografts and cell lines. Exome sequencing was performed on the originating tumour and developed models. RNA sequencing, histological and functional analyses were employed to determine the relationship of the patient-derived models to clinical presentation of PDAC. The cohort employed captured the genetic diversity of PDAC. From most cases, both cell lines and xenograft models were developed. Exome sequencing confirmed preservation of the primary tumour mutations in developed cell lines, which remained stable with extended passaging. The level of genetic conservation in the cell lines was comparable to that observed with patient-derived xenograft (PDX) models. Unlike historically established PDAC cancer cell lines, patient-derived models recapitulated the histological architecture of the primary tumour and exhibited metastatic spread similar to that observed clinically. Detailed genetic analyses of tumours and derived models revealed features of ex vivo evolution and the clonal architecture of PDAC. Functional analysis was used to elucidate therapeutic vulnerabilities of relevance to treatment of PDAC. These data illustrate that with the appropriate methods it is possible to develop cell lines that maintain genetic features of PDAC. Such models serve as important substrates for analysing the significance of genetic variants and create a unique biorepository of annotated cell lines and xenografts that were established simultaneously from the same primary tumour. These models can be used to infer genetic and empirically determined therapeutic sensitivities that would be germane to the patient.
Modeling NAPL dissolution from pendular rings in idealized porous media
NASA Astrophysics Data System (ADS)
Huang, Junqi; Christ, John A.; Goltz, Mark N.; Demond, Avery H.
2015-10-01
The dissolution rate of nonaqueous phase liquid (NAPL) often governs the remediation time frame at subsurface hazardous waste sites. Most formulations for estimating this rate are empirical and assume that the NAPL is the nonwetting fluid. However, field evidence suggests that some waste sites might be organic wet. Thus, formulations that assume the NAPL is nonwetting may be inappropriate for estimating the rates of NAPL dissolution. An exact solution to the Young-Laplace equation, assuming NAPL resides as pendular rings around the contact points of porous media idealized as spherical particles in a hexagonal close packing arrangement, is presented in this work to provide a theoretical prediction for NAPL-water interfacial area. This analytic expression for interfacial area is then coupled with an exact solution to the advection-diffusion equation in a capillary tube assuming Hagen-Poiseuille flow to provide a theoretical means of calculating the mass transfer rate coefficient for dissolution at the NAPL-water interface in an organic-wet system. A comparison of the predictions from this theoretical model with predictions from empirically derived formulations from the literature for water-wet systems showed a consistent range of values for the mass transfer rate coefficient, despite the significant differences in model foundations (water wetting versus NAPL wetting, theoretical versus empirical). This finding implies that, under these system conditions, the important parameter is interfacial area, with a lesser role played by NAPL configuration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sommer, A., E-mail: a.sommer@lte.uni-saarland.de; Farle, O., E-mail: o.farle@lte.uni-saarland.de; Dyczij-Edlinger, R., E-mail: edlinger@lte.uni-saarland.de
2015-10-15
This paper presents a fast numerical method for computing certified far-field patterns of phased antenna arrays over broad frequency bands as well as wide ranges of steering and look angles. The proposed scheme combines finite-element analysis, dual-corrected model-order reduction, and empirical interpolation. To assure the reliability of the results, improved a posteriori error bounds for the radiated power and directive gain are derived. Both the reduced-order model and the error-bounds algorithm feature offline–online decomposition. A real-world example is provided to demonstrate the efficiency and accuracy of the suggested approach.
NASA Astrophysics Data System (ADS)
Feng, Jiandi; Jiang, Weiping; Wang, Zhengtao; Zhao, Zhenzhen; Nie, Linjuan
2017-08-01
Global empirical total electron content (TEC) models based on TEC maps effectively describe the average behavior of the ionosphere. However, the accuracy of these global models for a given region may not be ideal. Owing to the number and distribution of International GNSS Service (IGS) stations, the accuracy of TEC maps varies geographically. A modeling database derived from global TEC maps of varying accuracy is likely one of the main reasons limiting the accuracy of new models. Moreover, many ionospheric anomalies are geographically or geomagnetically dependent, and the accuracy of global models can deteriorate if these anomalies are not fully incorporated into the modeling approach. For regional models built over small areas, these influences on modeling are greatly weakened. Thus, regional TEC models may better reflect the temporal and spatial variations of TEC. In our previous work (Feng et al., 2016), a regional TEC model, TECM-NEC, was proposed for northeast China. However, that model targets only the typical region of Mid-latitude Summer Nighttime Anomaly (MSNA) occurrence and is not applicable to regions without MSNA. Following the technique of the TECM-NEC model, this study proposes another regional empirical TEC model for other mid-latitude regions. Taking the small BeiJing-TianJin-Tangshan (JJT) region (37.5°-42.5°N, 115°-120°E) in China as an example, a regional empirical TEC model (TECM-JJT) is proposed using the TEC grid data from January 1, 1999 to June 30, 2015 provided by the Center for Orbit Determination in Europe (CODE) under quiet geomagnetic conditions. The TECM-JJT model fits the input CODE TEC data with a bias of 0.11 TECU and a root-mean-square error of 3.26 TECU. Results show that the regional model TECM-JJT is consistent with CODE TEC data and GPS-TEC data.
NASA Astrophysics Data System (ADS)
Wang, D.; Cui, Y.
2015-12-01
The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving the absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the model using a proposed semi-analytical model (SAA). Unlike the QAA model, in which ap(531) and ag(531) are derived from empirical retrievals of a(531) and a(551), the SAA model derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises in the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model outperforms the QAA model in absorption retrieval. Using the SAA model to retrieve the absorption coefficients of optically active constituents from the West Florida Shelf reduces the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating the absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model
NASA Technical Reports Server (NTRS)
Varanasi, P.; Cess, R. D.; Bangaru, B. R. P.
1974-01-01
Measurements of the absolute intensity and integrated band absorption have been performed for the ν9 fundamental band of ethane. The intensity is found to be about 34 cm^-2 atm^-1 at STP, which is significantly higher than previous estimates. It is shown that a Gaussian profile provides an empirical representation of the apparent spectral absorption coefficient. Employing this empirical profile, a simple expression is derived for the integrated band absorption, which is in excellent agreement with experimental values. The band model is then employed to investigate the possible role of ethane as a source of thermal infrared opacity within the atmospheres of Jupiter and Saturn, and to interpret qualitatively the observed brightness temperatures for Saturn.
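The Gaussian representation of the apparent absorption coefficient and the resulting integrated band absorption take the generic form below; the symbols and normalization are my assumptions, and the paper's exact expression may differ:

```latex
k(\nu) = \frac{S}{\sqrt{\pi}\,\delta}\,
         \exp\!\left[-\left(\frac{\nu-\nu_0}{\delta}\right)^{2}\right],
\qquad \int_{-\infty}^{\infty} k(\nu)\,\mathrm{d}\nu = S,
\qquad
A = \int \left[1 - e^{-k(\nu)\,u}\right] \mathrm{d}\nu ,
```

where S ≈ 34 cm^-2 atm^-1 is the band intensity, ν0 the band centre, δ an effective width, u the absorber amount, and A the integrated band absorption.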
Wilson, Sylia; Schalet, Benjamin D; Hicks, Brian M; Zucker, Robert A
2013-08-01
The present study used an empirical, "bottom-up" approach to delineate the structure of the California Child Q-Set (CCQ), a comprehensive set of personality descriptors, in a sample of 373 preschool-aged children. This approach yielded two broad trait dimensions, Adaptive Socialization (emotional stability, compliance, intelligence) and Anxious Inhibition (emotional/behavioral introversion). Results demonstrate the value of using empirical derivation to investigate the structure of personality in young children, speak to the importance of early-evident personality traits for adaptive development, and are consistent with a growing body of evidence indicating that personality structure in young children is similar, but not identical to, that in adults, suggesting a model of broad personality dimensions in childhood that evolve into narrower traits in adulthood.
Climate data induced uncertainty in model based estimations of terrestrial primary productivity
NASA Astrophysics Data System (ADS)
Wu, Z.; Ahlström, A.; Smith, B.; Ardö, J.; Eklundh, L.; Fensholt, R.; Lehsten, V.
2016-12-01
Models used to project global vegetation and the carbon cycle differ in their estimates of historical fluxes and pools. These differences arise not only from differences between models but also from differences in the environmental and climatic data that force the models. Here we investigate the role of uncertainties in historical climate data, encapsulated by a set of six historical climate datasets. We focus on terrestrial gross primary productivity (GPP) and analyze the results from a dynamic process-based vegetation model (LPJ-GUESS) forced by six different climate datasets and two empirical datasets of GPP (derived from flux towers and remote sensing). We find that the climate-induced uncertainty, defined as the difference among historical simulations in GPP when forcing the model with the different climate datasets, can be as high as 33 Pg C yr-1 globally (19% of mean GPP). The uncertainty is partitioned into the three main climatic drivers: temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (the data range) and on the sensitivity of the modeled GPP to the driver (the ecosystem sensitivity). The analysis is performed globally and stratified into five land cover classes. We find that the dynamic vegetation model overestimates GPP, compared to empirically based GPP data, over most areas except the tropical region. Both the simulations and the empirical estimates agree that the tropical region is a disproportionate source of uncertainty in GPP estimation. This is mainly caused by uncertainties in shortwave radiation forcing, of which the climate data range contributes slightly more uncertainty than the ecosystem sensitivity to shortwave radiation. We also find that precipitation dominates the climate-induced uncertainty over nearly half of the terrestrial vegetated surface, mainly due to large ecosystem sensitivity to precipitation. Overall, climate data ranges are found to contribute more to the climate-induced uncertainty than ecosystem sensitivity. Our study highlights the need to better constrain tropical climate and demonstrates that uncertainty caused by climatic forcing data must be considered when comparing and evaluating model results and empirical datasets.
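A minimal sketch of the "climate-induced uncertainty" definition used above (spread of simulated GPP across forcing datasets); the array contents are hypothetical:

```python
import numpy as np

# gpp[d, y, x]: simulated GPP for each of six climate forcing datasets on a coarse grid
rng = np.random.default_rng(1)
gpp = rng.uniform(1.0, 3.5, size=(6, 90, 180))   # kg C m^-2 yr^-1 (hypothetical)

# Climate-induced uncertainty per grid cell: the spread among simulations driven
# by the different datasets (here the max-min range, one simple definition)
uncertainty = gpp.max(axis=0) - gpp.min(axis=0)
relative = uncertainty / gpp.mean(axis=0)
print(f"mean relative uncertainty: {relative.mean():.1%}")
```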
Confounder summary scores when comparing the effects of multiple drug exposures.
Cadarette, Suzanne M; Gagne, Joshua J; Solomon, Daniel H; Katz, Jeffrey N; Stürmer, Til
2010-01-01
Little information is available comparing methods to adjust for confounding when considering multiple drug exposures. We compared three analytic strategies to control for confounding based on measured variables: conventional multivariable adjustment, exposure propensity scores (EPS), and disease risk scores (DRS). Each method was applied to a dataset (2000-2006) recently used to examine the comparative effectiveness of four drugs. The relative effectiveness of risedronate, nasal calcitonin, and raloxifene in preventing non-vertebral fracture was compared, in each case, to that of alendronate. EPSs were derived both by using multinomial logistic regression (single model EPS) and by three separate logistic regression models (separate model EPS). DRSs were derived and event rates compared using Cox proportional hazards models. DRSs derived among the entire cohort (full cohort DRS) were compared to DRSs derived only among the referent alendronate users (unexposed cohort DRS). Less than 8% deviation from the base estimate (conventional multivariable) was observed applying single model EPS, separate model EPS, or full cohort DRS. Applying the unexposed cohort DRS when background risk for fracture differed between comparison drug exposure cohorts resulted in -7% to +13% deviation from our base estimate. With sufficient numbers of exposed patients and outcomes, conventional multivariable adjustment, EPS, or full cohort DRS may each be used to adjust for confounding when comparing the effects of multiple drug exposures. However, our data also suggest that the unexposed cohort DRS may be problematic when background risks differ between referent and exposed groups. Further empirical and simulation studies will help to clarify the generalizability of our findings.
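A minimal sketch of the "single model EPS" idea for multiple exposures (one multinomial model predicting the probability of each drug given measured confounders); this is illustrative only, and all variable names and values are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 4))            # measured confounders (hypothetical)
drug = rng.integers(0, 4, size=n)      # 0 = alendronate (referent), 1-3 = comparators

# Single-model EPS: one multinomial logistic regression yields, for every subject,
# a predicted probability of receiving each of the four drugs
eps = LogisticRegression(max_iter=1000).fit(X, drug).predict_proba(X)  # shape (n, 4)
print(eps[:3].round(3))
```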
Genotype imputation in a coalescent model with infinitely-many-sites mutation
Huang, Lucy; Buzbas, Erkan O.; Rosenberg, Noah A.
2012-01-01
Empirical studies have identified population-genetic factors as important determinants of the properties of genotype-imputation accuracy in imputation-based disease association studies. Here, we develop a simple coalescent model of three sequences that we use to explore the theoretical basis for the influence of these factors on genotype-imputation accuracy, under the assumption of infinitely-many-sites mutation. Employing a demographic model in which two populations diverged at a given time in the past, we derive the approximate expectation and variance of imputation accuracy in a study sequence sampled from one of the two populations, choosing between two reference sequences, one sampled from the same population as the study sequence and the other sampled from the other population. We show that under this model, imputation accuracy—as measured by the proportion of polymorphic sites that are imputed correctly in the study sequence—increases in expectation with the mutation rate, the proportion of the markers in a chromosomal region that are genotyped, and the time to divergence between the study and reference populations. Each of these effects derives largely from an increase in information available for determining the reference sequence that is genetically most similar to the sequence targeted for imputation. We analyze as a function of divergence time the expected gain in imputation accuracy in the target using a reference sequence from the same population as the target rather than from the other population. Together with a growing body of empirical investigations of genotype imputation in diverse human populations, our modeling framework lays a foundation for extending imputation techniques to novel populations that have not yet been extensively examined. PMID:23079542
Violent Crime in Post-Civil War Guatemala: Causes and Policy Implications
2015-03-01
on field research and case studies in Honduras, Bolivia, and Argentina. Bailey's Security Trap theory is comprehensive in nature and derived from... research question. The second phase uses empirical data and comparative case studies to validate or challenge selected arguments that potentially... [Figure 2. Sample Research Methodology]
How Does Rumination Impact Cognition? A First Mechanistic Model.
van Vugt, Marieke K; van der Velde, Maarten
2018-01-01
Rumination is a process of uncontrolled, narrowly focused negative thinking that is often self-referential, and that is a hallmark of depression. Despite its importance, little is known about its cognitive mechanisms. Rumination can be thought of as a specific, constrained form of mind-wandering. Here, we introduce a cognitive model of rumination that we developed on the basis of our existing model of mind-wandering. The rumination model implements the hypothesis that rumination is caused by maladaptive habits of thought. These habits of thought are modeled by adjusting the number of memory chunks and their associative structure, which changes the sequence of memories that are retrieved during mind-wandering, such that during rumination the same set of negative memories is retrieved repeatedly. The implementation of habits of thought was guided by empirical data from an experience sampling study in healthy and depressed participants. On the basis of this empirically derived memory structure, our model naturally predicts the declines in cognitive task performance that are typically observed in depressed patients. This study demonstrates how we can use cognitive models to better understand the cognitive mechanisms underlying rumination and depression. Copyright © 2018 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
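To make the "maladaptive habits of thought" hypothesis concrete, here is a toy sketch (deliberately much simpler than the authors' cognitive model): retrieval moves between memory chunks in proportion to associative weights, so strengthening the links into negative chunks traps the retrieval sequence there. All numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
memories = ["neutral-1", "neutral-2", "neutral-3", "negative-1", "negative-2"]

# Associative weights among memory chunks; a "ruminative" structure strengthens
# the links into the negative chunks
W = np.full((5, 5), 1.0)
W[:, 3:] = 6.0           # every chunk cues the negative ones strongly
np.fill_diagonal(W, 0.0)

state, counts = 0, np.zeros(5)
for _ in range(10_000):  # sample a chain of memory retrievals
    p = W[state] / W[state].sum()
    state = rng.choice(5, p=p)
    counts[state] += 1

for name, share in zip(memories, counts / counts.sum()):
    print(f"{name}: {share:.2f}")
```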
NASA Astrophysics Data System (ADS)
Strauch, R. L.; Istanbulluoglu, E.
2017-12-01
We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping the probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations, we relate the susceptibility index to an empirically derived probability of landslide impact. This probability is combined with results from a physically based model to produce an integrated probabilistic map. Slope was key for landslide initiation, while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to a higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA, and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
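A minimal sketch of the frequency ratio step for a single site attribute (in practice the FRs of all seven SAs are summed per grid cell); the grids below are synthetic:

```python
import numpy as np

def frequency_ratio(attr_class, landslide_mask):
    """FR per class: (share of landslides in class) / (share of area in class)."""
    fr = {}
    for c in np.unique(attr_class):
        in_class = attr_class == c
        area_share = in_class.sum() / attr_class.size
        slide_share = (landslide_mask & in_class).sum() / landslide_mask.sum()
        fr[c] = slide_share / area_share
    return fr

# Synthetic grids: one attribute (say, a slope class) and observed landslide cells
rng = np.random.default_rng(4)
slope_class = rng.integers(0, 4, size=(200, 200))
slides = rng.random((200, 200)) < 0.01 * (slope_class + 1)   # steeper -> more slides

fr = frequency_ratio(slope_class, slides)
susceptibility = np.vectorize(fr.get)(slope_class)   # add FRs over all SAs in practice
print({int(c): round(v, 2) for c, v in fr.items()})
```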
Inferring the parameters of a Markov process from snapshots of the steady state
NASA Astrophysics Data System (ADS)
Dettmer, Simon L.; Berg, Johannes
2018-02-01
We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
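A minimal sketch of the propagator-likelihood idea for a toy one-parameter Markov chain (a biased random walk, not one of the models studied in the paper): the empirical steady-state distribution is propagated one step forward by candidate dynamics, and the parameter maximizing the resulting likelihood is selected:

```python
import numpy as np
from scipy.optimize import minimize_scalar

N = 10

def transition_matrix(theta):
    """Biased walk on {0,...,N-1}: step right with prob. theta, left with 1-theta."""
    P = np.zeros((N, N))
    for i in range(N):
        P[i, min(i + 1, N - 1)] += theta
        P[i, max(i - 1, 0)] += 1.0 - theta
    return P

def steady_state(P):
    w, v = np.linalg.eig(P.T)
    p = np.clip(np.real(v[:, np.argmax(np.real(w))]), 0.0, None)
    return p / p.sum()

# Independent snapshots of the steady state under the true parameter
rng = np.random.default_rng(5)
theta_true = 0.7
samples = rng.choice(N, size=5000, p=steady_state(transition_matrix(theta_true)))
p_emp = np.bincount(samples, minlength=N) / samples.size

def neg_propagator_likelihood(theta):
    # Cross-entropy between the empirical distribution and the empirical
    # distribution propagated one step forward by the candidate dynamics
    q = p_emp @ transition_matrix(theta)
    return -np.sum(p_emp * np.log(q + 1e-300))

res = minimize_scalar(neg_propagator_likelihood, bounds=(0.01, 0.99), method="bounded")
print(f"true theta = {theta_true}, inferred theta = {res.x:.3f}")
```

Because the steady state satisfies pP = p, the cross-entropy is minimized (relative entropy zero, up to sampling noise) at the true parameter.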
More sound of church bells: Authors' correction
NASA Astrophysics Data System (ADS)
Vogt, Patrik; Kasper, Lutz; Burde, Jan-Philipp
2016-01-01
In the recently published article "The Sound of Church Bells: Tracking Down the Secret of a Traditional Arts and Crafts Trade," the bell frequencies have been erroneously oversimplified. The problem affects Eqs. (2) and (3), which were derived from the elementary "coffee mug model" and in which we used the speed of sound in air. This does not make sense from a physical point of view, since air only acts as the sound carrier, not as the sound source, in the case of bells. Due to the excellent fit of the theoretical model to the empirical data, we unfortunately failed to notice this error before publication. All other equations, e.g., the introduction of the correction factor in Eq. (4) and the estimation of the mass in Eqs. (5) and (6), are unaffected by this error, since they represent empirical models. It remains unfortunate, however, to introduce the speed of sound in air as a constant in Eqs. (4) and (6). Instead, we suggest the following simple rule of thumb for relating the radius of a church bell R to its humming frequency fhum:
Sundell, Knut; Ferrer-Wreder, Laura; Fraser, Mark W
2014-06-01
The spread of evidence-based practice throughout the world has resulted in the wide adoption of empirically supported interventions (ESIs) and a growing number of controlled trials of imported and culturally adapted ESIs. This article is informed by outcome research on family-based interventions, including programs listed in the American Blueprints Model and Promising Programs. Evidence from these controlled trials is mixed and, because it comprises both successful and unsuccessful replications of ESIs, it provides clues for the translation of promising programs in the future. At least four explanations appear plausible for the mixed results in replication trials. One has to do with methodological differences across trials. A second deals with ambiguities in the cultural adaptation process. A third is that ESIs in failed replications have not been adequately implemented. A fourth source of variation derives from unanticipated contextual influences that may alter the effects of ESIs when they are transported to other cultures and countries. This article describes a model that allows for the differential examination of adaptations of interventions in new cultural contexts. © The Author(s) 2012.
Discharge in Long Air Gaps; Modelling and applications
NASA Astrophysics Data System (ADS)
Beroual, A.; Fofana, I.
2016-06-01
Discharge in Long Air Gaps: Modelling and applications presents self-consistent predictive dynamic models of positive and negative discharges in long air gaps. Equivalent models are also derived to predict lightning parameters, based on the similarities between long air gap discharges and lightning flashes. Macroscopic air gap discharge parameters are calculated by solving electrical, empirical, and physical equations, and comparisons between computed and experimental results for various test configurations are presented and discussed. This book is intended to provide a fresh perspective by contributing an innovative approach to this research domain, and universities with programs in high-voltage engineering will find this volume a working example of how to introduce the basics of electric discharge phenomena.
Neural dynamics of motion processing and speed discrimination.
Chey, J; Grossberg, S; Mingolla, E
1998-09-01
A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning, which realizes a size-speed correlation, can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
Network Reconstruction From High-Dimensional Ordinary Differential Equations.
Chen, Shizhe; Shojaie, Ali; Witten, Daniela M
2017-01-01
We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.
Simulating sunflower canopy temperatures to infer root-zone soil water potential
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Idso, S. B.
1983-01-01
A soil-plant-atmosphere model for sunflower (Helianthus annuus L.), together with clear sky weather data for several days, is used to study the relationship between canopy temperature and root-zone soil water potential. Considering the empirical dependence of stomatal resistance on insolation, air temperature and leaf water potential, a continuity equation for water flux in the soil-plant-atmosphere system is solved for the leaf water potential. The transpirational flux is calculated using Monteith's combination equation, while the canopy temperature is calculated from the energy balance equation. The simulation shows that, at high soil water potentials, canopy temperature is determined primarily by air and dew point temperatures. These results agree with an empirically derived linear regression equation relating canopy-air temperature differential to air vapor pressure deficit. The model predictions of leaf water potential are also in agreement with observations, indicating that measurements of canopy temperature together with a knowledge of air and dew point temperatures can provide a reliable estimate of the root-zone soil water potential.
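A minimal sketch of the empirical linear baseline mentioned above (canopy-air temperature differential regressed on air vapor pressure deficit, in the style of Idso's non-water-stressed baseline); all data and coefficients below are hypothetical:

```python
import numpy as np

# Hypothetical clear-sky, well-watered observations: vapor pressure deficit (kPa)
# and canopy-air temperature differential (deg C)
rng = np.random.default_rng(6)
vpd = rng.uniform(0.5, 4.0, size=40)
dT = 2.5 - 1.9 * vpd + rng.normal(0.0, 0.3, size=vpd.size)

# Empirical linear baseline: Tc - Ta = a + b * VPD
b, a = np.polyfit(vpd, dT, 1)
print(f"Tc - Ta = {a:.2f} + ({b:.2f}) * VPD")
```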
NASA Astrophysics Data System (ADS)
Loomis, John
2003-04-01
Past recreation studies have noted that on-site or visitor intercept surveys are subject to over-sampling of avid users (i.e., endogenous stratification) and have offered econometric solutions to correct for this. However, past papers do not estimate the empirical magnitude of the bias in benefit estimates with a real data set, nor do they compare the corrected estimates to benefit estimates derived from a population sample. This paper empirically examines the magnitude of the recreation benefits-per-trip bias by comparing estimates from an on-site river visitor intercept survey to a household survey. The difference in average benefits is quite large, with the on-site visitor survey yielding $24 per day trip, while the household survey yields $9.67 per day trip. A simple econometric correction for endogenous stratification in our count data model lowers the benefit estimate to $9.60 per day trip, a mean value nearly identical and not statistically different from the household survey estimate.
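One well-known correction of this kind (Englin and Shonkwiler, 1995) exploits the fact that, for a Poisson trip-demand model, on-site truncation and endogenous stratification are both handled by running the Poisson regression on (trips - 1). A minimal sketch with hypothetical data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical on-site survey: observed trips per visitor and their travel cost
rng = np.random.default_rng(11)
n = 800
travel_cost = rng.uniform(5.0, 80.0, size=n)
lam = np.exp(1.5 - 0.02 * travel_cost)
trips = 1 + rng.poisson(lam)   # on-site sample: every respondent made >= 1 trip

# Englin-Shonkwiler correction: Poisson ML on (trips - 1) recovers the
# population demand parameters despite the endogenously stratified sample
X = sm.add_constant(travel_cost)
fit = sm.GLM(trips - 1, X, family=sm.families.Poisson()).fit()
beta_cost = fit.params[1]
print(f"consumer surplus per trip ≈ {-1.0 / beta_cost:.2f} (currency units)")
```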
Ontological addiction theory: Attachment to me, mine, and I.
Van Gordon, William; Shonin, Edo; Diouri, Sofiane; Garcia-Campayo, Javier; Kotera, Yasuhiro; Griffiths, Mark D
2018-06-07
Background: Ontological addiction theory (OAT) is a novel metaphysical model of psychopathology and posits that human beings are prone to forming implausible beliefs concerning the way they think they exist, and that these beliefs can become addictive, leading to functional impairments and mental illness. The theoretical underpinnings of OAT derive from the Buddhist philosophical perspective that all phenomena, including the self, do not manifest inherently or independently. Aims and methods: This paper outlines the theoretical foundations of OAT along with indicative supportive empirical evidence from studies evaluating meditation awareness training, as well as studies investigating non-attachment, emptiness, compassion, and loving-kindness. Results: OAT provides a novel perspective on addiction, the factors that underlie mental illness, and how beliefs concerning selfhood are shaped and reified. Conclusion: In addition to continuing to test the underlying assumptions of OAT, future empirical research needs to determine how ontological addiction fits with extant theories of self, reality, and suffering, as well as with more established models of addiction.
Introduction: demography and cultural macroevolution.
Steele, James; Shennan, Stephen
2009-04-01
The papers in this special issue of Human Biology, which derive from a conference sponsored by the Arts and Humanities Research Council (AHRC) Center for the Evolution of Cultural Diversity, lay some of the foundations for an empirical macroevolutionary analysis of cultural dynamics. Our premise here is that cultural dynamics, including the stability of traditions and the rate of origination of new variants, are influenced by independently occurring demographic processes (population size, structure, and distribution as these vary over time as a result of changes in rates of fertility, mortality, and migration). The contributors focus on three sets of problems relevant to empirical studies of cultural macroevolution: large-scale reconstruction of past population dynamics from archaeological and genetic data; juxtaposition of models and evidence of cultural dynamics using large-scale archaeological and historical data sets; and juxtaposition of models and evidence of cultural dynamics from large-scale linguistic data sets. In this introduction we outline some of the theoretical and methodological issues and briefly summarize the individual contributions.
Biomolecular Force Field Parameterization via Atoms-in-Molecule Electron Density Partitioning.
Cole, Daniel J; Vilseck, Jonah Z; Tirado-Rives, Julian; Payne, Mike C; Jorgensen, William L
2016-05-10
Molecular mechanics force fields, which are commonly used in biomolecular modeling and computer-aided drug design, typically treat nonbonded interactions using a limited library of empirical parameters that are developed for small molecules. This approach does not account for polarization in larger molecules or proteins, and the parametrization process is labor-intensive. Using linear-scaling density functional theory and atoms-in-molecule electron density partitioning, environment-specific charges and Lennard-Jones parameters are derived directly from quantum mechanical calculations for use in biomolecular modeling of organic and biomolecular systems. The proposed methods significantly reduce the number of empirical parameters needed to construct molecular mechanics force fields, naturally include polarization effects in charge and Lennard-Jones parameters, and scale well to systems comprised of thousands of atoms, including proteins. The feasibility and benefits of this approach are demonstrated by computing free energies of hydration, properties of pure liquids, and the relative binding free energies of indole and benzofuran to the L99A mutant of T4 lysozyme.
Klotz, Dino; Grave, Daniel A; Dotan, Hen; Rothschild, Avner
2018-03-15
Photoelectrochemical impedance spectroscopy (PEIS) is a useful tool for the characterization of photoelectrodes for solar water splitting. However, the analysis of PEIS spectra often involves a priori assumptions that might bias the results. This work puts forward an empirical method that analyzes the distribution of relaxation times (DRT), obtained directly from the measured PEIS spectra of a model hematite photoanode. By following how the DRT evolves as a function of control parameters such as the applied potential and the composition of the electrolyte solution, we obtain unbiased insights into the underlying mechanisms that shape the photocurrent. In a subsequent step, we fit the data to a process-oriented equivalent circuit model (ECM) whose makeup is derived from the DRT analysis in the first step. This yields consistent quantitative trends of the dominant polarization processes observed. Our observations reveal a common step for the photo-oxidation reactions of water and H2O2 in alkaline solution.
Revision of empirical electric field modeling in the inner magnetosphere using Cluster data
NASA Astrophysics Data System (ADS)
Matsui, H.; Torbert, R. B.; Spence, H. E.; Khotyaintsev, Yu. V.; Lindqvist, P.-A.
2013-07-01
Using Cluster data from the Electron Drift (EDI) and the Electric Field and Wave (EFW) instruments, we revise our empirically-based, inner-magnetospheric electric field (UNH-IMEF) model at 2
Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam
2016-01-01
Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolton, P.
The purpose of this task was to support ESH-3 in providing Airborne Release Fraction and Respirable Fraction training to safety analysts at LANL who perform accident analysis, hazard analysis, safety analysis, and/or risk assessments at nuclear facilities. The task included preparation of materials for, and the conduct of, two 3-day training courses covering the following topics: the safety analysis process; the calculation model; aerosol physics concepts for safety analysis; and an overview of empirically derived airborne release fractions and respirable fractions.
1980-01-01
Keywords: Carbon Monoxide (CO); Computer Program; Carboxyhemoglobin. ...several researchers, which predicts the instantaneous amount of carboxyhemoglobin (COHb) in the blood of a person based upon the amount of carbon monoxide... developed from an empirical equation (derived from reference 1 and detailed in reference 3) which predicts the amount of carboxyhemoglobin (COHb) in
NASA Astrophysics Data System (ADS)
Oh, Sangjun; Kim, Keeman
2006-02-01
We study the transition temperature Tc, the thermodynamic critical field Bc, and the upper critical field Bc2 of Nb3Sn with the Eliashberg theory of strongly coupled superconductors, using the Einstein spectrum α²(ω)F(ω) = λ⟨ω²⟩^(1/2) δ(ω − ⟨ω²⟩^(1/2)). The strain dependences of λ(ε) and ⟨ω²⟩^(1/2)(V) are introduced from the empirical strain dependence of Tc(V) for three model cases. It is found that the empirical relation Tc(V)/Tc(0) = [Bc2(4.2 K, V)/Bc2(4.2 K, 0)]^(1/w) (w ≈ 3) is mainly due to low-energy phonon mode softening. We derive analytic expressions for the strain and temperature dependences of Bc(T,V) and Bc2(T,V) and the Ginzburg-Landau parameter κ(T,V) from the numerical calculation results. The Summers refinement of the temperature dependence of κ(T) shows deviations from our calculation results. We propose a unified scaling law of flux pinning in Nb3Sn strands in the form of the Kramer model with the analytic expressions of Bc2(T,V) and κ(T,V) derived in this work. It is shown that the proposed scaling law gives a reasonable fit to the reported data with only eight fitting parameters.
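For orientation, a sketch of the generic Kramer pinning form on which such scaling laws are built (the strain and temperature dependences enter through Bc2(T,V) and κ(T,V); the constants below are arbitrary, not the paper's fitted values):

```python
import numpy as np

def kramer_pinning_force(B, Bc2, C=1.0):
    """Kramer form F_p = C * b^0.5 * (1 - b)^2 with reduced field b = B/Bc2."""
    b = np.clip(B / Bc2, 0.0, 1.0)
    return C * np.sqrt(b) * (1.0 - b) ** 2

# Critical current density J_c = F_p / B for a hypothetical strand at 4.2 K
B = np.linspace(1.0, 22.0, 50)
Fp = kramer_pinning_force(B, Bc2=24.0, C=40.0)   # arbitrary units
Jc = Fp / B
print(f"peak pinning force at B ≈ {B[np.argmax(Fp)]:.1f} T")
```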
Empirical Data Fusion for Convective Weather Hazard Nowcasting
NASA Astrophysics Data System (ADS)
Williams, J.; Ahijevych, D.; Steiner, M.; Dettling, S.
2009-09-01
This paper describes a statistical analysis approach to developing an automated convective weather hazard nowcast system suitable for use by aviation users in strategic route planning and air traffic management. The analysis makes use of numerical weather prediction model fields and radar, satellite, and lightning observations and derived features along with observed thunderstorm evolution data, which are aligned using radar-derived motion vectors. Using a dataset collected during the summers of 2007 and 2008 over the eastern U.S., the predictive contributions of the various potential predictor fields are analyzed for various spatial scales, lead-times and scenarios using a technique called random forests (RFs). A minimal, skillful set of predictors is selected for each scenario requiring distinct forecast logic, and RFs are used to construct an empirical probabilistic model for each. The resulting data fusion system, which ran in real-time at the National Center for Atmospheric Research during the summer of 2009, produces probabilistic and deterministic nowcasts of the convective weather hazard and assessments of the prediction uncertainty. The nowcasts' performance and results for several case studies are presented to demonstrate the value of this approach. This research has been funded by the U.S. Federal Aviation Administration to support the development of the Consolidated Storm Prediction for Aviation (CoSPA) system, which is intended to provide convective hazard nowcasts and forecasts for the U.S. Next Generation Air Transportation System (NextGen).
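A minimal sketch of the random-forest data-fusion step described above (generic, not the CoSPA code); features, labels, and values are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: rows are motion-aligned grid points, columns are
# candidate predictors (NWP fields, radar/satellite/lightning-derived features);
# the label marks whether a convective hazard was observed at the valid time
rng = np.random.default_rng(7)
X = rng.normal(size=(20_000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=20_000)) > 1.5

rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=50, n_jobs=-1)
rf.fit(X, y)

# Probabilistic nowcast for new points, plus a ranking of predictor importance
p_hazard = rf.predict_proba(rng.normal(size=(5, 12)))[:, 1]
print(p_hazard.round(2), rf.feature_importances_.argsort()[::-1][:3])
```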
Interference in astronomical speckle patterns
NASA Technical Reports Server (NTRS)
Breckinridge, J. B.
1976-01-01
Astronomical speckle patterns are examined in an atmospheric-optics context in order to determine what kind of image quality is to be expected from several different imaging techniques. The model used to describe the instantaneous complex field distribution across the pupil of a large telescope regards the pupil as a deep phase grating with a periodicity given by the size of the cell of uniform phase or the refractive index structure function. This model is used along with an empirical formula derived purely from the physical appearance of the speckle patterns to discuss the orders of interference in astronomical speckle patterns.
A first attempt at few coils and low-coverage resistive wall mode stabilization of EXTRAP T2R
NASA Astrophysics Data System (ADS)
Olofsson, K. Erik J.; Brunsell, Per R.; Drake, James R.; Frassinetti, Lorenzo
2012-09-01
The reversed-field pinch features resistive-shell-type instabilities at any (vanishing and finite) plasma pressure. An attempt to stabilize the full spectrum of these modes using both (i) incomplete coverage and (ii) few coils is presented. Two empirically derived model-based control algorithms are compared with a baseline guaranteed-suboptimal intelligent-shell-type (IS) feedback. Experimental stabilization could not be achieved for the coil array subset sizes considered in this first study, but the model-based controllers appear to significantly outperform the decentralized IS method.
NASA Technical Reports Server (NTRS)
Hedin, A. E.
1979-01-01
The neutral temperature, neutral densities for N2, O2, O, Ar, He and H, mean molecular weight, and total mass density as predicted by the Mass Spectrometer and Incoherent Scatter empirical thermosphere model are presented in tabular form. The predictions are based on selected altitudes, latitudes, local times, days and other geophysical conditions. The model is dependent on a least squares fit to density data from mass spectrometers on five satellites and temperature data from four incoherent scatter stations, providing coverage for most of solar sunspot cycle 20.
A note on the microeconomics of migration.
Stahl, K
1983-11-01
"The purpose of this note is to demonstrate in a simple model that an individual's migration from a small town to a large city may be rationalized purely by a consumption motive, rather than the motive of obtaining a higher income. More specifically, it is shown that in a large city an individual may derive a higher utility from spending a given amount of income than in a small town." A formal model is first developed that includes the principal forces at work and is then illustrated using a graphic example. The theoretical and empirical issues raised are considered in the concluding section. excerpt
Determining the non-inferiority margin for patient reported outcomes.
Gerlinger, Christoph; Schmelter, Thomas
2011-01-01
One of the cornerstones of any non-inferiority trial is the choice of the non-inferiority margin delta. This threshold of clinical relevance is very difficult to determine, and in practice, delta is often "negotiated" between the sponsor of the trial and the regulatory agencies. However, for patient-reported, or more precisely patient-observed, outcomes, the patients' minimal clinically important difference (MCID) can be determined empirically by relating the treatment effect, for example, a change on a 100-mm visual analogue scale, to the patient's satisfaction with the change. This MCID can then be used to define delta. We used an anchor-based approach with non-parametric discriminant analysis and ROC analysis, and a distribution-based approach with Norman's half-standard-deviation rule, to determine delta in three examples: endometriosis-related pelvic pain measured on a 100-mm visual analogue scale, facial acne measured by lesion counts, and hot flush counts. For each of these examples, all three methods yielded quite similar results. In two of the cases, the empirically derived MCIDs were smaller than or similar to deltas previously used in non-inferiority trials, and in the third case, the empirically derived MCID was used to derive a responder definition that was accepted by the FDA. In conclusion, for patient-observed endpoints, delta can be derived empirically. In our view, this is a better approach than asking the clinician for a "nice round number" for delta, such as 10, 50%, π, e, or i. Copyright © 2011 John Wiley & Sons, Ltd.
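A minimal sketch of the anchor-based ROC step (find the change-score cutpoint that best discriminates satisfied from unsatisfied patients, e.g. by Youden's J, and take it as the empirical MCID); the data are simulated, not the study's:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical anchor-based data: change on a 100-mm VAS and a binary anchor
# (1 = patient reports being satisfied with the change)
rng = np.random.default_rng(8)
change = rng.normal(20.0, 15.0, size=500)
satisfied = (change + rng.normal(0.0, 10.0, size=500)) > 15.0

fpr, tpr, thresholds = roc_curve(satisfied, change)
mcid = thresholds[np.argmax(tpr - fpr)]   # cutpoint maximizing Youden's J
print(f"anchor-based MCID estimate: {mcid:.1f} mm")
```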
Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects
NASA Astrophysics Data System (ADS)
Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca
2018-02-01
Metamodeling for the nucleonic equation of state (EOS), inspired from a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, and thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the more influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling for the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
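For orientation, the general form of such a density expansion (notation assumed here, not quoted from the paper): with x = (n − n_sat)/(3 n_sat) and isospin asymmetry δ = (n_n − n_p)/n,

```latex
\begin{align}
e(n,\delta) &\simeq e_{\mathrm{IS}}(x) + \delta^{2}\, e_{\mathrm{IV}}(x),\\
e_{\mathrm{IS}}(x) &= E_{\mathrm{sat}} + \tfrac{1}{2}K_{\mathrm{sat}}\,x^{2}
  + \tfrac{1}{6}Q_{\mathrm{sat}}\,x^{3} + \dots,\\
e_{\mathrm{IV}}(x) &= E_{\mathrm{sym}} + L_{\mathrm{sym}}\,x
  + \tfrac{1}{2}K_{\mathrm{sym}}\,x^{2} + \dots
\end{align}
```

so that each empirical parameter (saturation energy, incompressibility, symmetry energy and its slope, etc.) multiplies one term of the expansion.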
Empirical source strength correlations for rans-based acoustic analogy methods
NASA Astrophysics Data System (ADS)
Kube-McDowell, Matthew Tyndall
JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources: quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions for a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate that there are underlying flaws in JeNo's ability to predict the behavior of a hot jet's acoustic signature at certain rear observer angles, and that this correlation correction is not able to correct these flaws.
Reevaluating Old Stellar Populations
NASA Astrophysics Data System (ADS)
Stanway, E. R.; Eldridge, J. J.
2018-05-01
Determining the properties of old stellar populations (those with age >1 Gyr) has long involved the comparison of their integrated light, either in the form of photometry or spectroscopic indexes, with empirical or synthetic templates. Here we reevaluate the properties of old stellar populations using a new set of stellar population synthesis models, designed to incorporate the effects of binary stellar evolution pathways as a function of stellar mass and age. We find that single-aged stellar population models incorporating binary stars, as well as new stellar evolution and atmosphere models, can reproduce the colours and spectral indices observed in both globular clusters and quiescent galaxies. The best fitting model populations are often younger than those derived from older spectral synthesis models, and may also lie at slightly higher metallicities.
Comparison of ACCENT 2000 Shuttle Plume Data with SIMPLE Model Predictions
NASA Astrophysics Data System (ADS)
Swaminathan, P. K.; Taylor, J. C.; Ross, M. N.; Zittel, P. F.; Lloyd, S. A.
2001-12-01
The JHU/APL Stratospheric IMpact of PLume Effluents (SIMPLE) model was employed to analyze the trace-species in situ composition data collected during the ACCENT 2000 intercepts of the space shuttle Space Transportation System (STS) rocket plume, as a function of time and radial location within the cold plume. The SIMPLE model is initialized using predictions for species depositions calculated with an afterburning model based on standard TDK/SPP nozzle and SPF plume flowfield codes with an expanded chemical kinetic scheme. The time-dependent ambient stratospheric chemistry is fully coupled to the plume species evolution, whose transport is based on empirically derived diffusion. Model/data comparisons are encouraging, capturing the observed local ozone recovery times as well as the overall morphology of the chlorine chemistry.
Long-term Trends and Variability of Eddy Activities in the South China Sea
NASA Astrophysics Data System (ADS)
Zhang, M.; von Storch, H.
2017-12-01
For constructing empirical downscaling models and projecting possible future states of eddy activity in the South China Sea (SCS), long-term statistical characteristics of SCS eddies are needed. We use a daily global eddy-resolving model product named STORM covering the period 1950-2010. This simulation employed the MPI-OM model with a mean horizontal resolution of 10 km and was driven by the NCEP reanalysis-1 data set. An eddy detection and tracking algorithm operating on the gridded sea surface height anomaly (SSHA) fields was developed, and a set of parameters for its criteria in the SCS was determined through sensitivity tests. The method detected more than 6000 eddy tracks in the South China Sea. For all of them, eddy diameters, track lengths, eddy intensity, eddy lifetime and eddy frequency were determined, and the long-term trends and variability of those properties were derived. Most of the eddies propagate westward. Nearly 100 eddies travel farther than 1000 km, and over 800 eddies have a lifespan of more than 2 months. Furthermore, for building the statistical empirical model, the relationship between SCS eddy statistics and large-scale atmospheric and oceanic phenomena has been investigated.
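A minimal sketch of the detection step (not the study's algorithm, whose criteria and thresholds were tuned for the SCS): candidate anticyclonic eddies are labeled as connected regions of high SSHA, and the cyclonic side is analogous with a negative threshold. All thresholds and the synthetic field are hypothetical:

```python
import numpy as np
from scipy import ndimage

def detect_eddies(ssha, amplitude=0.08, min_cells=9):
    """Label candidate anticyclonic eddies as connected SSHA highs (toy criteria)."""
    labels, n = ndimage.label(ssha > amplitude)
    eddies = []
    for i in range(1, n + 1):
        cells = labels == i
        if cells.sum() >= min_cells:   # reject features below the minimum size
            cy, cx = ndimage.center_of_mass(cells)
            eddies.append({"center": (cy, cx), "area_cells": int(cells.sum()),
                           "amplitude": float(ssha[cells].max())})
    return eddies

# Synthetic SSHA snapshot (m): smoothed noise rescaled to a plausible variance
rng = np.random.default_rng(9)
ssha = ndimage.gaussian_filter(rng.normal(0.0, 1.0, size=(150, 150)), sigma=5)
ssha *= 0.1 / ssha.std()
print(f"detected {len(detect_eddies(ssha))} candidate eddies")
```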
Low temperature heat capacities and thermodynamic functions described by Debye-Einstein integrals.
Gamsjäger, Ernst; Wiessner, Manfred
2018-01-01
Thermodynamic data of various crystalline solids are assessed from low-temperature heat capacity measurements, i.e., from almost absolute zero to 300 K, by means of semi-empirical models. Previous studies frequently present fit functions with a large number of coefficients, resulting in almost perfect agreement with experimental data. It is, however, pointed out in this work that special care is required to avoid overfitting. Apart from anomalies like phase transformations, it is likely that data from calorimetric measurements can be fitted by a relatively simple Debye-Einstein integral with sufficient precision. Thereby, reliable values for the heat capacities, standard enthalpies, and standard entropies at T = 298.15 K are obtained. Standard thermodynamic functions of various compounds strongly differing in the number of atoms in the formula unit can be derived from this fitting procedure and are compared to the results of previous fitting procedures. The residuals are of course larger when the Debye-Einstein integral is applied instead of a high number of fit coefficients or connected splines, but the semi-empirical fit coefficients retain their physical meaning. It is suggested to use the Debye-Einstein integral fit as a standard method to describe heat capacities in the range between 0 and 300 K, so that the derived thermodynamic functions are obtained on the same theory-related semi-empirical basis. Additional fitting is recommended when a precise description of data at ultra-low temperatures (0-20 K) is requested.
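A minimal sketch of a Debye-Einstein heat capacity model and the entropy integration it enables (the weights and characteristic temperatures below are hypothetical fit parameters, not values from the paper):

```python
import numpy as np
from scipy.integrate import quad

R = 8.314  # J mol^-1 K^-1

def c_debye(T, theta_d):
    """Debye heat capacity contribution (overflow-safe integrand)."""
    x = theta_d / T
    f = lambda t: t**4 * np.exp(-t) / (1.0 - np.exp(-t))**2
    integral, _ = quad(f, 0.0, min(x, 60.0))   # tail beyond t = 60 is negligible
    return 9.0 * R * integral / x**3

def c_einstein(T, theta_e):
    e = np.exp(-theta_e / T)
    return 3.0 * R * (theta_e / T)**2 * e / (1.0 - e)**2

def cp_model(T, m_d, theta_d, m_e, theta_e):
    """Semi-empirical Debye-Einstein sum; the weights are fit parameters."""
    return m_d * c_debye(T, theta_d) + m_e * c_einstein(T, theta_e)

# Standard entropy at 298.15 K from Cp/T integration (parameters hypothetical)
S298, _ = quad(lambda T: cp_model(T, 1.0, 250.0, 2.0, 520.0) / T, 1e-6, 298.15)
print(f"S(298.15 K) ≈ {S298:.1f} J mol^-1 K^-1")
```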
Ivanova, Masha Y; Achenbach, Thomas; Leite, Manuela; Almeida, Vera; Caldas, Carlos; Turner, Lori; Dumas, Julie A
2018-05-01
As the world population ages, mental health professionals increasingly need empirically supported assessment instruments for older adult psychopathology. This study tested the degree to which syndromes derived from self-ratings of psychopathology by elders in the US would fit self-ratings by elders in Portugal. The Older Adult Self-Report (OASR) was completed by 352 60- to 102-year-olds in Portuguese community and residential settings. Confirmatory factor analyses tested the fit of the 7-syndrome OASR model to self-ratings by Portuguese elders. The primary fit index (Root Mean Square Error of Approximation) showed good fit, while secondary fit indices (the Comparative Fit Index and the Tucker-Lewis Index) showed acceptable fit. Loadings of 95 of the 97 items on their expected syndromes were statistically significant (mean = .63), indicating that the items measured the syndromes well. Correlations between latent factors, i.e., between the hypothesized syndrome constructs measured by the items, averaged .66. The correlations between syndromes reflect varying degrees of comorbidity between problems comprising particular pairs of syndromes. The results support the syndrome structure of the OASR for Portuguese elders, offering Portuguese clinicians and researchers a useful instrument for assessing a broad spectrum of psychopathology. The results also offer a core of empirically supported taxonomic constructs of later-life psychopathology as a basis for advancing clinical practice, training, and cross-cultural research. Copyright © 2017 John Wiley & Sons, Ltd.
Hansson, H; Lagerkvist, C J
2016-01-01
In this study, we sought to identify empirically the types of use and non-use values that motivate dairy farmers in their work relating to animal welfare of dairy cows. We also sought to identify how they prioritize between these use and non-use values. Use values are derived from productivity considerations; non-use values are derived from the wellbeing of the animals, independent of the present or future use the farmer may make of the animal. In particular, we examined the empirical content and structure of the economic value dairy farmers associate with animal welfare of dairy cows. Based on a best-worst scaling approach and data from 123 Swedish dairy farmers, we suggest that the economic value those farmers associate with animal welfare of dairy cows covers aspects of both use and non-use type, with non-use values appearing more important. Using principal component factor analysis, we were able to check unidimensionality of the economic value construct. These findings are useful for understanding why dairy farmers may be interested in considering dairy cow welfare. Such understanding is essential for improving agricultural policy and advice aimed at encouraging dairy farmers to improve animal welfare; communicating to consumers the values under which dairy products are produced; and providing a basis for more realistic assumptions when developing economic models about dairy farmers' behavior. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundararaman, Ravishankar; Gunceler, Deniz; Arias, T. A.
2014-10-07
Continuum solvation models enable efficient first-principles calculations of chemical reactions in solution, but require extensive parametrization and fitting for each solvent and class of solute systems. Here, we examine the assumptions of continuum solvation models in detail and replace empirical terms with physical models in order to construct a minimally-empirical solvation model. Specifically, we derive solvent radii from the nonlocal dielectric response of the solvent from ab initio calculations, construct a closed-form and parameter-free weighted-density approximation for the free energy of the cavity formation, and employ a pair-potential approximation for the dispersion energy. We show that the resulting model, with a single solvent-independent parameter (the electron density threshold, nc) and a single solvent-dependent parameter (the dispersion scale factor, s6), reproduces solvation energies of organic molecules in water, chloroform, and carbon tetrachloride with RMS errors of 1.1, 0.6 and 0.5 kcal/mol, respectively. We additionally show that fitting the solvent-dependent s6 parameter to the solvation energy of a single non-polar molecule does not substantially increase these errors. Parametrization of this model for other solvents, therefore, requires minimal effort and is possible without extensive databases of experimental solvation free energies.
Studying aerodynamic drag for modeling the kinematical behavior of CMEs
NASA Astrophysics Data System (ADS)
Temmer, M.; Vrsnak, B.; Moestl, C.; Zic, T.; Veronig, A. M.; Rollett, T.
2013-12-01
With the SECCHI instrument suite aboard STEREO, coronal mass ejections (CMEs) can be observed from multiple vantage points during their entire propagation all the way from the Sun to 1 AU. The propagation behavior of CMEs in interplanetary space is mainly influenced by the ambient solar wind flow. CMEs that are faster than the ambient solar wind get decelerated, whereas slower ones are accelerated, until the CME speed is finally adjusted to the solar wind speed. On a statistical basis, empirical models taking into account the drag force acting on CMEs are able to describe the observed kinematical behavior. For several well-observed CME events we derive the kinematical evolution by combining remote sensing and in situ data. The observed kinematical behavior is compared to results from current empirical and numerical propagation models. For this we mainly use the drag-based model (DBM) as well as the MHD model ENLIL. We aim to obtain the distance regime at which the solar wind drag force dominates the CME propagation and to quantify differences among model results. This work has received funding from the FWF: V195-N16, and the European Commission FP7 Projects eHEROES (284461, www.eheroes.eu) and COMESEP (263252, www.comesep.eu).
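A minimal numerical sketch of the drag-based equation of motion, dv/dt = -γ (v - w) |v - w|, where w is the ambient solar wind speed and γ the drag parameter (the published DBM also has an analytic solution; all values below are hypothetical):

```python
import numpy as np

def dbm_kinematics(r0_km, v0, w=400.0, gamma=2.0e-8, t_end=5 * 86400.0, dt=60.0):
    """Euler integration of dv/dt = -gamma*(v-w)*|v-w|; r in km, v in km/s."""
    r, v, track = r0_km, v0, []
    for t in np.arange(0.0, t_end, dt):
        a = -gamma * (v - w) * abs(v - w)   # gamma in km^-1 gives a in km/s^2
        v += a * dt
        r += v * dt
        track.append((t / 3600.0, r / 1.496e8, v))   # hours, AU, km/s
    return np.array(track)

# Fast CME launched at 20 solar radii (hypothetical event)
track = dbm_kinematics(r0_km=20 * 6.96e5, v0=1000.0)
arrival = track[track[:, 1] >= 1.0][0]
print(f"1 AU after {arrival[0]:.0f} h, arrival speed {arrival[2]:.0f} km/s")
```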
Drought Dynamics and Food Security in Ukraine
NASA Astrophysics Data System (ADS)
Kussul, N. M.; Kogan, F.; Adamenko, T. I.; Skakun, S. V.; Kravchenko, O. M.; Kryvobok, O. A.; Shelestov, A. Y.; Kolotii, A. V.; Kussul, O. M.; Lavrenyuk, A. M.
2012-12-01
In recent years food security has become a problem of great importance at global, national and regional scales. Ukraine is one of the most developed agricultural countries and one of the biggest crop producers in the world. According to 2011 statistics provided by the USDA FAS, Ukraine was the 8th largest exporter and 10th largest producer of wheat in the world. Therefore, identifying current and projecting future trends in climate and agriculture parameters is a key element in providing support to policy makers in food security. This paper combines remote sensing, meteorological, and modeling data to investigate the dynamics of extreme events, such as droughts, and their impact on agricultural production in Ukraine. Two main problems have been considered in the study: investigation of drought dynamics in Ukraine and its impact on crop production; and investigation of crop growth models for yield and production forecasting, compared with empirical models that use satellite-derived parameters and meteorological observations as predictors. Large-scale weather disasters in Ukraine such as drought were assessed using the vegetation health index (VHI) derived from satellite data. The method is based on estimation of green canopy stress/no stress from indices characterizing the moisture and thermal conditions of the vegetation canopy. These conditions are derived from the reflectance/emission in the red, near-infrared and infrared parts of the solar spectrum measured by the AVHRR flown on the NOAA afternoon polar-orbiting satellites since 1981. Droughts were categorized into exceptional, extreme, severe and moderate, and the drought area (DA, in % of total Ukrainian area) was calculated for each category. It was found that the maximum DA over the past 20 years was 10% for exceptional droughts, 20% for extreme droughts, 50% for severe droughts, and 80% for moderate droughts. It was also shown that, in general, drought intensity and area did not increase considerably over the past 10 years. The relationship between DA of different categories at oblast level and agricultural production will be discussed as well. A comparative study was carried out to assess three approaches to forecasting winter wheat yield in Ukraine at oblast level: (i) an empirical regression-based model that uses as a predictor 16-day NDVI composites derived from MODIS at 250 m resolution, (ii) an empirical regression-based model that uses meteorological parameters as predictors, and (iii) the Crop Growth Monitoring System (CGMS), adapted for Ukraine, which is based on the WOFOST crop growth simulation model and meteorological parameters. These three approaches were calibrated on 2000-2009 and 2000-2010 data, and compared by performing forecasts on independent data for 2010 and 2011. For 2010, the best results in terms of root mean square error (RMSE, by oblast, deviation of predicted values from official statistics) were achieved using the CGMS models: 0.3 t/ha. For the NDVI and meteorological models, RMSE values were 0.79 and 0.77 t/ha, respectively. When forecasting winter wheat yield for 2011, the following RMSE values were obtained: 0.58 t/ha for CGMS, 0.56 t/ha for the meteorological model, and 0.62 t/ha for NDVI; in this case the performance of all three approaches was roughly the same. Acknowledgements. This work was supported by the U.S. CRDF Grant "Analysis of climate change & food security based on remote sensing & in situ data sets" (UKB2-2972-KV-09).
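A minimal sketch of the empirical regression approach (i): regress oblast-level winter wheat yield on seasonal NDVI composites and report the RMSE of the fit. The data below are simulated; in practice the model is calibrated on past years and evaluated on a held-out year:

```python
import numpy as np

# Hypothetical oblast-level training data: 16-day NDVI composites over the
# growing season (features) and winter wheat yield in t/ha (target)
rng = np.random.default_rng(10)
ndvi = rng.uniform(0.2, 0.8, size=(25, 8))                 # 25 oblasts x 8 composites
yield_t_ha = 1.0 + 4.0 * ndvi.mean(axis=1) + rng.normal(0.0, 0.3, size=25)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(25), ndvi])
coef, *_ = np.linalg.lstsq(X, yield_t_ha, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - yield_t_ha) ** 2))
print(f"in-sample RMSE: {rmse:.2f} t/ha")
```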
Trapped Proton Environment in Medium-Earth Orbit (2000-2010)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yue; Friedel, Reinhard Hans; Kippen, Richard Marc
This report describes the method used to derive fluxes of the trapped proton belt along the GPS orbit (i.e., a Medium-Earth Orbit) during 2000-2010, a period almost covering a solar cycle. This method utilizes a newly developed empirical proton radiation-belt model, with the model output scaled by GPS in-situ measurements, to generate proton fluxes that cover a wide range of energies (50 keV-6 MeV) while retaining temporal features. The new proton radiation-belt model is developed based upon CEPPAD proton measurements from the Polar mission (1996-2007). Compared to the de facto standard empirical model, AP8, this model is not only based upon a new data set representative of the proton belt during the same period covered by GPS, but can also provide statistical information on flux values, such as worst cases and occurrence percentiles, rather than mean values alone. The comparison shows quite different results from the two models and suggests that the commonly accepted error factor of 2 on the AP8 flux output over-simplifies and thus underestimates variations of the proton belt. Output fluxes from this new model along the GPS orbit are further scaled by the ns41 in-situ data so as to reflect the dynamic nature of protons in the outer radiation belt at geomagnetically active times. Derived daily proton fluxes along the GPS ns41 orbit, whose data files are delivered along with this report, are depicted to illustrate the trapped proton environment in Medium-Earth Orbit. Uncertainties on those daily proton fluxes from two sources are evaluated: one from the new proton-belt model, which has error factors < ~3; the other from the in-situ measurements, with error factors of ~5.
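The scaling step described above, multiplying the model's output spectrum by the ratio of measured to modeled flux at an overlapping reference energy, preserves the spectral shape while pinning the absolute level to the in-situ data. A minimal sketch (all numbers hypothetical, not the report's values):

```python
import numpy as np

# Illustrative model proton spectrum along the orbit
energies = np.array([0.05, 0.1, 0.5, 1.0, 3.0, 6.0])    # MeV
model_flux = np.array([3e6, 1e6, 8e4, 1e4, 4e2, 1e1])   # cm^-2 s^-1 sr^-1 MeV^-1

# Hypothetical in-situ measurement at a reference energy of 1 MeV
e_ref, measured_ref = 1.0, 1.6e4
scale = measured_ref / np.interp(e_ref, energies, model_flux)

scaled_flux = model_flux * scale    # spectral shape kept, level tracks the data
print(scale, scaled_flux[3])
```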
NASA Astrophysics Data System (ADS)
Brigandì, Giuseppina; Tito Aronica, Giuseppe; Bonaccorso, Brunella; Gueli, Roberto; Basile, Giuseppe
2017-09-01
The main focus of the paper is to present a flood and landslide early warning system, named HEWS (Hydrohazards Early Warning System), specifically developed for the Civil Protection Department of Sicily, based on the combined use of rainfall thresholds, soil moisture modelling and quantitative precipitation forecast (QPF). The warning system covers the 9 Alert Zones into which Sicily has been divided and is based on a threshold system of three increasing critical levels: ordinary, moderate and high. In this system, for early flood warning, a Soil Moisture Accounting (SMA) model provides daily soil moisture conditions, which allow the selection of a specific set of three rainfall thresholds, one for each critical level considered, to be used for issuing the alert bulletin. Wetness indexes, representative of the soil moisture conditions of a catchment, are calculated using a simple, spatially-lumped rainfall-streamflow model, based on the SCS-CN method and on the unit hydrograph approach, which requires daily observed and/or predicted rainfall and temperature data as input. For the calibration of this model, daily continuous time series of rainfall, streamflow and air temperature data are used. An event-based lumped rainfall-runoff model has been used, instead, for the derivation of the rainfall thresholds for each catchment in Sicily characterised by an area larger than 50 km2. In particular, a Kinematic Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall was developed for this purpose. For rainfall-induced shallow landslide warning, empirical rainfall thresholds provided by Gariano et al. (2015) have been included in the system. They were derived on an empirical basis starting from a catalogue of 265 shallow landslides in Sicily in the period 2002-2012. Finally, the Delft-FEWS operational forecasting platform has been applied to link input data, the SMA model and the rainfall threshold models to produce warnings on a daily basis for the entire region.
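The SCS-CN routine referenced above converts event rainfall into net (effective) rainfall. A minimal sketch of the standard textbook form of the method, not the authors' exact implementation:

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) from event rainfall via the SCS-CN method.

    p_mm:     event rainfall depth (mm)
    cn:       curve number (dimensionless, 0-100)
    ia_ratio: initial abstraction as a fraction of max retention (0.2 standard)
    """
    s = 25400.0 / cn - 254.0    # maximum potential retention (mm)
    ia = ia_ratio * s           # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(60.0, cn=75))   # ~15 mm of net rainfall
```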
Against the empirical viability of the Deutsch-Wallace-Everett approach to quantum mechanics
NASA Astrophysics Data System (ADS)
Dawid, Richard; Thébault, Karim P. Y.
2014-08-01
The subjective Everettian approach to quantum mechanics presented by Deutsch and Wallace fails to constitute an empirically viable theory of quantum phenomena. The decision theoretic implementation of the Born rule realized in this approach provides no basis for rejecting Everettian quantum mechanics in the face of empirical data that contradicts the Born rule. The approach of Greaves and Myrvold, which provides a subjective implementation of the Born rule as well but derives it from empirical data rather than decision theoretic arguments, avoids the problem faced by Deutsch and Wallace and is empirically viable. However, there is good reason to cast doubt on its scientific value.
NASA Astrophysics Data System (ADS)
Li, Huajiao; Fang, Wei; An, Haizhong; Gao, Xiangyun; Yan, Lili
2016-05-01
Economic networks in the real world are not homogeneous; therefore, it is important to study economic networks with heterogeneous nodes and edges to simulate a real network more precisely. In this paper, we present an empirical study of the one-mode derivative holding-based network constructed from the two-mode affiliation network of two sets of actors, using data on worldwide listed energy companies and their shareholders. First, we identify the primitive relationship in the two-mode affiliation network of the two sets of actors. Then, we present the method used to construct the derivative network based on the shareholding relationship between the two sets of actors and the affiliation relationship between actors and events. After constructing the derivative network, we analyze different topological features at the node level, edge level and entire-network level, and interpret the values of these topological features in combination with the empirical data. This study is helpful for expanding the use of complex networks to heterogeneous economic networks. For empirical research on the worldwide listed energy stock market, this study is useful for discovering the inner relationships between nations and regions from a new perspective.
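The construction step, projecting a two-mode shareholder/company affiliation network into a one-mode network among one set of actors, can be sketched with networkx; the node names below are invented for illustration:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical two-mode affiliation network: shareholders -> listed companies
B = nx.Graph()
shareholders = ["fund_A", "fund_B", "fund_C"]
companies = ["energy_1", "energy_2"]
B.add_nodes_from(shareholders, bipartite=0)
B.add_nodes_from(companies, bipartite=1)
B.add_edges_from([("fund_A", "energy_1"), ("fund_B", "energy_1"),
                  ("fund_B", "energy_2"), ("fund_C", "energy_2")])

# One-mode derivative network among shareholders: an edge means two funds
# hold shares in at least one common company (weight = number of such firms)
G = bipartite.weighted_projected_graph(B, shareholders)
print(G.edges(data=True))
```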
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersson, Anders David Ragnar; Pastore, Giovanni; Liu, Xiang-Yang
2014-11-07
This report summarizes the development of new fission gas diffusion models from lower-length-scale simulations and the assessment of these models in terms of annealing experiments and fission gas release simulations using the BISON fuel performance code. Based on the mechanisms established from density functional theory (DFT) and empirical potential calculations, continuum models for diffusion of xenon (Xe) in UO2 were derived for both intrinsic conditions and under irradiation. The importance of the large XeU3O cluster (a Xe atom in a uranium + oxygen vacancy trap site with two bound uranium vacancies) is emphasized, which is a consequence of its high mobility and stability. These models were implemented in the MARMOT phase field code, which is used to calculate effective Xe diffusivities for various irradiation conditions. The effective diffusivities were used in BISON to calculate fission gas release for a number of test cases. The results are assessed against experimental data, and future directions for research are outlined based on the conclusions.
A Behavior-Analytic Account of Motivational Interviewing
ERIC Educational Resources Information Center
Christopher, Paulette J.; Dougher, Michael J.
2009-01-01
Several published reports have now documented the clinical effectiveness of motivational interviewing (MI). Despite its effectiveness, there are no generally accepted or empirically supported theoretical accounts of its effects. The theoretical accounts that do exist are mentalistic, descriptive, and not based on empirically derived behavioral…
Classification of Marital Relationships: An Empirical Approach.
ERIC Educational Resources Information Center
Snyder, Douglas K.; Smith, Gregory T.
1986-01-01
Derives an empirically based classification system of marital relationships, employing a multidimensional self-report measure of marital interaction. Spouses' profiles on the Marital Satisfaction Inventory for samples of clinic and nonclinic couples were subjected to cluster analysis, resulting in separate five-group typologies for husbands and…
NASA Astrophysics Data System (ADS)
Yang, J.; Medlyn, B.; De Kauwe, M. G.; Duursma, R.
2017-12-01
Leaf Area Index (LAI) is a key variable in modelling terrestrial vegetation, because it has a major impact on carbon, water and energy fluxes. However, LAI is difficult to predict: several recent intercomparisons have shown that modelled LAI differs significantly among models, and between models and satellite-derived estimates. Empirical studies show that long-term mean LAI is strongly related to mean annual precipitation. This observation is predicted by the theory of ecohydrological equilibrium, which provides a promising alternative means to predict steady-state LAI. We implemented this theory in a simple optimisation model. We hypothesized that, when water availability is limited, plants should adjust long-term LAI and stomatal behavior (g1) to maximize net canopy carbon export, under the constraint that canopy transpiration is a fixed fraction of total precipitation. We evaluated the predicted LAI (Lopt) for Australia against ground-based observations of LAI at 135 sites, and continental-scale satellite-derived estimates. For the site-level data, the RMSE of predicted Lopt was 0.14 m2 m-2, which was similar to the RMSE of a comparison of the data against nine-year mean satellite-derived LAI at those sites. Continentally, Lopt had an R2 of over 70% when compared to satellite-derived LAI, which is comparable to the R2 obtained when different satellite products are compared against each other. The predicted response of Lopt to the increase in atmospheric CO2 over the last 30 years also agreed with satellite-derived estimates. Our results indicate that long-term equilibrium LAI can be successfully predicted from a simple application of ecohydrological theory. We suggest that this theory could be usefully incorporated into terrestrial vegetation models to improve their predictions of LAI.
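A toy version of this optimization, assuming a saturating canopy gain, a linear leaf cost, and light-driven transpiration capped at a fixed fraction of precipitation (all functional forms and parameter values are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy parameters: max assimilation, light extinction, leaf cost,
# potential transpiration (mm/yr), fixed fraction, annual precipitation (mm)
A_max, k, cost = 3.0, 0.5, 0.3
E_pot, f, P = 1200.0, 0.7, 600.0

def net_export(lai):
    """Net canopy carbon export: saturating gain minus linear leaf cost."""
    return A_max * (1.0 - np.exp(-k * lai)) - cost * lai

# Carbon-limited optimum (unconstrained maximizer of net export)
res = minimize_scalar(lambda L: -net_export(L), bounds=(0.01, 10.0),
                      method="bounded")
lai_carbon = res.x

# Water constraint: E(L) = E_pot * (1 - exp(-k L)) must not exceed f * P
lai_water = -np.log(1.0 - f * P / E_pot) / k

lai_opt = min(lai_carbon, lai_water)
print(f"L_opt = {lai_opt:.2f} (carbon {lai_carbon:.2f}, water {lai_water:.2f})")
```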
NASA Astrophysics Data System (ADS)
Fuchs, Richard; Prestele, Reinhard; Verburg, Peter H.
2018-05-01
The consideration of gross land changes, meaning all area gains and losses within a pixel or administrative unit (e.g. country), plays an essential role in the estimation of total land changes. Gross land changes affect the magnitude of total land changes, which feeds back to the attribution of biogeochemical and biophysical processes related to climate change in Earth system models. Global empirical studies on gross land changes are currently lacking. Whilst the relevance of gross changes for global change has been indicated in the literature, it is not accounted for in future land change scenarios. In this study, we extract gross and net land change dynamics from large-scale and high-resolution (30-100 m) remote sensing products to create a new global gross and net change dataset. Subsequently, we developed an approach to integrate our empirically derived gross and net changes with the results of future simulation models by accounting for the gross and net change addressed by the land use model and the gross and net change that is below the resolution of modelling. Based on our empirical data, we found that gross land change within 0.5° grid cells was substantially larger than net changes in all parts of the world. As 0.5° grid cells are a standard resolution of Earth system models, this leads to an underestimation of the amount of change. This finding contradicts earlier studies, which assumed gross land changes to appear in shifting cultivation areas only. Applied in a future scenario, the consideration of gross land changes led to approximately 50 % more land changes globally compared to a net land change representation. Gross land changes were most important in heterogeneous land systems with multiple land uses (e.g. shifting cultivation, smallholder farming, and agro-forestry systems). Moreover, the importance of gross changes decreased over time due to further polarization and intensification of land use. Our results serve as an empirical database for land change dynamics that can be applied in Earth system models and integrated assessment models.
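The distinction driving these results is arithmetic: within a grid cell, gross change sums all gains and losses while net change reports only their difference. A minimal illustration on hypothetical pixel maps:

```python
import numpy as np

# Hypothetical per-pixel forest-cover maps (1 = forest) at two dates,
# aggregated over one coarse grid cell
t0 = np.array([[1, 1, 0, 0], [1, 0, 0, 1]])
t1 = np.array([[0, 1, 1, 0], [1, 1, 0, 0]])

gains = np.sum((t1 == 1) & (t0 == 0))    # pixels gained
losses = np.sum((t0 == 1) & (t1 == 0))   # pixels lost

gross_change = gains + losses            # all area gains and losses
net_change = abs(gains - losses)         # what a coarse cell would report
print(gross_change, net_change)          # 4 vs 0: gross >> net
```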
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Mast, Jeffrey C.; Winick, Jeremy R.; Russell, James M., III; Mlynczak, Martin G.; Evans, David S.
2007-01-01
The large thermospheric infrared radiance enhancements observed from the TIMED/SABER experiment during recent solar storms provide an exciting opportunity to study the influence of solar-geomagnetic disturbances on the upper atmosphere and ionosphere. In particular, nighttime enhancements of 4.3 μm emission, due to vibrational excitation and radiative emission by NO+, provide an excellent proxy to study and analyze the response of the ionospheric E-region to auroral electron dosing and storm-time enhancements to the E-region electron density. In this paper we give a status report of on-going work on model and data analysis methodologies of deriving NO+ 4.3 μm volume emission rates, a proxy for the storm-time E-region response, and the approach for deriving an empirical storm-time correction to International Reference Ionosphere (IRI) E-region NO+ and electron densities.
Target strengths of two abundant mesopelagic fish species.
Scoulding, Ben; Chu, Dezhang; Ona, Egil; Fernandes, Paul G
2015-02-01
Mesopelagic fish of the Myctophidae and Sternoptychidae families dominate the biomass of the oceanic deep scattering layers and, therefore, have important ecological roles within these ecosystems. Interest in the commercial exploitation of these fish is growing, so the development of techniques for estimating their abundance, distribution and, ultimately, sustainable exploitation are essential. The acoustic backscattering characteristics for two size classes of Maurolicus muelleri and Benthosema glaciale are reported here based on swimbladder morphology derived from digitized soft x-ray images, and empirical (in situ) measurements of target strength (TS) derived from an acoustic survey in the Norwegian Sea. A backscattering model based on a gas-filled prolate spheroid was used to predict the theoretical TS for both species across a frequency range between 0 and 250 kHz. Sensitivity analyses of the TS model to the modeling parameters indicate that TS is rather sensitive to the viscosity, swimbladder volume ratio, and tilt, which can result in substantial changes to the TS. Theoretical TS predictions close to the resonance frequency were in good agreement (±2 dB) with mean in situ TS derived from the areas acoustically surveyed that were spatially and temporally consistent with the trawl information for both species.
NASA Technical Reports Server (NTRS)
Lyon, R. J. P.; Prelat, A. E.; Kirk, R. (Principal Investigator)
1981-01-01
An attempt was made to match HCMM- and U2HCMR-derived temperature data over two test sites of very local size to similar data collected in the field at nearly the same times. Results indicate that HCMM investigations using resolution cells of 500 m or so are best conducted with areally-extensive sites, rather than point observations. The excellent quality day-VIS imagery is particularly useful for lineament studies, as is the DELTA-T imagery. Attempts to register the ground-observed temperatures (even for 0.5 sq mile targets) were unsuccessful due to excessive pixel-to-pixel noise in the HCMM data. Several computer models were explored and related to thermal parameter value changes with observed data. Unless quite complex models are used, with many parameters which can be observed (perhaps not even measured) only under remote sensing conditions (e.g., roughness, wind shear, etc.), the model outputs do not match the observed data. Empirical relationships may be most readily studied.
NASA Astrophysics Data System (ADS)
Vaccaro, S. R.
2011-09-01
The voltage dependence of the ionic and gating currents of a K channel is dependent on the activation barriers of a voltage sensor with a potential function which may be derived from the principal electrostatic forces on an S4 segment in an inhomogeneous dielectric medium. By variation of the parameters of a voltage-sensing domain model, consistent with x-ray structures and biophysical data, the lowest frequency of the survival probability of each stationary state derived from a solution of the Smoluchowski equation provides a good fit to the voltage dependence of the slowest time constant of the ionic current in a depolarized membrane, and the gating current exhibits a rising phase that precedes an exponential relaxation. For each depolarizing potential, the calculated time dependence of the survival probabilities of the closed states of an alpha helical S4 sensor are in accord with an empirical model of the ionic and gating currents recorded during the activation process.
You, Joyce H S; Chan, Eva S K; Leung, Maggie Y K; Ip, Margaret; Lee, Nelson L S
2012-01-01
Seasonal and 2009 H1N1 influenza viruses may cause severe diseases and result in excess hospitalization and mortality in older and younger adults, respectively. Early antiviral treatment may improve clinical outcomes. We examined potential outcomes and costs of test-guided versus empirical treatment in patients hospitalized for suspected influenza in Hong Kong. We designed a decision tree to simulate potential outcomes of four management strategies in adults hospitalized for severe respiratory infection suspected of influenza: "immunofluorescence-assay" (IFA)- or "polymerase-chain-reaction" (PCR)-guided oseltamivir treatment, "empirical treatment plus PCR" and "empirical treatment alone". Model inputs were derived from the literature. The average prevalence (11%) of influenza in 2010-2011 (58% being 2009 H1N1) among cases of respiratory infections was used in the base-case analysis. The primary outcome simulated was the incremental cost per quality-adjusted life-year (QALY) gained (ICER), from the Hong Kong healthcare providers' perspective. In the base-case analysis, "empirical treatment alone" was shown to be the most cost-effective strategy and dominated the other three options. Sensitivity analyses showed that "PCR-guided treatment" would dominate "empirical treatment alone" when the daily cost of oseltamivir exceeded USD 18, or when influenza prevalence was <2.5% and the predominant circulating viruses were not 2009 H1N1. Using USD 50,000 as the threshold of willingness-to-pay, "empirical treatment alone" and "PCR-guided treatment" were cost-effective 97% and 3% of the time, respectively, in 10,000 Monte-Carlo simulations. During influenza epidemics, empirical antiviral treatment appears to be a cost-effective strategy in managing patients hospitalized with severe respiratory infection suspected of influenza, from the perspective of healthcare providers in Hong Kong.
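The decision statistic here, the ICER, is the incremental cost of one strategy over another divided by the incremental QALYs, compared against a willingness-to-pay threshold (USD 50,000 in the study). A minimal sketch with hypothetical per-patient values:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained.

    Returns None when no QALYs are gained; in that case the ratio is not
    meaningful and dominance must be checked directly instead.
    """
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_qaly <= 0:
        return None
    return d_cost / d_qaly

# Hypothetical strategy comparison (USD, QALYs per patient):
# 2500 USD/QALY, well below a 50,000 USD willingness-to-pay threshold
print(icer(cost_new=1350.0, qaly_new=11.42, cost_ref=1300.0, qaly_ref=11.40))
```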
Empirical Ground Motion Characterization of Induced Seismicity in Alberta and Oklahoma
NASA Astrophysics Data System (ADS)
Novakovic, M.; Atkinson, G. M.; Assatourians, K.
2017-12-01
We develop empirical ground-motion prediction equations (GMPEs) for ground motions from induced earthquakes in Alberta and Oklahoma following the stochastic-model-based method of Atkinson et al. (2015, BSSA). The Oklahoma ground-motion database is compiled from over 13,000 small to moderate seismic events (M 1 to 5.8) recorded at 1600 seismic stations, at distances from 1 to 750 km. The Alberta database is compiled from over 200 small to moderate seismic events (M 1 to 4.2) recorded at 50 regional stations, at distances from 30 to 500 km. A generalized inversion is used to solve for regional source, attenuation and site parameters. The obtained parameters describe the regional attenuation, stress parameter and site amplification. Resolving these parameters allows for the derivation of regionally calibrated GMPEs that can be used to compare ground-motion observations between wastewater-injection-induced (Oklahoma) and hydraulic-fracture-induced (Alberta) events, and further compare induced observations with ground motions resulting from natural sources (California, NGA-West2). The derived GMPEs have applications for the evaluation of hazards from induced seismicity and can be used to track amplitudes across the regions in real time, which is useful for ground-motion-based alerting systems and traffic light protocols.
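A regionally calibrated GMPE of the kind described is, at its simplest, a regression of log ground motion on magnitude and distance terms. A toy least-squares version on synthetic records (the functional form and coefficients are illustrative, not the derived GMPEs):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic records: magnitude M and hypocentral distance R (km)
M = rng.uniform(2.0, 5.0, 200)
R = rng.uniform(5.0, 300.0, 200)

# Simple functional form: ln(PGA) = c0 + c1*M + c2*ln(R) + c3*R
# (c2: geometrical spreading, c3: anelastic attenuation) plus residual noise
c_true = np.array([-4.0, 1.1, -1.2, -0.004])
X = np.column_stack([np.ones_like(M), M, np.log(R), R])
ln_pga = X @ c_true + rng.normal(0.0, 0.5, M.size)

c_hat, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
print(c_hat)    # recovers c_true within sampling error
```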
NASA Astrophysics Data System (ADS)
Yu, Hongjuan; Guo, Jinyun; Kong, Qiaoli; Chen, Xiaodong
2018-04-01
The static observation data from a relative gravimeter contain noise and signals such as gravity tides. This paper focuses on the extraction of gravity tides from static relative gravimeter data, applying for the first time the combined method of empirical mode decomposition (EMD) and independent component analysis (ICA), called the EMD-ICA method. The experimental results from CG-5 gravimeter (SCINTREX Limited, Ontario, Canada) data show that the gravity tide time series derived by EMD-ICA are consistent with the theoretical reference (Longman formula), and the RMS of their differences reaches only 4.4 μGal. The time series of the gravity tides derived by EMD-ICA have a strong correlation with the theoretical time series, with a correlation coefficient greater than 0.997. The accuracy of the gravity tides estimated by EMD-ICA is comparable to the theoretical model and is slightly higher than that of independent component analysis (ICA) alone. EMD-ICA overcomes the limitation that ICA must process multiple observations, and slightly improves the extraction accuracy and reliability of gravity tides from relative gravimeter data compared to those estimated with ICA.
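An illustrative EMD-ICA pipeline, assuming the PyEMD and scikit-learn packages are available: the signal is first decomposed into intrinsic mode functions (IMFs), which are then treated as mixed channels and unmixed by ICA; the tidal source would then be identified, e.g., by correlation with the Longman reference. The synthetic input below stands in for real CG-5 readings:

```python
import numpy as np
from PyEMD import EMD                      # pip install EMD-signal
from sklearn.decomposition import FastICA

# Synthetic stand-in for static relative-gravimeter readings: a semidiurnal
# tide-like component (M2 period ~0.5175 d) plus instrument drift and noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)           # days
signal = (30.0 * np.sin(2 * np.pi * t / 0.5175)
          + 2.0 * t + rng.normal(0.0, 3.0, t.size))

imfs = EMD().emd(signal)                   # step 1: decompose into IMFs

# Step 2: unmix the IMFs with FastICA into independent sources
ica = FastICA(n_components=min(4, imfs.shape[0]), random_state=0)
sources = ica.fit_transform(imfs.T)        # shape (n_samples, n_sources)
print(imfs.shape, sources.shape)
```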
Empirical improvements for estimating earthquake response spectra with random‐vibration theory
Boore, David; Thompson, Eric M.
2012-01-01
The stochastic method of ground‐motion simulation is often used in combination with the random‐vibration theory to directly compute ground‐motion intensity measures, thereby bypassing the more computationally intensive time‐domain simulations. Key to the application of random‐vibration theory to simulate response spectra is determining the duration (Drms) used in computing the root‐mean‐square oscillator response. Boore and Joyner (1984) originally proposed an equation for Drms , which was improved upon by Liu and Pezeshk (1999). Though these equations are both substantial improvements over using the duration of the ground‐motion excitation for Drms , we document systematic differences between the ground‐motion intensity measures derived from the random‐vibration and time‐domain methods for both of these Drms equations. These differences are generally less than 10% for most magnitudes, distances, and periods of engineering interest. Given the systematic nature of the differences, however, we feel that improved equations are warranted. We empirically derive new equations from time‐domain simulations for eastern and western North America seismological models. The new equations improve the random‐vibration simulations over a wide range of magnitudes, distances, and oscillator periods.
Modeling conflict and error in the medial frontal cortex.
Mayer, Andrew R; Teshiba, Terri M; Franco, Alexandre R; Ling, Josef; Shane, Matthew S; Stephen, Julia M; Jung, Rex E
2012-12-01
Despite intensive study, the role of the dorsal medial frontal cortex (dMFC) in error monitoring and conflict processing remains actively debated. The current experiment manipulated conflict type (stimulus conflict only or stimulus and response selection conflict) and utilized a novel modeling approach to isolate error and conflict variance during a multimodal numeric Stroop task. Specifically, hemodynamic response functions resulting from two statistical models that either included or isolated variance arising from relatively few error trials were directly contrasted. Twenty-four participants completed the task while undergoing event-related functional magnetic resonance imaging on a 1.5-Tesla scanner. Response times monotonically increased based on the presence of pure stimulus or stimulus and response selection conflict. Functional results indicated that dMFC activity was present during trials requiring response selection and inhibition of competing motor responses, but absent during trials involving pure stimulus conflict. A comparison of the different statistical models suggested that relatively few error trials contributed to a disproportionate amount of variance (i.e., activity) throughout the dMFC, but particularly within the rostral anterior cingulate gyrus (rACC). Finally, functional connectivity analyses indicated that an empirically derived seed in the dorsal ACC/pre-SMA exhibited strong connectivity (i.e., positive correlation) with prefrontal and inferior parietal cortex but was anti-correlated with the default-mode network. An empirically derived seed from the rACC exhibited the opposite pattern, suggesting that sub-regions of the dMFC exhibit different connectivity patterns with other large scale networks implicated in internal mentations such as daydreaming (default-mode) versus the execution of top-down attentional control (fronto-parietal). Copyright © 2011 Wiley Periodicals, Inc.
Wilson, Sylia; Schalet, Benjamin D.; Hicks, Brian M.; Zucker, Robert A.
2013-01-01
The present study used an empirical, “bottom-up” approach to delineate the structure of the California Child Q-Set (CCQ), a comprehensive set of personality descriptors, in a sample of 373 preschool-aged children. This approach yielded two broad trait dimensions, Adaptive Socialization (emotional stability, compliance, intelligence) and Anxious Inhibition (emotional/behavioral introversion). Results demonstrate the value of using empirical derivation to investigate the structure of personality in young children, speak to the importance of early-evident personality traits for adaptive development, and are consistent with a growing body of evidence indicating that personality structure in young children is similar, but not identical to, that in adults, suggesting a model of broad personality dimensions in childhood that evolve into narrower traits in adulthood. PMID:24223448
Tree Guidelines for Inland Empire Communities
E.G. McPherson; J.R. Simpson; P.J. Peper; Q. Xiao; D.R. Pittenger; D.R. Hodel
2001-01-01
Communities in the Inland Empire region of California contain over 8 million people, or about 25% of the state's population. The region's inhabitants derive great benefit from trees because compared to coastal areas, the summers are hotter and air pollution levels are higher. The region's climate is still mild enough to grow a diverse mix of trees. The Inland Empire's...
NASA Technical Reports Server (NTRS)
1981-01-01
The application of statistical methods to recorded ozone measurements is discussed. A long-term depletion of ozone at the magnitudes predicted by the NAS would be harmful to most forms of life. Empirical prewhitening filters, whose derivation is independent of the underlying physical mechanisms, were analyzed. Statistical analysis provides a system of checks and balances. Time-series filtering separates variations into systematic and random parts, ensures errors are uncorrelated, and identifies significant phase-lag dependencies. The use of time series modeling to enhance the capability of detecting trends is discussed.
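A minimal empirical prewhitening sketch in the spirit described: estimate the lag-1 autocorrelation directly from the data, with no physical model, and filter it out so that residuals are approximately uncorrelated before trend detection (synthetic series, illustrative only):

```python
import numpy as np

def prewhiten_ar1(x):
    """Empirical AR(1) prewhitening: estimate the lag-1 coefficient from the
    data and remove it, leaving approximately uncorrelated residuals."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    phi = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)   # lag-1 coefficient
    return x[1:] - phi * x[:-1], phi

# Synthetic ozone-like monthly series: weak trend plus AR(1) noise
rng = np.random.default_rng(0)
n, phi_true = 360, 0.6
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = phi_true * noise[i - 1] + rng.normal()
series = -0.01 * np.arange(n) + noise

white, phi_hat = prewhiten_ar1(series)
print(f"estimated phi = {phi_hat:.2f}")
```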
NASA Technical Reports Server (NTRS)
1973-01-01
Application of the Phillips theory to engineering calculations of rocket and high speed jet noise radiation is reported. Presented are a detailed derivation of the theory, the composition of the numerical scheme, and discussions of the practical problems arising in the application of the present noise prediction method. The present method still contains some empirical elements, yet it provides a unified approach in the prediction of sound power, spectrum, and directivity.
A Symmetric Time-Varying Cluster Rate of Descent Model
NASA Technical Reports Server (NTRS)
Ray, Eric S.
2015-01-01
A model of the time-varying rate of descent of the Orion vehicle was developed based on the observed correlation between canopy projected area and drag coefficient. This initial version of the model assumes cluster symmetry and only varies the vertical component of velocity. The cluster fly-out angle is modeled as a series of sine waves based on flight test data. The projected area of each canopy is synchronized with the primary fly-out angle mode. The sudden loss of projected area during canopy collisions is modeled at minimum fly-out angles, leading to brief increases in rate of descent. The cluster geometry is converted to drag coefficient using empirically derived constants. A more complete model is under development, which computes the aerodynamic response of each canopy to its local incidence angle.
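A compact sketch of the ingredients named above: a sine-wave fly-out angle, projected area synchronized with the primary mode, a sudden area loss near fly-out minima, and an empirical conversion from geometry to drag. All constants are hypothetical, not the model's calibrated values:

```python
import numpy as np

# Illustrative constants (not the model's calibrated values)
W = 9300.0 * 9.81       # suspended weight (N)
rho = 1.225             # sea-level air density (kg/m^3)
cd_per_area = 0.9       # empirically derived drag coefficient per unit area

def rate_of_descent(projected_area):
    """Steady-state vertical velocity from the drag-weight balance."""
    return np.sqrt(2.0 * W / (rho * cd_per_area * projected_area))

# Fly-out angle as a sine wave; cluster projected area synchronized with the
# primary mode, with a sudden area loss near minimum fly-out (canopy collision)
t = np.linspace(0.0, 60.0, 1200)
phi = 10.0 + 8.0 * np.sin(2.0 * np.pi * t / 12.0)        # fly-out angle (deg)
area = 2800.0 * (0.85 + 0.15 * np.sin(2.0 * np.pi * t / 12.0))
area[phi < 3.0] *= 0.7                                   # canopy-collision dip
v = rate_of_descent(area)
print(f"mean RoD = {v.mean():.1f} m/s, peak = {v.max():.1f} m/s")
```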
A strategy for understanding noise-induced annoyance
NASA Astrophysics Data System (ADS)
Fidell, S.; Green, D. M.; Schultz, T. J.; Pearsons, K. S.
1988-08-01
This report provides a rationale for development of a systematic approach to understanding noise-induced annoyance. Two quantitative models are developed to explain: (1) the prevalence of annoyance due to residential exposure to community noise sources; and (2) the intrusiveness of individual noise events. Both models deal explicitly with the probabilistic nature of annoyance, and assign clear roles to acoustic and nonacoustic determinants of annoyance. The former model provides a theoretical foundation for empirical dosage-effect relationships between noise exposure and community response, while the latter model differentiates between the direct and immediate annoyance of noise intrusions and response bias factors that influence the reporting of annoyance. The assumptions of both models are identified, and the nature of the experimentation necessary to test hypotheses derived from the models is described.
NASA Astrophysics Data System (ADS)
Izett, Jonathan G.; Fennel, Katja
2018-02-01
Rivers deliver large amounts of terrestrially derived materials (such as nutrients, sediments, and pollutants) to the coastal ocean, but a global quantification of the fate of this delivery is lacking. Nutrients can accumulate on shelves, potentially driving high levels of primary production with negative consequences like hypoxia, or be exported across the shelf to the open ocean where impacts are minimized. Global biogeochemical models cannot resolve the relatively small-scale processes governing river plume dynamics and cross-shelf export; instead, river inputs are often parameterized assuming an "all or nothing" approach. Recently, Sharples et al. (2017), https://doi.org/10.1002/2016GB005483 proposed the SP number—a dimensionless number relating the estimated size of a plume as a function of latitude to the local shelf width—as a simple estimator of cross-shelf export. We extend their work, which is solely based on theoretical and empirical scaling arguments, and address some of its limitations using a numerical model of an idealized river plume. In a large number of simulations, we test whether the SP number can accurately describe export in unforced cases and with tidal and wind forcings imposed. Our numerical experiments confirm that the SP number can be used to estimate export and enable refinement of the quantitative relationships proposed by Sharples et al. We show that, in general, external forcing has only a weak influence compared to latitude and derive empirical relationships from the results of the numerical experiments that can be used to estimate riverine freshwater export to the open ocean.
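The SP number compares a latitude-dependent plume length scale with the local shelf width. As a stand-in for the published scaling, the sketch below uses the baroclinic Rossby deformation radius as the plume scale (parameter values hypothetical):

```python
import numpy as np

def rossby_radius(g_prime, h, lat_deg):
    """Internal Rossby deformation radius (m): sqrt(g' h) / |f|."""
    f = 2.0 * 7.2921e-5 * np.sin(np.radians(lat_deg))   # Coriolis parameter
    return np.sqrt(g_prime * h) / abs(f)

def sp_like_number(g_prime, h, lat_deg, shelf_width_m):
    """Plume length scale relative to shelf width; larger values suggest the
    plume is more likely to reach the shelf edge and export freshwater."""
    return rossby_radius(g_prime, h, lat_deg) / shelf_width_m

# Hypothetical mid-latitude plume: g' = 0.1 m/s^2, 10 m deep, 50 km shelf
print(sp_like_number(0.1, 10.0, 45.0, 50e3))
```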
Otsuka, Masaaki; Ueta, Toshiya; van Hoof, Peter A M; Sahai, Raghvendra; Aleman, Isabel; Zijlstra, Albert A; Chu, You-Hua; Villaver, Eva; Leal-Ferreira, Marcelo L; Kastner, Joel; Szczerba, Ryszard; Exter, Katrina M
2017-08-01
We perform a comprehensive analysis of the planetary nebula (PN) NGC 6781 to investigate the physical conditions of each of its ionized, atomic, and molecular gas and dust components and the object's evolution, based on panchromatic observational data ranging from UV to radio. Empirical nebular elemental abundances, compared with theoretical predictions via nucleosynthesis models of asymptotic giant branch (AGB) stars, indicate that the progenitor is a solar-metallicity, 2.25-3.0 M⊙ initial-mass star. We derive the best-fit distance of 0.46 kpc by fitting the stellar luminosity (as a function of the distance and effective temperature of the central star) with the adopted post-AGB evolutionary tracks. Our excitation energy diagram analysis indicates high-excitation temperatures in the photodissociation region (PDR) beyond the ionized part of the nebula, suggesting extra heating by shock interactions between the slow AGB wind and the fast PN wind. Through iterative fitting using the Cloudy code with empirically derived constraints, we find the best-fit dusty photoionization model of the object that would inclusively reproduce all of the adopted panchromatic observational data. The estimated total gas mass (0.41 M⊙) corresponds to the mass ejected during the last AGB thermal pulse event predicted for a 2.5 M⊙ initial-mass star. A significant fraction of the total mass (about 70%) is found to exist in the PDR, demonstrating the critical importance of the PDR in PNe that are generally recognized as the hallmark of ionized/H+ regions.
NASA Astrophysics Data System (ADS)
Bombaci, Ignazio; Logoteta, Domenico
2018-02-01
Aims: We report a new microscopic equation of state (EOS) of dense symmetric nuclear matter, pure neutron matter, and asymmetric and β-stable nuclear matter at zero temperature using recent realistic two-body and three-body nuclear interactions derived in the framework of chiral perturbation theory (ChPT) and including the Δ(1232) isobar intermediate state. This EOS is provided in tabular form and in parametrized form ready for use in numerical general relativity simulations of binary neutron star mergers. Here we use our new EOS for β-stable nuclear matter to compute various structural properties of non-rotating neutron stars. Methods: The EOS is derived using the Brueckner-Bethe-Goldstone quantum many-body theory in the Brueckner-Hartree-Fock approximation. Neutron star properties are next computed solving numerically the Tolman-Oppenheimer-Volkoff structure equations. Results: Our EOS models are able to reproduce the empirical saturation point of symmetric nuclear matter, the symmetry energy Esym, and its slope parameter L at the empirical saturation density n0. In addition, our EOS models are compatible with experimental data from collisions between heavy nuclei at energies ranging from a few tens of MeV up to several hundreds of MeV per nucleon. These experiments provide a selective test for constraining the nuclear EOS up to 4n0. Our EOS models are consistent with present measured neutron star masses and particularly with the mass M = 2.01 ± 0.04 M⊙ of the neutron star in PSR J0348+0432.
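Given any EOS P(ρ), the structure calculation itself is compact: integrate the Tolman-Oppenheimer-Volkoff equations outward from a chosen central density until the pressure vanishes. The sketch below uses a toy polytrope in place of the paper's tabulated microscopic EOS and approximates the mass-energy density by the rest-mass density:

```python
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-11, 2.998e8                      # SI units

# Toy polytrope P = K*rho^Gamma standing in for the tabulated microscopic
# EOS; K, Gamma and the central density are illustrative values only
K, Gamma, rho_c = 1.0e-2, 2.0, 7.0e17          # rho in kg/m^3, P in Pa

def tov(r, y):
    """TOV equations, y = [P, m]; mass-energy density approximated here by
    the rest-mass density (internal energy neglected for brevity)."""
    p, m = y
    rho = (max(p, 0.0) / K) ** (1.0 / Gamma)
    dpdr = (-G * (rho + p / c**2) * (m + 4.0 * np.pi * r**3 * p / c**2)
            / (r**2 * (1.0 - 2.0 * G * m / (r * c**2))))
    return [dpdr, 4.0 * np.pi * r**2 * rho]

p_c = K * rho_c**Gamma
surface = lambda r, y: y[0] - 1e-10 * p_c      # stop where pressure vanishes
surface.terminal, surface.direction = True, -1

sol = solve_ivp(tov, [1.0, 50e3], [p_c, 0.0], events=surface,
                rtol=1e-8, max_step=100.0)
print(f"R = {sol.t_events[0][0] / 1e3:.1f} km, "
      f"M = {sol.y[1, -1] / 1.989e30:.2f} Msun")
```

Repeating the integration over a range of central densities traces out the mass-radius curve against which observed masses such as PSR J0348+0432 are compared.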
NASA Astrophysics Data System (ADS)
Bora, S. S.; Cotton, F.; Scherbaum, F.; Kuehn, N. M.
2016-12-01
Adjustment of median ground motion prediction equations (GMPEs) from data-rich (host) regions to data-poor (target) regions is one of the major challenges that remain in the current practice of engineering seismology and seismic hazard analysis. Fourier spectral representation of ground motion provides a solution to the adjustment problem that is physically transparent and consistent with the concepts of linear system theory. It also provides a direct interface for appreciating the physically expected influence of seismological parameters on ground motion. In the present study, we derive an empirical Fourier model for computing regionally adjustable response spectral ordinates based on random vibration theory (RVT) from shallow crustal earthquakes in active tectonic regions, following the approach of Bora et al. (2014, 2015). For this purpose, we use an expanded NGA-West2 database with M 3.2-7.9 earthquakes at distances ranging from 0 to 300 km. A mixed-effects regression technique is employed to further explore various components of variability. The NGA-West2 database, expanded over a wide magnitude range, provides a better understanding (and constraint) of the source scaling of ground motion. The large global volume of the database also allows investigating regional patterns in the distance-dependent attenuation (i.e., geometrical spreading and inelastic attenuation) of ground motion as well as in the source parameters (e.g., magnitude and stress drop). Furthermore, event-wise variability and its correlation with the stress parameter are investigated. Finally, the application of the derived Fourier model in generating adjustable response spectra is shown.
Pearce, Marcus T
2018-05-11
Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception-expectation, emotion, memory, similarity, segmentation, and meter-can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
On the effects of nonlinear boundary conditions in diffusive logistic equations on bounded domains
NASA Astrophysics Data System (ADS)
Cantrell, Robert Stephen; Cosner, Chris
We study a diffusive logistic equation with nonlinear boundary conditions. The equation arises as a model for a population that grows logistically inside a patch and crosses the patch boundary at a rate that depends on the population density. Specifically, the rate at which the population crosses the boundary is assumed to decrease as the density of the population increases. The model is motivated by empirical work on the Glanville fritillary butterfly. We derive local and global bifurcation results which show that the model can have multiple equilibria and in some parameter ranges can support Allee effects. The analysis leads to eigenvalue problems with nonstandard boundary conditions.
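One plausible formulation consistent with this description (the precise nonlinearity used in the paper may differ) is a logistic reaction-diffusion equation whose boundary flux coefficient decreases with density:

```latex
\begin{aligned}
  u_t &= D\,\Delta u + r\,u\left(1 - \frac{u}{K}\right) && \text{in } \Omega,\\
  D\,\frac{\partial u}{\partial n} &= -\,\gamma(u)\,u && \text{on } \partial\Omega,
  \qquad \gamma'(u) < 0.
\end{aligned}
```

Here γ(u) is the density-dependent crossing rate; the condition γ'(u) < 0 encodes the assumption that individuals become less likely to leave the patch as the population density increases.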
An investigation into exoplanet transits and uncertainties
NASA Astrophysics Data System (ADS)
Ji, Y.; Banks, T.; Budding, E.; Rhodes, M. D.
2017-06-01
A simple transit model is described along with tests of this model against published results for 4 exoplanet systems (Kepler-1, 2, 8, and 77). Data from the Kepler mission are used. The Markov Chain Monte Carlo (MCMC) method is applied to obtain realistic error estimates. Optimisation of limb darkening coefficients is subject to data quality. It is more likely for MCMC to derive an empirical limb darkening coefficient for light curves with S/N (signal to noise) above 15. Finally, the model is applied to Kepler data for 4 Kepler candidate systems (KOI 760.01, 767.01, 802.01, and 824.01) with previously unpublished results. Error estimates for these systems are obtained via the MCMC method.
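A stripped-down version of this workflow, fitting a transit model to a light curve and reading parameter uncertainties off the MCMC posterior, assuming the emcee package and a simple box-shaped transit in place of a full limb-darkened model:

```python
import numpy as np
import emcee

def box_transit(t, t0, depth, duration):
    """Simplest box-shaped transit: unit flux with a flat dip."""
    flux = np.ones_like(t)
    flux[np.abs(t - t0) < duration / 2.0] -= depth
    return flux

# Synthetic stand-in for a Kepler light curve
rng = np.random.default_rng(1)
t = np.linspace(-0.5, 0.5, 500)
y = box_transit(t, 0.0, 0.01, 0.12) + rng.normal(0.0, 0.001, t.size)

def log_prob(theta):
    t0, depth, dur = theta
    if not (-0.2 < t0 < 0.2 and 0.0 < depth < 0.1 and 0.01 < dur < 0.3):
        return -np.inf                          # flat priors
    resid = y - box_transit(t, t0, depth, dur)
    return -0.5 * np.sum((resid / 0.001) ** 2)

ndim, nwalkers = 3, 32
p0 = np.array([0.01, 0.008, 0.10]) + 1e-4 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)

samples = sampler.get_chain(discard=500, flat=True)
print(np.percentile(samples, [16, 50, 84], axis=0))   # realistic error bars
```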
Characterization of Nanoscale Gas Transport in Shale Formations
NASA Astrophysics Data System (ADS)
Chai, D.; Li, X.
2017-12-01
Non-Darcy flow behavior is commonly observed in the nano-sized pores of the matrix. Most existing gas flow models characterize non-Darcy flow by empirical or semi-empirical methods without considering the real gas effect. In this paper, a novel layered model with physical meaning is proposed for both ideal and real gas transport in nanopores. It can be further coupled with hydraulic fracturing models and consequently benefit storage evaluation and production prediction for shale gas recovery. It is hypothesized that a nanotube can be divided into a central circular zone, where viscous flow dominates owing to intermolecular collisions, and an outer annular zone, where Knudsen diffusion dominates because of collisions between molecules and the wall. The flux is derived by integrating the two zones, joined at a virtual boundary. Subsequently, the model is modified by incorporating the slip effect, real gas effect, porosity distribution, and tortuosity. Meanwhile, a multi-objective optimization method (MOP) is applied to assist the validation of the analytical model by searching for fitting parameters that are highly localized and contain significant uncertainties. The apparent permeability is finally derived and analyzed with various impact factors. The developed nanoscale gas transport model is well validated by flux data collected from both laboratory experiments and molecular simulations over the entire spectrum of flow regimes. The total molar flux decreases by as much as 43.8% when the real gas effect is considered in the model; this effect is found to be more significant as pore size shrinks. Knudsen diffusion accounts for more than 60% of the total gas flux when pressure is lower than 0.2 MPa and pore size is smaller than 50 nm. Overall, the apparent permeability is found to decrease with pressure, though it rarely changes when pressure is higher than 5.0 MPa and pore size is larger than 50 nm.
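The layered idea above, slip-corrected viscous flow in the core plus Knudsen diffusion near the wall, can be caricatured as a superposition of the two transport mechanisms in a single tube. The combination rule and constants below are illustrative, not the paper's derivation:

```python
import numpy as np

kB, R = 1.381e-23, 8.314

def apparent_permeability(d, p, T, M=0.016, mu=1.1e-5, delta=0.38e-9):
    """Illustrative nanotube apparent permeability (m^2) as a superposition
    of slip-corrected viscous flow and Knudsen diffusion (methane defaults).
    d: pore diameter (m), p: pressure (Pa), T: temperature (K),
    M: molar mass (kg/mol), mu: viscosity (Pa*s), delta: molecule size (m)."""
    lam = kB * T / (np.sqrt(2.0) * np.pi * delta**2 * p)   # mean free path
    kn = lam / d                                           # Knudsen number
    k_visc = d**2 / 32.0 * (1.0 + 4.0 * kn)    # Hagen-Poiseuille + slip term
    d_kn = (d / 3.0) * np.sqrt(8.0 * R * T / (np.pi * M))  # Knudsen diffusivity
    k_knud = d_kn * mu / p                     # as an equivalent permeability
    return k_visc + k_knud, kn

for p_mpa in (0.2, 1.0, 5.0):
    k, kn = apparent_permeability(20e-9, p_mpa * 1e6, 350.0)
    print(f"p = {p_mpa} MPa: Kn = {kn:.2f}, k_app = {k:.2e} m^2")
```

The printout illustrates the qualitative trends reported above: the Knudsen number, and with it the Knudsen contribution to the apparent permeability, grows as pressure drops.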
VLBI-derived troposphere parameters during CONT08
NASA Astrophysics Data System (ADS)
Heinkelmann, R.; Böhm, J.; Bolotin, S.; Engelhardt, G.; Haas, R.; Lanotte, R.; MacMillan, D. S.; Negusini, M.; Skurikhina, E.; Titov, O.; Schuh, H.
2011-07-01
Time-series of zenith wet and total troposphere delays as well as north and east gradients are compared, and zenith total delays (ZTD) are combined on the level of parameter estimates. Input data sets are provided by ten Analysis Centers (ACs) of the International VLBI Service for Geodesy and Astrometry (IVS) for the CONT08 campaign (12-26 August 2008). The inconsistent usage of meteorological data and models, such as mapping functions, causes systematics among the ACs, and differing parameterizations and constraints add noise to the troposphere parameter estimates. The empirical standard deviation of ZTD among the ACs with regard to an unweighted mean is 4.6 mm. The ratio of the analysis noise to the observation noise assessed by the operator/software impact (OSI) model is about 2.5. These and other effects have to be accounted for to improve the intra-technique combination of VLBI-derived troposphere parameters. While the largest systematics caused by inconsistent usage of meteorological data can be avoided and the application of different mapping functions can be considered by applying empirical corrections, the noise has to be modeled in the stochastic model of the intra-technique combination. The application of different stochastic models shows no significant effects on the combined parameters but results in different mean formal errors: the mean formal errors of the combined ZTD are 2.3 mm (unweighted), 4.4 mm (diagonal), 8.6 mm [variance component (VC) estimation], and 8.6 mm (operator/software impact, OSI). On the one hand, the OSI model, i.e. the inclusion of off-diagonal elements in the cofactor matrix, considers the reapplication of observations, yielding a factor of about two for mean formal errors as compared to the diagonal approach. On the other hand, the combination based on VC estimation shows large differences among the VCs and exhibits a comparable scaling of formal errors. Thus, for the combination of troposphere parameters, a combination of the two extensions of the stochastic model is recommended.
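At its simplest, combination on the level of parameter estimates with a diagonal stochastic model reduces to an inverse-variance weighted mean per epoch. A minimal sketch with hypothetical values:

```python
import numpy as np

# ZTD estimates (mm) from several analysis centers for one epoch,
# with their formal errors; all values hypothetical
ztd = np.array([2405.1, 2409.8, 2403.6, 2407.2])
sigma = np.array([3.0, 5.0, 2.5, 4.0])

w = 1.0 / sigma**2
ztd_comb = np.sum(w * ztd) / np.sum(w)      # weighted mean (diagonal model)
sigma_comb = np.sqrt(1.0 / np.sum(w))       # combined formal error
print(f"{ztd_comb:.1f} +/- {sigma_comb:.1f} mm")
```

The OSI and VC extensions discussed above modify exactly this weighting: off-diagonal cofactors account for reused observations, and variance components rescale each AC's sigma before the mean is formed.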
Assessment of microclimate conditions under artificial shades in a ginseng field.
Lee, Kyu Jong; Lee, Byun-Woo; Kang, Je Yong; Lee, Dong Yun; Jang, Soo Won; Kim, Kwang Soo
2016-01-01
Knowledge on microclimate conditions under artificial shades in a ginseng field would facilitate climate-aware management of ginseng production. Weather data were measured under the shade and outside the shade at two fields located in Gochang-gun and Jeongeup-si, Korea, in 2011 and 2012 seasons to assess temperature and humidity conditions under the shade. An empirical approach was developed and validated for the estimation of leaf wetness duration (LWD) using weather measurements outside the shade as inputs to the model. Air temperature and relative humidity were similar between under the shade and outside the shade. For example, temperature conditions favorable for ginseng growth, e.g., between 8°C and 27°C, occurred slightly less frequently in hours during night times under the shade (91%) than outside (92%). Humidity conditions favorable for development of a foliar disease, e.g., relative humidity > 70%, occurred slightly more frequently under the shade (84%) than outside (82%). Effectiveness of correction schemes to an empirical LWD model differed by rainfall conditions for the estimation of LWD under the shade using weather measurements outside the shade as inputs to the model. During dew eligible days, a correction scheme to an empirical LWD model was slightly effective (10%) in reducing estimation errors under the shade. However, another correction approach during rainfall eligible days reduced errors of LWD estimation by 17%. Weather measurements outside the shade and LWD estimates derived from these measurements would be useful as inputs for decision support systems to predict ginseng growth and disease development.
Urbanowicz, Richard A; McClure, C Patrick; King, Barnabas; Mason, Christopher P; Ball, Jonathan K; Tarr, Alexander W
2016-09-01
Retrovirus pseudotypes are a highly tractable model used to study the entry pathways of enveloped viruses. This model has been extensively applied to the study of the hepatitis C virus (HCV) entry pathway, preclinical screening of antiviral antibodies and for assessing the phenotype of patient-derived viruses using HCV pseudoparticles (HCVpp) possessing the HCV E1 and E2 glycoproteins. However, not all patient-isolated clones produce particles that are infectious in this model. This study investigated factors that might limit phenotyping of patient-isolated HCV glycoproteins. Genetically related HCV glycoproteins from quasispecies in individual patients were discovered to behave very differently in this entry model. Empirical optimization of the ratio of packaging construct and glycoprotein-encoding plasmid was required for successful HCVpp genesis for different clones. The selection of retroviral packaging construct also influenced the function of HCV pseudoparticles. Some glycoprotein constructs tolerated a wide range of assay parameters, while others were much more sensitive to alterations. Furthermore, glycoproteins previously characterized as unable to mediate entry were found to be functional. These findings were validated using chimeric cell-cultured HCV bearing these glycoproteins. Using the same empirical approach we demonstrated that generation of infectious ebolavirus pseudoviruses (EBOVpv) was also sensitive to the amount and ratio of plasmids used, and that protocols for optimal production of these pseudoviruses are dependent on the exact virus glycoprotein construct. These findings demonstrate that it is crucial for studies utilizing pseudoviruses to conduct empirical optimization of pseudotype production for each specific glycoprotein sequence to achieve optimal titres and facilitate accurate phenotyping.
NASA Technical Reports Server (NTRS)
Nieves-Chinchilla, T.; Colaninno, R.; Vourlidas, A.; Szabo, A.; Lepping, R. P.; Boardsen, S. A.; Anderson, B. J.; Korth, H.
2012-01-01
During June 16-21, 2010, an Earth-directed Coronal Mass Ejection (CME) event was observed by instruments onboard STEREO, SOHO, MESSENGER and Wind. This event was the first direct detection of a rotating CME in the middle and outer corona. Here, we carry out a comprehensive analysis of the evolution of the CME in the interplanetary medium, comparing in-situ and remote observations with analytical models and three-dimensional reconstructions. In particular, we investigate the parallel and perpendicular cross-section expansion of the CME from the corona through the heliosphere up to 1 AU. We use height-time measurements and the Gradual Cylindrical Shell (GCS) technique to model the imaging observations, remove the projection effects, and derive the 3-dimensional extent of the event. Then, we compare the results with in-situ analytical Magnetic Cloud (MC) models, and with geometrical predictions from past works. We find that the parallel (along the propagation plane) cross-section expansion agrees well with the in-situ model and with the Bothmer & Schwenn [1998] empirical relationship based on in-situ observations between 0.3 and 1 AU. Our results effectively extend this empirical relationship to about 5 solar radii. The expansion of the perpendicular diameter agrees very well with the in-situ results at MESSENGER (~0.5 AU) but not at 1 AU. We also find an empirical relationship for the perpendicular expansion that differs slightly from Bothmer & Schwenn [1998]. More importantly, we find no evidence that the CME undergoes a significant latitudinal over-expansion, as is commonly assumed.
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data
Dorazio, Robert M.
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
Assessment of microclimate conditions under artificial shades in a ginseng field
Lee, Kyu Jong; Lee, Byun-Woo; Kang, Je Yong; Lee, Dong Yun; Jang, Soo Won; Kim, Kwang Soo
2015-01-01
Background Knowledge on microclimate conditions under artificial shades in a ginseng field would facilitate climate-aware management of ginseng production. Methods Weather data were measured under the shade and outside the shade at two fields located in Gochang-gun and Jeongeup-si, Korea, in 2011 and 2012 seasons to assess temperature and humidity conditions under the shade. An empirical approach was developed and validated for the estimation of leaf wetness duration (LWD) using weather measurements outside the shade as inputs to the model. Results Air temperature and relative humidity were similar between under the shade and outside the shade. For example, temperature conditions favorable for ginseng growth, e.g., between 8°C and 27°C, occurred slightly less frequently in hours during night times under the shade (91%) than outside (92%). Humidity conditions favorable for development of a foliar disease, e.g., relative humidity > 70%, occurred slightly more frequently under the shade (84%) than outside (82%). Effectiveness of correction schemes to an empirical LWD model differed by rainfall conditions for the estimation of LWD under the shade using weather measurements outside the shade as inputs to the model. During dew eligible days, a correction scheme to an empirical LWD model was slightly effective (10%) in reducing estimation errors under the shade. However, another correction approach during rainfall eligible days reduced errors of LWD estimation by 17%. Conclusion Weather measurements outside the shade and LWD estimates derived from these measurements would be useful as inputs for decision support systems to predict ginseng growth and disease development. PMID:26843827
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data.
Dorazio, Robert M
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar - and often identical - inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
Ipsen, Andreas
2017-02-03
Here, the mass peak centroid is a quantity that is at the core of mass spectrometry (MS). However, despite its central status in the field, models of its statistical distribution are often chosen quite arbitrarily and without attempts at establishing a proper theoretical justification for their use. Recent work has demonstrated that for mass spectrometers employing analog-to-digital converters (ADCs) and electron multipliers, the statistical distribution of the mass peak intensity can be described via a relatively simple model derived essentially from first principles. Building on this result, the following article derives the corresponding statistical distribution for the mass peak centroids of such instruments. It is found that for increasing signal strength, the centroid distribution converges to a Gaussian distribution whose mean and variance are determined by physically meaningful parameters and which in turn determine bias and variability of the m/z measurements of the instrument. Through the introduction of the concept of “pulse-peak correlation”, the model also elucidates the complicated relationship between the shape of the voltage pulses produced by the preamplifier and the mean and variance of the centroid distribution. The predictions of the model are validated with empirical data and with Monte Carlo simulations.
Support for viral persistence in bats from age-specific serology and models of maternal immunity.
Peel, Alison J; Baker, Kate S; Hayman, David T S; Broder, Christopher C; Cunningham, Andrew A; Fooks, Anthony R; Garnier, Romain; Wood, James L N; Restif, Olivier
2018-03-01
Spatiotemporally-localised prediction of virus emergence from wildlife requires focused studies on the ecology and immunology of reservoir hosts in their native habitat. Reliable predictions from mathematical models remain difficult in most systems due to a dearth of appropriate empirical data. Our goal was to study the circulation and immune dynamics of zoonotic viruses in bat populations and investigate the effects of maternally-derived and acquired immunity on viral persistence. Using rare age-specific serological data from wild-caught Eidolon helvum fruit bats as a case study, we estimated viral transmission parameters for a stochastic infection model. We estimated mean durations of around 6 months for maternally-derived immunity to Lagos bat virus and African henipavirus, whereas acquired immunity was long-lasting (Lagos bat virus: mean 12 years, henipavirus: mean 4 years). In the presence of a seasonal birth pulse, the effect of maternally-derived immunity on virus persistence within modelled bat populations was highly dependent on transmission characteristics. To explain previous reports of viral persistence within small natural and captive E. helvum populations, we hypothesise that some bats must experience prolonged infectious periods or within-host latency. By further elucidating plausible mechanisms of virus persistence in bat populations, we contribute to guidance of future field studies.
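The interplay of maternal and acquired immunity described here can be illustrated with a compact stochastic compartment model. The sketch below is a minimal discrete-time MSIR simulation with a seasonal birth pulse; every rate and population size is an illustrative placeholder, not one of the paper's estimated parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# All rates are per day and purely illustrative, not the paper's estimates.
beta  = 0.3              # transmission rate
gamma = 1 / 14           # recovery rate
omega = 1 / 180          # waning of maternal antibodies (~6-month mean duration)
mu    = 1 / (10 * 365)   # background mortality
b_rate = 2.0             # mean births per day during the pulse

M, S, I, R = 100, 900, 5, 0   # maternally immune, susceptible, infected, recovered
for day in range(15 * 365):
    N = M + S + I + R
    births = rng.poisson(b_rate) if 150 <= (day % 365) < 210 else 0  # birth pulse
    m_new = rng.binomial(births, R / N)        # pups of immune mothers enter M
    infections = rng.binomial(S, 1 - np.exp(-beta * I / N))
    recoveries = rng.binomial(I, 1 - np.exp(-gamma))
    waned      = rng.binomial(M, 1 - np.exp(-omega))
    # independent event draws are adequate for large populations
    dM, dS, dI, dR = (rng.binomial(x, 1 - np.exp(-mu)) for x in (M, S, I, R))
    M += m_new - waned - dM
    S += (births - m_new) + waned - infections - dS
    I += infections - recoveries - dI
    R += recoveries - dR

print("virus persisted" if I > 0 else "virus went extinct", "| final I =", I)
```

Re-running such a simulation across transmission settings makes the paper's point concrete: whether maternal immunity promotes or hinders persistence depends on how the birth pulse interacts with the transmission characteristics.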
A steady state model of agricultural waste pyrolysis: A mini review.
Trninić, M; Jovović, A; Stojiljković, D
2016-09-01
Agricultural waste is one of the main renewable energy resources available, especially in an agricultural country such as Serbia. Pyrolysis has already been considered as an attractive alternative for disposal of agricultural waste, since the technique can convert this special biomass resource into granular charcoal, non-condensable gases and pyrolysis oils, which could furnish profitable energy and chemical products owing to their high calorific value. In this regard, the development of thermochemical processes requires a good understanding of pyrolysis mechanisms. Experimental and some literature data on the pyrolysis characteristics of corn cob and several other agricultural residues under inert atmosphere were structured and analysed in order to obtain conversion behaviour patterns of agricultural residues during pyrolysis within the temperature range from 300 °C to 1000 °C. Based on experimental and literature data analysis, empirical relationships were derived, including relations between the temperature of the process and yields of charcoal, tar and gas (CO2, CO, H2 and CH4). An analytical semi-empirical model was then used as a tool to analyse the general trends of biomass pyrolysis. Although this semi-empirical model needs further refinement before application to all types of biomass, its prediction capability was in good agreement with results obtained by the literature review. The compact representation could be used in other applications, to conveniently extrapolate and interpolate these results to other temperatures and biomass types. © The Author(s) 2016.
Leading change: a concept analysis.
Nelson-Brantley, Heather V; Ford, Debra J
2017-04-01
To report an analysis of the concept of leading change. Nurses have been called to lead change to advance the health of individuals, populations, and systems. Conceptual clarity about leading change in the context of nursing and healthcare systems provides an empirical direction for future research and theory development that can advance the science of leadership studies in nursing. Concept analysis. CINAHL, PubMed, PsycINFO, Psychology and Behavioral Sciences Collection, Health Business Elite and Business Source Premier databases were searched using the terms: leading change, transformation, reform, leadership and change. Literature published in English from 2001 to 2015 in the fields of nursing, medicine, organizational studies, business, education, psychology or sociology was included. Walker and Avant's method was used to identify descriptions, antecedents, consequences and empirical referents of the concept. Model, related and contrary cases were developed. Five defining attributes of leading change were identified: (a) individual and collective leadership; (b) operational support; (c) fostering relationships; (d) organizational learning; and (e) balance. Antecedents were external or internal driving forces and organizational readiness. The consequences of leading change included improved organizational performance and outcomes and new organizational culture and values. A theoretical definition and conceptual model of leading change were developed. Future studies that use and test the model may contribute to the refinement of a middle-range theory to advance nursing leadership research and education. From this, empirically derived interventions that prepare and enable nurses to lead change to advance health may be realized. © 2016 John Wiley & Sons Ltd.
Obermaier, Michael; Bandarenka, Aliaksandr S; Lohri-Tymozhynsky, Cyrill
2018-03-21
Electrochemical impedance spectroscopy (EIS) is an indispensable tool for non-destructive operando characterization of Polymer Electrolyte Fuel Cells (PEFCs). However, in order to interpret the PEFC's impedance response and understand the phenomena revealed by EIS, numerous semi-empirical or purely empirical models are used. In this work, a relatively simple model for PEFC cathode catalyst layers in the absence of oxygen has been developed, in which all the equivalent circuit parameters have a clear physical meaning. It is based on: (i) experimental quantification of the catalyst layer pore radii, (ii) application of De Levie's analytical formula to calculate the response of a single pore, (iii) approximating the ionomer distribution within every pore, (iv) accounting for the specific adsorption of sulfonate groups and (v) accounting for a small H2 crossover through ~15 μm ionomer membranes. The derived model has effectively only 6 independent fitting parameters, each with a clear physical meaning. It was used to investigate the cathode catalyst layer and the double layer capacitance at the interface between the ionomer/membrane and Pt-electrocatalyst. The model has demonstrated excellent results in fitting and interpretation of the impedance data under different relative humidities. A simple script enabling fitting of impedance data is provided as supporting information.
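The building block of such a catalyst-layer model is De Levie's transmission-line result for a single cylindrical pore. A minimal sketch follows; the parameter values are illustrative only, and the paper's full model additionally folds in the ionomer distribution, sulfonate adsorption, and H2 crossover:

```python
import numpy as np

def de_levie_pore(freq_hz, r_ion, c_dl, length):
    """Impedance of one cylindrical pore in the blocking (oxygen-free) limit.

    De Levie's transmission-line result: Z = sqrt(r/(jw c)) * coth(L sqrt(jw r c)),
    with r the ionic resistance per unit pore length (ohm/m), c the double-layer
    capacitance per unit length (F/m) and L the pore length (m).
    """
    jw = 1j * 2 * np.pi * np.asarray(freq_hz)
    k = np.sqrt(jw * r_ion * c_dl)
    return np.sqrt(r_ion / (jw * c_dl)) / np.tanh(length * k)

# Illustrative parameters only; in the paper r and c follow from the measured
# pore radii and the approximated ionomer distribution within each pore.
f = np.logspace(-1, 4, 200)
z = de_levie_pore(f, r_ion=1e9, c_dl=1e-7, length=5e-6)
print(f"|Z| at {f[0]:.1f} Hz: {abs(z[0]):.3e} ohm, phase {np.angle(z[0], deg=True):.1f} deg")
print(f"|Z| at {f[-1]:.0f} Hz: {abs(z[-1]):.3e} ohm, phase {np.angle(z[-1], deg=True):.1f} deg")
```

The characteristic signature, a 45° Warburg-like branch at high frequency rolling over to capacitive behavior at low frequency, is what allows the pore-scale parameters to be read off measured spectra.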
An empirical model of electron and ion fluxes derived from observations at geosynchronous orbit
Denton, M. H.; Thomsen, M. F.; Jordanova, V. K.; ...
2015-04-01
Knowledge of the plasma fluxes at geosynchronous orbit is important to both scientific and operational investigations. We present a new empirical model of the ion flux and the electron flux at geosynchronous orbit (GEO) in the energy range ~1 eV to ~40 keV. The model is based on a total of 82 satellite-years of observations from the Magnetospheric Plasma Analyzer instruments on Los Alamos National Laboratory satellites at GEO. These data are assigned to a fixed grid of 24 local-times and 40 energies, at all possible values of Kp. Bi-linear interpolation is used between grid points to provide the ion flux and the electron flux values at any energy and local-time, for given values of geomagnetic activity (proxied by the 3-hour Kp index) and of solar activity (proxied by the daily F10.7 index). Initial comparison of the electron flux from the model with data from a Compact Environmental Anomaly Sensor II (CEASE-II), also located at geosynchronous orbit, indicates a good match during both quiet and disturbed periods. The model is available for distribution as a FORTRAN code that can be modified to suit user-requirements.
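The interpolation step can be sketched compactly. The code below bilinearly interpolates a gridded flux table in local time (wrapping at midnight) and in log10 energy; the table here is a random stand-in for the model's real gridded values, and interpolating in log-energy is an assumption of this sketch rather than a documented choice of the model:

```python
import numpy as np

# Hypothetical stand-in for one of the model's gridded flux tables (one such
# table per species and activity level); axes are 24 local times and 40 energies.
rng = np.random.default_rng(2)
energies = np.logspace(0, np.log10(4e4), 40)       # ~1 eV .. ~40 keV
flux_table = rng.lognormal(mean=20, sigma=1, size=(24, 40))

def flux(lt_hours, energy_ev):
    """Bilinear interpolation in local time (wrapping at 24 h) and log10 energy."""
    i = lt_hours % 24.0
    i0 = int(i); fi = i - i0
    i1 = (i0 + 1) % 24                             # wrap across midnight
    j = float(np.interp(np.log10(energy_ev), np.log10(energies), np.arange(40.0)))
    j0 = int(j); fj = j - j0
    j1 = min(j0 + 1, 39)
    return ((1 - fi) * (1 - fj) * flux_table[i0, j0] + fi * (1 - fj) * flux_table[i1, j0]
            + (1 - fi) * fj * flux_table[i0, j1] + fi * fj * flux_table[i1, j1])

print(f"flux at 13:30 LT, 2.3 keV: {flux(13.5, 2.3e3):.3e}")
```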
Modeling of Inverted Annular Film Boiling using an integral method
NASA Astrophysics Data System (ADS)
Sridharan, Arunkumar
In modeling Inverted Annular Film Boiling (IAFB), several important phenomena, such as the interaction between the liquid and vapor phases and the irregular nature of the interface, greatly influence momentum and heat transfer at the interface and need to be accounted for. However, due to the complexity of these phenomena, they were not modeled in previous studies. Since two-phase heat transfer equations and relationships rely heavily on experimental data, many closure relationships used in previous studies are empirical in nature. Also, in deriving these relationships, the experimental data were often extrapolated beyond the intended range of conditions, causing errors in predictions. In some cases, empirical correlations derived from situations other than IAFB, and whose applicability to IAFB was questionable, were used. Moreover, arbitrary constants were introduced in previous models to provide a good fit to the experimental data. These constants have no physical basis, leading to questionable accuracy in the model predictions. In the present work, IAFB is modeled using an integral method. A two-dimensional formulation of IAFB is presented. Separate equations for the conservation of mass, momentum and energy are derived from first principles for the vapor film and the liquid core. Turbulence is incorporated in the formulation. The system of second-order partial differential equations is integrated over the radial direction to obtain a system of integro-differential equations. In order to solve the system of equations, second-order polynomial profiles are used to describe the nondimensional velocity and temperatures. The unknown coefficients in the profiles are functions of the axial direction alone. Using the boundary conditions that govern the physical problem, equations for the unknown coefficients are derived in terms of the primary dependent variables: wall shear stress, interfacial shear stress, film thickness, pressure, wall temperature and the mass transfer rate due to evaporation. A system of nonlinear first-order coupled ordinary differential equations is obtained. Due to the inherent mathematical complexity of the system of equations, simplifying assumptions are made to obtain a numerical solution. The system of equations is solved numerically to obtain values of the unknown quantities at each subsequent axial location. Derived quantities such as void fraction and heat transfer coefficient are calculated at each axial location. The calculation is terminated when the void fraction reaches 0.6, the upper limit of IAFB. The results obtained agree with the observed experimental trends: void fraction increases along the heated length, while the heat transfer coefficient drops, as expected, due to the increased resistance of the vapor film.
The urban energy balance of a lightweight low-rise neighborhood in Andacollo, Chile
NASA Astrophysics Data System (ADS)
Crawford, Ben; Krayenhoff, E. Scott; Cordy, Paul
2018-01-01
Worldwide, the majority of rapidly growing neighborhoods are found in the Global South. They often exhibit different building construction and development patterns than the Global North, and urban climate research in many such neighborhoods has to date been sparse. This study presents local-scale observations of net radiation (Q*) and sensible heat flux (Q_H) from a lightweight low-rise neighborhood in the desert climate of Andacollo, Chile, and compares observations with results from a process-based urban energy-balance model (TUF3D) and a local-scale empirical model (LUMPS) for a 14-day period in autumn 2009. This is a unique neighborhood-climate combination in the urban energy-balance literature, and results show good agreement between observations and models for Q* and Q_H. The unmeasured latent heat flux (Q_E) is modeled with an updated version of TUF3D and two versions of LUMPS (a forward and inverse application). Both LUMPS implementations predict slightly higher Q_E than TUF3D, which may indicate a bias in LUMPS parameters towards mid-latitude, non-desert climates. Overall, the energy balance is dominated by sensible and storage heat fluxes, with mean daytime Bowen ratios ranging from 2.57 (observed Q_H/LUMPS Q_E) to 3.46 (TUF3D). Storage heat flux (ΔQ_S) is modeled with TUF3D, the empirical objective hysteresis model (OHM), and the inverse LUMPS implementation. Agreement between models is generally good; the OHM-predicted diurnal cycle deviates somewhat relative to the other two models, likely because OHM coefficients are not specified for the roof and wall construction materials found in this neighborhood. New facet-scale and local-scale OHM coefficients are developed based on modeled ΔQ_S and observed Q*. Coefficients in the empirical models OHM and LUMPS are derived from observations in primarily non-desert climates in European/North American neighborhoods and must be updated as measurements in lightweight low-rise (and other) neighborhoods in various climates become available.
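For reference, the objective hysteresis model parameterizes the storage heat flux from net radiation and its rate of change, ΔQ_S = a1·Q* + a2·(dQ*/dt) + a3. A minimal sketch with placeholder coefficients and an idealized diurnal cycle (real coefficients are surface-specific, which is exactly why the study derives new ones for this construction type):

```python
import numpy as np

def ohm_storage_flux(q_star, dt_hours, a1, a2, a3):
    """Objective hysteresis model: dQs = a1*Q* + a2*(dQ*/dt) + a3.

    q_star   : net all-wave radiation series (W m-2)
    dt_hours : sampling interval (h); a2 then carries units of hours
    a1, a3   : dimensionless slope and W m-2 offset, both surface-dependent
    """
    dq_dt = np.gradient(q_star, dt_hours)          # W m-2 per hour
    return a1 * q_star + a2 * dq_dt + a3

# Placeholder coefficients and an idealized diurnal Q* cycle, illustrative only.
hours = np.arange(0, 24, 0.5)
q_star = 400 * np.clip(np.sin(2 * np.pi * (hours - 6) / 24), -0.2, None)
dqs = ohm_storage_flux(q_star, dt_hours=0.5, a1=0.30, a2=0.30, a3=-25.0)
print(f"storage flux range: {dqs.min():.0f} to {dqs.max():.0f} W m-2")
```

The a2 term is what produces the hysteresis: storage peaks before net radiation does in the morning, a feature a purely proportional model cannot capture.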
Fractal Analysis of Permeability of Unsaturated Fractured Rocks
Jiang, Guoping; Shi, Wei; Huang, Lili
2013-01-01
A physical conceptual model for water retention in fractured rocks is derived while taking into account the effect of pore size distribution and tortuosity of capillaries. A formula for calculating the relative hydraulic conductivity of fractured rock is given based on fractal theory. Choosing an appropriate capillary pressure-saturation curve remains an open issue in research on unsaturated fractured rock masses. The geometric pattern of the fracture bulk is described based on the fractal distribution of tortuosity. The resulting water content expression is then used to estimate the unsaturated hydraulic conductivity of the fractured medium based on the well-known model of Burdine. It is found that for large enough ranges of fracture apertures the new constitutive model converges to the empirical Brooks-Corey model. PMID:23690746
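The empirical limit named in the abstract is easy to state concretely. Under the Burdine model, the Brooks-Corey relative permeability is K_r = S_e^((2+3λ)/λ) with effective saturation S_e = (p_b/p_c)^λ. A short sketch with illustrative parameter values:

```python
def brooks_corey_kr(pc, pb, lam):
    """Relative permeability from the Brooks-Corey retention curve under the
    Burdine model: Kr = Se**((2 + 3*lam)/lam), with Se = (pb/pc)**lam.

    pc  : capillary pressure; pb : air-entry (bubbling) pressure (same units)
    lam : pore-size distribution index
    """
    if pc <= pb:
        return 1.0
    se = (pb / pc) ** lam                 # effective saturation
    return se ** ((2.0 + 3.0 * lam) / lam)

# Illustrative values (pb = 1, lam = 2): Kr falls off steeply with pc, the
# limiting behavior the fractal model reproduces for wide aperture ranges.
for pc in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(pc, round(brooks_corey_kr(pc, pb=1.0, lam=2.0), 6))
```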
VMF3/GPT3: refined discrete and empirical troposphere mapping functions
NASA Astrophysics Data System (ADS)
Landskron, Daniel; Böhm, Johannes
2018-04-01
Incorrect modeling of troposphere delays is one of the major error sources for space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). Over the years, many approaches have been devised which aim at mapping the delay of radio waves from zenith direction down to the observed elevation angle, so-called mapping functions. This paper contains a new approach intended to refine the currently most important discrete mapping function, the Vienna Mapping Functions 1 (VMF1), which is subsequently referred to as the Vienna Mapping Functions 3 (VMF3). It is designed in such a way as to eliminate shortcomings in the empirical coefficients b and c and in the tuning for the specific elevation angle of 3°. Ray-traced delays from the ray-tracer RADIATE serve as the basis for the calculation of new mapping function coefficients. Comparisons of modeled slant delays demonstrate the ability of VMF3 to approximate the underlying ray-traced delays more accurately than VMF1 does, in particular at low elevation angles. In other words, when the highest precision is required, VMF3 is preferable to VMF1. Aside from revising the discrete form of mapping functions, we also present a new empirical model named Global Pressure and Temperature 3 (GPT3) on a 5°× 5° as well as a 1°× 1° global grid, which is generally based on the same data. Its main components are hydrostatic and wet empirical mapping function coefficients derived from special averaging techniques of the respective (discrete) VMF3 data. In addition, GPT3 also contains a set of meteorological quantities which are adopted as they stand from their predecessor, Global Pressure and Temperature 2 wet. Thus, GPT3 represents a very comprehensive troposphere model which can be used for a series of geodetic as well as meteorological and climatological purposes and is fully consistent with VMF3.
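VMF-type mapping functions share the three-coefficient continued-fraction form (normalized to unity at zenith); the refinement in VMF3 concerns how the coefficients a, b, and c are determined, not the form itself. A sketch with illustrative coefficient values (operational values come from the gridded VMF3/GPT3 products):

```python
import math

def mapping_function(elev_rad, a, b, c):
    """Three-coefficient continued-fraction mapping function (Marini/Herring
    form), normalized to 1 at zenith; the form underlying VMF1 and VMF3."""
    s = math.sin(elev_rad)
    numer = 1 + a / (1 + b / (1 + c))
    denom = s + a / (s + b / (s + c))
    return numer / denom

# Illustrative hydrostatic-like coefficients, not values from the products.
a, b, c = 1.2e-3, 2.9e-3, 62.5e-3
for elev_deg in (90, 30, 10, 5, 3):
    mf = mapping_function(math.radians(elev_deg), a, b, c)
    print(f"{elev_deg:2d} deg elevation: mf = {mf:6.3f}")
```

The rapid growth of the mapping factor below about 10° elevation is why the paper's emphasis on low-elevation accuracy, and on the tuning at 3°, matters.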
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to those of the physical model; the statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
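For readers unfamiliar with the baseline model, the Feddes reduction function is a piecewise-linear stress factor α(h) applied to potential root water uptake. A minimal sketch with illustrative, crop-unspecific thresholds:

```python
def feddes_alpha(h, h1=-10.0, h2=-25.0, h3=-400.0, h4=-15000.0):
    """Piecewise-linear Feddes stress factor alpha(h) multiplying potential
    root water uptake. Pressure head h is in cm (more negative = drier);
    the thresholds here are illustrative, in practice they are crop-specific.

    Uptake is zero above h1 (oxygen stress) and below h4 (wilting), optimal
    between h2 and h3, and ramps linearly in between.
    """
    if h >= h1 or h <= h4:
        return 0.0
    if h > h2:
        return (h1 - h) / (h1 - h2)      # wet-side ramp
    if h >= h3:
        return 1.0                       # optimal range
    return (h - h4) / (h3 - h4)          # dry-side ramp

# Actual layer uptake: S = alpha(h) * S_potential
for h in (-5.0, -20.0, -100.0, -5000.0, -20000.0):
    print(f"h = {h:8.0f} cm -> alpha = {feddes_alpha(h):.3f}")
```

Because α depends only on the local pressure head, the standard Feddes model has no mechanism for compensation, i.e. shifting uptake to wetter layers, which is precisely the behavior the study finds it fails to mimic at medium and high root length density.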
Clinical decision support alert malfunctions: analysis and empirically derived taxonomy.
Wright, Adam; Ai, Angela; Ash, Joan; Wiesen, Jane F; Hickman, Thu-Trang T; Aaron, Skye; McEvoy, Dustin; Borkowsky, Shane; Dissanayake, Pavithra I; Embi, Peter; Galanter, William; Harper, Jeremy; Kassakian, Steve Z; Ramoni, Rachel; Schreiber, Richard; Sirajuddin, Anwar; Bates, David W; Sittig, Dean F
2018-05-01
To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions. We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions. We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common. Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS. CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent CDS malfunctions before they occur or detect and resolve them expediently.
ERIC Educational Resources Information Center
Poitras, Eric; Trevors, Gregory
2012-01-01
Planning, conducting, and reporting leading-edge research requires professionals who are capable of highly skilled reading. This study reports the development of an empirically informed computer-based learning environment designed to foster the acquisition of reading comprehension strategies that mediate expertise in the social sciences. Empirical…
Wood, Jonathan S; Donnell, Eric T; Porter, Richard J
2015-02-01
A variety of different study designs and analysis methods have been used to evaluate the performance of traffic safety countermeasures. The most common study designs and methods include observational before-after studies using the empirical Bayes method and cross-sectional studies using regression models. The propensity scores-potential outcomes framework has recently been proposed as an alternative traffic safety countermeasure evaluation method to address the challenges associated with selection biases that can be part of cross-sectional studies. Crash modification factors derived from the application of all three methods have not yet been compared. This paper compares the results of retrospective, observational evaluations of a traffic safety countermeasure using both before-after and cross-sectional study designs. The paper describes the strengths and limitations of each method, focusing primarily on how each addresses site selection bias, which is a common issue in observational safety studies. The Safety Edge paving technique, which seeks to mitigate crashes related to roadway departure events, is the countermeasure used in the present study to compare the alternative evaluation methods. The results indicated that all three methods yielded results that were consistent with each other and with previous research. The empirical Bayes results had the smallest standard errors. It is concluded that the propensity scores with potential outcomes framework is a viable alternative analysis method to the empirical Bayes before-after study. It should be considered whenever a before-after study is not possible or practical. Copyright © 2014 Elsevier Ltd. All rights reserved.
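The empirical Bayes before-after logic referenced here blends a safety performance function (SPF) prediction with the site's own crash history. A minimal sketch of the standard weighting, with illustrative numbers; the variance correction and multi-year exposure details are omitted:

```python
def eb_expected_crashes(spf_prediction, observed, overdispersion):
    """Empirical Bayes estimate of a site's expected crash count.

    Standard weighting (Hauer-style): w = 1 / (1 + k * mu), where mu is the
    safety performance function (SPF) prediction for similar sites and k the
    SPF's overdispersion parameter; the estimate blends the SPF with the
    site's own history, tempering regression-to-the-mean bias.
    """
    w = 1.0 / (1.0 + overdispersion * spf_prediction)
    return w * spf_prediction + (1.0 - w) * observed

# Illustrative numbers: SPF predicts 4 crashes/period, site recorded 9, k = 0.2.
expected = eb_expected_crashes(4.0, 9.0, 0.2)
cmf = 6.0 / expected    # naive CMF from 6 after-period crashes
print(f"EB expected = {expected:.2f}, naive CMF = {cmf:.2f}")
```

The shrinkage toward the SPF is what shields the EB method from the site-selection bias that motivates the propensity-score alternative in cross-sectional settings.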
METAPHOR: Probability density estimation for machine learning based photometric redshifts
NASA Astrophysics Data System (ADS)
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-06-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but offering the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
NASA Technical Reports Server (NTRS)
Townshend, J. R. G.; Choudhury, B. J.; Tucker, C. J.; Giddings, L.; Justice, C. O.
1989-01-01
Comparisons between the microwave polarized difference temperature (MPDT) derived from 37 GHz band data and the normalized difference vegetation index (NDVI) derived from near-infrared and red bands, drawn from several empirical investigations, are summarized. These indicate the complementary character of the two measures in environmental monitoring. Overall the NDVI is more sensitive to green leaf activity, whereas the MPDT appears also to be related to other elements of the above-ground biomass. Monitoring of hydrological phenomena is carried out much more effectively by the MPDT. Further work is needed to explain spectral and temporal variation in the MPDT, both through modelling and field experiments.
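Both indices are simple band combinations, which is worth stating explicitly: NDVI contrasts near-infrared and red reflectance, while the MPDT is the difference between vertically and horizontally polarized 37 GHz brightness temperatures.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

def mpdt(tb_37v, tb_37h):
    """Microwave polarized difference temperature: vertically minus horizontally
    polarized 37 GHz brightness temperature (K)."""
    return tb_37v - tb_37h

# Illustrative values: a dense green canopy gives high NDVI and a small MPDT,
# since vegetation depolarizes the surface emission.
print(ndvi(nir=0.45, red=0.08))          # ~0.70
print(mpdt(tb_37v=268.0, tb_37h=255.0))  # 13 K
```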
The demand for health: an empirical test of the Grossman model using panel data.
Nocera, S; Zweifel, P
1998-01-01
Grossman derives the demand for health from an optimal control model in which health capital is both a consumption and an investment good. In his approach, the individual chooses his level of health and therefore his life span. Initially an individual is endowed with a certain amount of health capital, which depreciates over time but can be replenished by investments like medical care, diet, exercise, etc. Therefore, the level of health is not treated as exogenous but depends on the amount of resources the individual allocates to the production of health. The production of health capital also depends on variables which modify the efficiency of the production process, thereby changing the shadow price of health capital. For example, more highly educated people are expected to be more efficient producers of health who thus face a lower price of health capital, an effect that should increase their quantity of health demanded. While the Grossman model provides a suitable theoretical framework for explaining the demand for health and the demand for medical services, it has not been very successful empirically. However, empirical tests to date have been based exclusively on cross-sectional data, thus failing to take the dynamic nature of the Grossman model into account. By way of contrast, the present paper contains individual time-series information not only on the utilization of medical services but also on income, wealth, work, and life style. The data come from two surveys carried out in 1981 and 1993 among members of a Swiss sick fund, with the linkage between the two waves provided by insurance records. In all, this comparatively rich data set holds the promise of permitting the Grossman model to be adequately tested for the first time.
Mealier, Anne-Laure; Pointeau, Gregoire; Mirliaz, Solène; Ogawa, Kenji; Finlayson, Mark; Dominey, Peter F
2017-01-01
It has been proposed that, starting from meaning that the child derives directly from shared experience with others, adult narrative enriches this meaning and its structure, providing causal links between unseen intentional states and actions. This would require a means for representing meaning from experience (a situation model) and a mechanism that allows information to be extracted from sentences and mapped onto the situation model that has been derived from experience, thus enriching that representation. We present a hypothesis and theory concerning how the language processing infrastructure for grammatical constructions can naturally be extended to narrative constructions to provide a mechanism for using language to enrich meaning derived from physical experience. Toward this aim, the grammatical construction models are augmented with additional structures for representing relations between events across sentences. Simulation results demonstrate proof of concept for how the narrative construction model supports multiple successive levels of meaning creation, which allows the system to learn about the intentionality of mental states, and argument substitution, which allows extensions to metaphorical language and analogical problem solving. Cross-linguistic validity of the system is demonstrated in Japanese. The narrative construction model is then integrated into the cognitive system of a humanoid robot that provides the memory systems and world interaction required for representing meaning in a situation model. In this context, proof of concept is demonstrated for how the system enriches meaning in the situation model that has been directly derived from experience. In terms of links to empirical data, the model predicts strong usage-based effects: that is, the narrative constructions used by children will be highly correlated with those that they experience. It also relies on the notion of narrative or discourse function words. Both of these are validated in the experimental literature.
An empirical model of the auroral oval derived from CHAMP field-aligned current signatures - Part 2
NASA Astrophysics Data System (ADS)
Xiong, C.; Lühr, H.
2014-06-01
In this paper we introduce a new model for the location of the auroral oval. The auroral boundaries are derived from small- and medium-scale field-aligned current (FAC) signatures based on the high-resolution CHAMP (CHAllenging Minisatellite Payload) magnetic field observations during the years 2000-2010. The basic shape of the auroral oval is controlled by the dayside merging electric field, Em, and can be fitted well by ellipses at all levels of activity. All five ellipse parameters show a dependence on Em which can be described by quadratic functions. Optimal delay times for the merging electric field at the bow shock are 30 and 15 min for the equatorward and poleward boundaries, respectively. A comparison between our model and the British Antarctic Survey (BAS) auroral model derived from IMAGE (Imager for Magnetopause-to-Aurora Global Exploration) optical observations has been performed. There is good agreement between the two models regarding both boundaries, and the differences show a Gaussian distribution with a width of ±2° in latitude. The difference of the equatorward boundary shows a local-time dependence: the BAS model lies 1° in latitude poleward of ours in the morning sector and 1° equatorward in the afternoon sector. We think the difference between the two models is caused by the appearance of auroral forms in connection with upward FACs. All information required for applying our auroral oval model (CH-Aurora-2014) is provided.
Bayesian uncertainty quantification in linear models for diffusion MRI.
Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans
2018-03-29
Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
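The recasting described here has a compact closed form: with a Gaussian prior on the coefficients and Gaussian noise, the posterior is Gaussian, and any affine quantity w·β inherits a closed-form mean and variance. A minimal sketch with a generic basis and hyperparameters (not the DTI/MAP-MRI/CSD design matrices used in the paper):

```python
import numpy as np

def bayes_linear_posterior(X, y, sigma2, prior_cov):
    """Posterior for y = X b + e, e ~ N(0, sigma2 I), prior b ~ N(0, prior_cov).

    Returns the Gaussian posterior mean and covariance of the coefficients b.
    """
    precision = X.T @ X / sigma2 + np.linalg.inv(prior_cov)
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y) / sigma2
    return mean, cov

def affine_posterior(w, mean, cov):
    """Posterior of q = w . b: Gaussian with mean w@mean and variance w@cov@w."""
    return w @ mean, w @ cov @ w

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))                 # generic design/basis matrix
beta_true = np.array([1.0, -0.5, 0.0, 2.0])
y = X @ beta_true + rng.normal(0.0, 0.1, 30)

m, C = bayes_linear_posterior(X, y, sigma2=0.01, prior_cov=10.0 * np.eye(4))
q_mean, q_var = affine_posterior(np.array([1.0, 1.0, 0.0, 0.0]), m, C)
print(f"q = b0 + b1: {q_mean:.3f} +/- {np.sqrt(q_var):.3f}")
```

The per-quantity variance is exactly what the group-analysis application uses to downweight subjects whose derived features are poorly determined.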
Data-driven regions of interest for longitudinal change in frontotemporal lobar degeneration.
Pankov, Aleksandr; Binney, Richard J; Staffaroni, Adam M; Kornak, John; Attygalle, Suneth; Schuff, Norbert; Weiner, Michael W; Kramer, Joel H; Dickerson, Bradford C; Miller, Bruce L; Rosen, Howard J
2016-01-01
Current research is investigating the potential utility of longitudinal measurement of brain structure as a marker of drug effect in clinical trials for neurodegenerative disease. Recent studies in Alzheimer's disease (AD) have shown that measurement of change in empirically derived regions of interest (ROIs) allows more reliable measurement of change over time compared with regions chosen a-priori based on known effects of AD on brain anatomy. Frontotemporal lobar degeneration (FTLD) is a devastating neurodegenerative disorder for which there are no approved treatments. The goal of this study was to identify an empirical ROI that maximizes the effect size for the annual rate of brain atrophy in FTLD compared with healthy age matched controls, and to estimate the effect size and associated power estimates for a theoretical study that would use change within this ROI as an outcome measure. Eighty six patients with FTLD were studied, including 43 who were imaged twice at 1.5 T and 43 at 3 T, along with 105 controls (37 imaged at 1.5 T and 67 at 3 T). Empirically-derived maps of change were generated separately for each field strength and included the bilateral insula, dorsolateral, medial and orbital frontal, basal ganglia and lateral and inferior temporal regions. The extent of regions included in the 3 T map was larger than that in the 1.5 T map. At both field strengths, the effect sizes for imaging were larger than for any clinical measures. At 3 T, the effect size for longitudinal change measured within the empirically derived ROI was larger than the effect sizes derived from frontal lobe, temporal lobe or whole brain ROIs. The effect size derived from the data-driven 1.5 T map was smaller than at 3 T, and was not larger than the effect size derived from a-priori ROIs. It was estimated that measurement of longitudinal change using 1.5 T MR systems requires approximately a 3-fold increase in sample size to obtain effect sizes equivalent to those seen at 3 T. While the results should be confirmed in additional datasets, these results indicate that empirically derived ROIs can reduce the number of subjects needed for a longitudinal study of drug effects in FTLD compared with a-priori ROIs. Field strength may have a significant impact on the utility of imaging for measuring longitudinal change.
Role of local network oscillations in resting-state functional connectivity.
Cabral, Joana; Hugues, Etienne; Sporns, Olaf; Deco, Gustavo
2011-07-01
Spatio-temporally organized low-frequency fluctuations (<0.1 Hz), observed in the BOLD fMRI signal during rest, suggest the existence of underlying network dynamics that emerge spontaneously from intrinsic brain processes. Furthermore, significant correlations between distinct anatomical regions, or functional connectivity (FC), have led to the identification of several widely distributed resting-state networks (RSNs). These slow dynamics seem to be highly structured by anatomical connectivity, but the mechanism behind them and their relationship with neural activity, particularly in the gamma frequency range, remain largely unknown. Indeed, direct measurements of neuronal activity have revealed similar large-scale correlations, particularly in slow power fluctuations of local field potential oscillations in the gamma frequency range. To address these questions, we investigated neural dynamics in a large-scale model of the human brain's neural activity. A key ingredient of the model was a structural brain network defined by empirically derived long-range brain connectivity together with the corresponding conduction delays. A neural population, assumed to spontaneously oscillate in the gamma frequency range, was placed at each network node. When these oscillatory units are integrated in the network, they behave as weakly coupled oscillators. The time-delayed interaction between nodes is described by the Kuramoto model of phase oscillators, a biologically based model of coupled oscillatory systems. For a realistic setting of axonal conduction speed, we show that time-delayed network interaction leads to the emergence of slow neural activity fluctuations whose patterns correlate significantly with the empirically measured FC. The best agreement of the simulated FC with the empirically measured FC is found for a set of parameters where subsets of nodes tend to synchronize although the network is not globally synchronized. Inside such clusters, the simulated BOLD signal between nodes is found to be correlated, instantiating the empirically observed RSNs. Between clusters, patterns of positive and negative correlations are observed, as described in experimental studies. These results are found to be robust with respect to a biologically plausible range of model parameters. In conclusion, our model suggests how resting-state neural activity can originate from the interplay between the local neural dynamics and the large-scale structure of the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
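The network mechanism can be sketched with a small delayed Kuramoto simulation: each node is a gamma-band phase oscillator, and coupling acts on phases delayed by conduction times. Network size, coupling strength, and delays below are illustrative placeholders, not the empirically derived connectome used in the study:

```python
import numpy as np

rng = np.random.default_rng(4)

# Small illustrative network; the study uses empirically derived long-range
# connectivity and distance-based conduction delays between brain areas.
n, dt, steps = 8, 1e-4, 50_000                 # 8 nodes, 0.1 ms step, 5 s
omega = 2 * np.pi * rng.normal(40.0, 2.0, n)   # intrinsic ~gamma frequencies (Hz)
C = rng.uniform(0, 1, (n, n)); np.fill_diagonal(C, 0)
k = 5.0                                        # global coupling strength
d = np.round(rng.uniform(0.005, 0.02, (n, n)) / dt).astype(int)  # 5-20 ms delays
max_d = d.max()

theta = np.zeros((steps + max_d + 1, n))
theta[:max_d + 1] = rng.uniform(0, 2 * np.pi, (max_d + 1, n))    # random history
for t in range(max_d + 1, steps + max_d + 1):
    delayed = theta[t - 1 - d, np.arange(n)]               # theta_j(t - tau_ij)
    coupling = (C * np.sin(delayed - theta[t - 1][:, None])).sum(axis=1)
    theta[t] = theta[t - 1] + dt * (omega + k * coupling)  # Euler step

r = np.abs(np.exp(1j * theta[max_d + 1:]).mean(axis=1))    # Kuramoto order parameter
print(f"mean synchrony r = {r.mean():.3f}")
```

In the partially synchronized regime the order parameter fluctuates slowly rather than settling, which is the model's proposed source of the slow FC-shaping dynamics.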
NASA Astrophysics Data System (ADS)
Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano
2018-03-01
Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these formulas to other locations is not always direct: the locations in which the equations are used should have characteristics comparable to the locations from which they were derived. To overcome this barrier, we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location, without the need to adapt or use empirical formulas from other locations. The proposal only uses one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream). The recorded data from each location generate two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using data from timing parameters, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as the Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than that of the empirical formulas.
Evaluating scaling models in biology using hierarchical Bayesian approaches
Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S
2009-01-01
Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS
Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.
2017-01-01
Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
NASA Astrophysics Data System (ADS)
Herrmann, K.
2009-11-01
Information-theoretic approaches still play a minor role in financial market analysis. Nonetheless, two very similar approaches have evolved during the last years, one in so-called econophysics and the other in econometrics. Both generalize the notion of GARCH processes in an information-theoretic sense and are able to capture kurtosis better than traditional models. In this article we present both approaches in a more general framework, which allows the derivation of a wide range of new models. We choose a third model using an entropy measure suggested by Kapur. In an application to financial market data, we find that all considered models, with similar flexibility in terms of skewness and kurtosis, lead to very similar results.
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
The wavelength dependent model of extinction in fog and haze for free space optical communication.
Grabner, Martin; Kvicera, Vaclav
2011-02-14
The wavelength dependence of the extinction coefficient in fog and haze is investigated using Mie single-scattering theory. It is shown that the effective radius of the drop size distribution determines the slope of the log-log dependence of extinction on wavelength in the interval between 0.2 and 2 microns. The relation between atmospheric visibility and the effective radius is derived from the empirical relationship between liquid water content and extinction. Based on these results, a model of the relationship between visibility and the extinction coefficient, with different effective radii for fog and for haze conditions, is proposed.
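A common empirical form consistent with this picture is the Kruse-type power law, β(λ) = (3.91/V)·(λ/550 nm)^(-q), where the Koschmieder factor 3.91/V anchors the extinction at 550 nm and the exponent q carries the size-distribution dependence (q near 0 in dense fog, larger in haze). A minimal sketch:

```python
def extinction_coefficient(visibility_km, wavelength_nm, q):
    """Kruse-type power law: beta(lambda) = (3.91 / V) * (lambda / 550 nm)**(-q).

    3.91/V is the Koschmieder relation at 550 nm (2% contrast threshold); the
    exponent q encodes the wavelength dependence, which the paper links to the
    effective drop radius (q near 0 in dense fog, larger in haze).
    """
    return (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)

# Illustrative regimes at a typical free-space-optics wavelength of 1550 nm.
for vis_km, q, regime in ((0.2, 0.0, "dense fog"), (5.0, 1.3, "haze")):
    beta = extinction_coefficient(vis_km, 1550.0, q)     # 1/km
    print(f"{regime:9s}: beta = {beta:6.2f} 1/km  ({4.343 * beta:6.1f} dB/km)")
```

The q = 0 fog case is the practically important one for free space optical links: longer wavelengths buy no extinction advantage once the drop size distribution is dominated by large droplets.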