Sample records for empirical methods based

  1. A DEIM Induced CUR Factorization

    DTIC Science & Technology

    2015-09-18

    We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a … CUR approximations based on leverage scores. … This work presents a new CUR matrix factorization based upon the Discrete Empirical Interpolation Method.
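
    To make the construction concrete, here is a minimal NumPy sketch of a DEIM-induced CUR factorization in the spirit of this record: DEIM index selection applied to the left and right singular vectors picks rows and columns of A. The test matrix and rank are illustrative assumptions, not taken from the report.

    ```python
    import numpy as np

    def deim_indices(V):
        """Greedy DEIM index selection from an orthonormal basis V (n x k)."""
        idx = [int(np.argmax(np.abs(V[:, 0])))]
        for j in range(1, V.shape[1]):
            c = np.linalg.solve(V[idx, :j], V[idx, j])   # interpolate at chosen rows
            r = V[:, j] - V[:, :j] @ c                   # interpolation residual
            idx.append(int(np.argmax(np.abs(r))))        # pick its largest entry
        return np.array(idx)

    def deim_cur(A, k):
        """Rank-k CUR approximation A ~ C @ U @ R with DEIM-selected indices."""
        W, _, Vt = np.linalg.svd(A, full_matrices=False)
        rows = deim_indices(W[:, :k])       # DEIM on left singular vectors -> rows
        cols = deim_indices(Vt[:k, :].T)    # DEIM on right singular vectors -> cols
        C, R = A[:, cols], A[rows, :]
        U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
        return C, U, R

    rng = np.random.default_rng(0)
    A = rng.random((60, 10)) @ rng.random((10, 30))   # exactly rank-10 test matrix
    C, U, R = deim_cur(A, 10)
    print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # ~1e-14
    ```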

  2. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the fold change criteria of the Significance Analysis of Microarrays method are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold change threshold, since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates than Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next-generation sequencing (RNA-seq) data analysis.

  3. Artifact removal from EEG data with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.

    2017-03-01

    In this paper we propose a novel method for dealing with the physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering movement artifacts from experimental human EEG signals and show its high efficiency.
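
    A minimal sketch of the four-step loop the abstract describes (decompose, flag artifact modes, drop them, reconstruct), assuming the third-party PyEMD package (installed as EMD-signal) and a simple amplitude heuristic of our own for flagging artifact modes; the authors' actual mode-selection criterion is not reproduced here.

    ```python
    import numpy as np
    from PyEMD import EMD   # third-party package, installed as "EMD-signal"

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
    eeg[500:600] += 5.0 * np.hanning(100)      # simulated muscle/movement artifact

    imfs = EMD().emd(eeg, t)                   # step 1: decompose into empirical modes
    artifact = [imf.max() - imf.min() > 4.0    # step 2: flag large-amplitude modes
                for imf in imfs]               # (heuristic stand-in for the paper's rule)
    clean = sum(imf for imf, bad in zip(imfs, artifact) if not bad)  # steps 3-4
    print(f"removed {sum(artifact)} of {len(imfs)} modes")
    ```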

  4. Comparison of the Various Methodologies Used in Studying Runoff and Sediment Load in the Yellow River Basin

    NASA Astrophysics Data System (ADS)

    Xu, M., III; Liu, X.

    2017-12-01

    In the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g. precipitation, sediment-trapping dams, pasture, terraces, etc.) on the runoff and sediment load is among the key issues in guiding the implementation of water and soil conservation measures and in predicting future trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method, the soil and water conservation method, etc., are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method extensively used in hydrological research can be classified as an empirical method, as it is mathematically deduced to be equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. The conceptual models are usually lumped models (e.g. the SYMHD model) and can be regarded as a transition between empirical models and distributed models. The literature shows that fewer studies have applied distributed models than empirical models, as the runoff and sediment load simulations based on distributed models (e.g. the Digital Yellow Integrated Model, the Geomorphology-Based Hydrological Model, etc.) were usually not satisfactory owing to the intensive human activities in the Yellow River Basin. Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical research methods. In addition, we put forward an assessment framework for research methods on runoff and sediment load variations in the Yellow River Basin from the point of view of input data, model structure, and result output. The assessment framework was then applied to the Huangfuchuan River.

  5. Empirical Evidence or Intuition? An Activity Involving the Scientific Method

    ERIC Educational Resources Information Center

    Overway, Ken

    2007-01-01

    Students need a basic understanding of the scientific method during their introductory science classes, and for this purpose an activity was devised that involved a game based on the famous Monty Hall problem. This particular activity allowed students to banish or confirm their intuition on the basis of empirical evidence.
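
    The activity lends itself to a direct simulation; the short script below, an illustration rather than the article's own material, generates the empirical evidence in question: switching wins roughly two thirds of the time.

    ```python
    import random

    def play(switch):
        car, pick = random.randrange(3), random.randrange(3)
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = random.choice([d for d in range(3) if d not in (pick, car)])
        if switch:
            pick = next(d for d in range(3) if d not in (pick, opened))
        return pick == car

    trials = 100_000
    for switch in (False, True):
        wins = sum(play(switch) for _ in range(trials))
        print(f"switch={switch}: empirical win rate {wins / trials:.3f}")
    ```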

  6. Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods

    NASA Astrophysics Data System (ADS)

    Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.

    2017-04-01

    In this paper we propose a new method for removing noise and physiological artifacts in human EEG recordings based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). We introduce the algorithm of the proposed method, whose steps include empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We demonstrate the efficiency of the method on the example of filtering eye-movement artifacts from a human EEG signal.

  7. Filtration of human EEG recordings from physiological artifacts with empirical mode method

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Khramova, Marina V.

    2017-03-01

    In this paper we propose a new method for dealing with noise and physiological artifacts in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We consider noise and physiological artifacts on the EEG as specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). We introduce the algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering eye-movement artifacts from experimental human EEG signals and show its high efficiency.

  8. Application of empirical and mechanistic-empirical pavement design procedures to Mn/ROAD concrete pavement test sections

    DOT National Transportation Integrated Search

    1997-05-01

    Current pavement design procedures are based principally on empirical approaches. The current trend toward developing more mechanistic-empirical type pavement design methods led Minnesota to develop the Minnesota Road Research Project (Mn/ROAD), a lo...

  9. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, namely Gene Expression Programming (GEP), Artificial Neural Networks (ANN), and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10 temperature-based, 12 sunshine-based, and 26 based on other meteorological parameters) were used to estimate daily solar radiation in Kerman, Iran, for the period 1992-2009. To develop the GEP, ANN, and ANFIS models, depending on the empirical equation used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity, and precipitation were considered as inputs to the intelligent methods. To compare the accuracy of the empirical equations and intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE), and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios in the ANN and ANFIS models were more accurate than the empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE, and R2 indices for this model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58%, and 0.935, respectively.
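
    For reference, the four comparison indices named above are easy to compute; the sketch below implements them under the usual textbook definitions (the MARE-in-percent convention is our assumption), on toy values rather than the study's data.

    ```python
    import numpy as np

    def metrics(measured, estimated):
        m, e = np.asarray(measured, float), np.asarray(estimated, float)
        rmse = np.sqrt(np.mean((e - m) ** 2))
        mae = np.mean(np.abs(e - m))
        mare = 100 * np.mean(np.abs(e - m) / m)                 # percent
        r2 = 1 - np.sum((m - e) ** 2) / np.sum((m - m.mean()) ** 2)
        return rmse, mae, mare, r2

    measured = np.array([18.2, 21.5, 25.1, 23.4, 19.8])   # MJ m-2 day-1, toy values
    estimated = np.array([17.6, 22.3, 24.2, 24.0, 18.9])
    print("RMSE=%.3f MAE=%.3f MARE=%.2f%% R2=%.3f" % metrics(measured, estimated))
    ```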

  10. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets, and the changes associated with forward flight, within these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present-day noise predictions.

  11. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performance than existing methods. Finally, we illustrate our proposed methods with a relevant example.
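
    The paper's empirical-likelihood construction is too involved for a short sketch, but the jackknife ingredient it builds on is simple. The following illustration computes jackknife pseudo-values and a normal-theory interval for a mean cost; note that for the plain mean the pseudo-values reduce to the observations themselves, and the paper's censoring adjustments are omitted.

    ```python
    import numpy as np
    from scipy import stats

    def jackknife_ci(x, level=0.95):
        x = np.asarray(x, float)
        n = x.size
        loo = (x.sum() - x) / (n - 1)              # leave-one-out means
        pseudo = n * x.mean() - (n - 1) * loo      # pseudo-values (= x for the mean)
        se = pseudo.std(ddof=1) / np.sqrt(n)
        t = stats.t.ppf(0.5 + level / 2, df=n - 1)
        est = pseudo.mean()
        return est, (est - t * se, est + t * se)

    costs = np.array([1200, 540, 3300, 870, 2100, 760, 1900, 450], float)
    print(jackknife_ci(costs))   # mean cost and its jackknife confidence interval
    ```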

  12. System and methods for determining masking signals for applying empirical mode decomposition (EMD) and for demodulating intrinsic mode functions obtained from application of EMD

    DOEpatents

    Senroy, Nilanjan [New Delhi, IN]; Suryanarayanan, Siddharth [Littleton, CO]

    2011-03-15

    A computer-implemented method of signal processing is provided. The method includes generating one or more masking signals based upon a computed Fourier transform of a received signal. The method further includes determining one or more intrinsic mode functions (IMFs) of the received signal by performing a masking-signal-based empirical mode decomposition (EMD) using the one or more masking signals.
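
    A hedged sketch of the masking-signal idea (in the spirit of Deering and Kaiser's masking EMD, which this description resembles): build a masking sinusoid from the dominant FFT frequency, run EMD on the signal plus and minus the mask, and average the first IMFs. The mask amplitude and frequency offset are illustrative assumptions, and the third-party PyEMD package stands in for the patented implementation.

    ```python
    import numpy as np
    from PyEMD import EMD   # third-party package, installed as "EMD-signal"

    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 45 * t)  # close tones

    spec = np.abs(np.fft.rfft(x))
    f_dom = np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(spec)]   # dominant frequency
    mask = 1.5 * np.sin(2 * np.pi * (f_dom + 20) * t)          # masking signal (assumed)

    imf_plus = EMD().emd(x + mask, t)[0]       # first IMF with mask added ...
    imf_minus = EMD().emd(x - mask, t)[0]      # ... and with mask subtracted
    imf = 0.5 * (imf_plus + imf_minus)         # mask cancels; separated mode remains
    print(imf.shape)
    ```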

  13. Information-Processing Theory and Perspectives on Development: A Look at Concepts and Methods--The View of a Developmental Ethologist.

    ERIC Educational Resources Information Center

    Jesness, Bradley

    This paper examines concepts in information-processing theory which are likely to be relevant to development and characterizes the methods and data upon which the concepts are based. Among the concepts examined are those which have slight empirical grounds. Other concepts examined are those which seem to have empirical bases but which are…

  14. An empirical inferential method of estimating nitrogen deposition to Mediterranean-type ecosystems: the San Bernardino Mountains case study

    Treesearch

    A. Bytnerowicz; R.F. Johnson; L. Zhang; G.D. Jenerette; M.E. Fenn; S.L. Schilling; I. Gonzalez-Fernandez

    2015-01-01

    The empirical inferential method (EIM) allows for spatially and temporally-dense estimates of atmospheric nitrogen (N) deposition to Mediterranean ecosystems. This method, set within a GIS platform, is based on ambient concentrations of NH3, NO, NO2 and HNO3; surface conductance of NH4...

  15. Learning linear transformations between counting-based and prediction-based word embeddings

    PubMed Central

    Hayashi, Kohei; Kawarabayashi, Ken-ichi

    2017-01-01

    Despite the growing interest in prediction-based word embedding learning methods, it remains unclear as to how the vector spaces learnt by the prediction-based methods differ from that of the counting-based methods, or whether one can be transformed into the other. To study the relationship between counting-based and prediction-based embeddings, we propose a method for learning a linear transformation between two given sets of word embeddings. Our proposal contributes to the word embedding learning research in three ways: (a) we propose an efficient method to learn a linear transformation between two sets of word embeddings, (b) using the transformation learnt in (a), we empirically show that it is possible to predict distributed word embeddings for novel unseen words, and (c) empirically it is possible to linearly transform counting-based embeddings to prediction-based embeddings, for frequent words, different POS categories, and varying degrees of ambiguities. PMID:28926629

  16. Empirical Data Collection and Analysis Using Camtasia and Transana

    ERIC Educational Resources Information Center

    Thorsteinsson, Gisli; Page, Tom

    2009-01-01

    One of the possible techniques for collecting empirical data is video recordings of a computer screen with specific screen capture software. This method for collecting empirical data shows how students use the BSCWII (Be Smart Cooperate Worldwide--a web based collaboration/groupware environment) to coordinate their work and collaborate in…

  17. Evidence-based ethics? On evidence-based practice and the "empirical turn" from normative bioethics

    PubMed Central

    Goldenberg, Maya J

    2005-01-01

    Background The increase in empirical methods of research in bioethics over the last two decades is typically perceived as a welcome broadening of the discipline, with increased integration of social and life scientists into the field and of ethics consultants into the clinical setting; however, it also represents a loss of confidence in the typical normative and analytic methods of bioethics. Discussion The recent incipiency of "Evidence-Based Ethics" attests to this phenomenon, and it should be rejected as a solution to the current ambivalence toward the normative resolution of moral problems in a pluralistic society. While "evidence-based" is typically read in medicine and other life and social sciences as the empirically adequate standard of reasonable practice and a means for increasing certainty, I propose that the evidence-based movement in fact gains consensus by displacing normative discourse with aggregate or statistically derived empirical evidence as the "bottom line". Therefore, along with wavering on the fact/value distinction, evidence-based ethics threatens bioethics' normative mandate. The appeal of the evidence-based approach is that it offers a means of negotiating the demands of moral pluralism. Rather than appealing to explicit values that are likely not shared by all, "the evidence" is proposed to adjudicate between competing claims. Quantified measures are notably more "neutral" and democratic than liberal markers like "species-normal functioning". Yet the positivist notion that claims stand or fall in light of the evidence is untenable; furthermore, the legacy of positivism entails the quieting of empirically non-verifiable (or at least non-falsifiable) considerations like moral claims and judgments. As a result, evidence-based ethics proposes to operate unchecked with the implicit normativity that accompanies the production and presentation of all biomedical and scientific facts. Summary The "empirical turn" in bioethics signals a need to reconsider the methods used for moral evaluation and resolution; however, the options should not include obscuring normative content with seemingly neutral technical measures. PMID:16277663

  18. Performance-based quality assurance/quality control (QA/QC) acceptance procedures for in-place soil testing phase 3.

    DOT National Transportation Integrated Search

    2015-01-01

    One of the objectives of this study was to evaluate soil testing equipment based on its capability of measuring in-place stiffness or modulus values. : As design criteria transition from empirical to mechanistic-empirical, soil test methods and equip...

  19. An Empirical Review of Research Methodologies and Methods in Creativity Studies (2003-2012)

    ERIC Educational Resources Information Center

    Long, Haiying

    2014-01-01

    Based on the data collected from 5 prestigious creativity journals, research methodologies and methods of 612 empirical studies on creativity, published between 2003 and 2012, were reviewed and compared to those in gifted education. Major findings included: (a) Creativity research was predominantly quantitative and psychometrics and experiment…

  20. Using Loss Functions for DIF Detection: An Empirical Bayes Approach.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Thayer, Dorothy; Lewis, Charles

    2000-01-01

    Studied a method for flagging differential item functioning (DIF) based on loss functions. Builds on earlier research that led to the development of an empirical Bayes enhancement to the Mantel-Haenszel DIF analysis. Tested the method through simulation and found its performance better than some commonly used DIF classification systems. (SLD)

  1. Retrieving hydrological connectivity from empirical causality in karst systems

    NASA Astrophysics Data System (ADS)

    Delforge, Damien; Vanclooster, Marnik; Van Camp, Michel; Poulain, Amaël; Watlet, Arnaud; Hallet, Vincent; Kaufmann, Olivier; Francis, Olivier

    2017-04-01

    Because of their complexity, karst systems exhibit nonlinear dynamics. Moreover, if one attempts to model a karst, its hidden behavior complicates the choice of the most suitable model. Therefore, both intensive investigation methods and nonlinear data analysis are needed to reveal the underlying hydrological connectivity as a prior for a consistent physically based modelling approach. Convergent Cross Mapping (CCM), a recent method, promises to identify causal relationships between time series belonging to the same dynamical system. The method is based on phase space reconstruction and is suitable for nonlinear dynamics. As an empirical causation detection method, it could be used to highlight the hidden complexity of a karst system by revealing its inner hydrological and dynamical connectivity. Hence, if one can link causal relationships to physical processes, the method should show great potential to support physically based model structure selection. We present the results of numerical experiments using karst model blocks combined in different structures to generate time series from actual rainfall series. CCM is applied between the time series to investigate whether the empirical causation detection is consistent with the hydrological connectivity suggested by the karst model.
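
    For readers unfamiliar with CCM, the compact sketch below shows the core mechanics under illustrative parameter choices: delay-embed one series, find nearest neighbors on that shadow manifold, and use them to cross-map the other series; rising skill with library size is the causality signature. This is a toy illustration, not the authors' experimental setup.

    ```python
    import numpy as np

    def ccm_skill(x, y, E=3, tau=1):
        """Cross-map x from the delay embedding of y; returns prediction skill."""
        n = len(y) - (E - 1) * tau
        M = np.column_stack([y[i * tau:i * tau + n] for i in range(E)])  # shadow manifold
        target = x[(E - 1) * tau:]
        preds = np.empty(n)
        for i in range(n):
            d = np.linalg.norm(M - M[i], axis=1)
            d[i] = np.inf                                 # exclude the point itself
            nb = np.argsort(d)[:E + 1]                    # E+1 nearest neighbors
            w = np.exp(-d[nb] / max(d[nb][0], 1e-12))     # simplex-style weights
            preds[i] = np.dot(w / w.sum(), target[nb])
        return np.corrcoef(preds, target)[0, 1]

    rng = np.random.default_rng(1)
    x = np.sin(np.linspace(0, 60, 800)) + 0.1 * rng.standard_normal(800)
    y = np.roll(x, 2) ** 2 + 0.1 * rng.standard_normal(800)   # y is driven by x
    print(ccm_skill(x, y))   # high skill: y's manifold encodes the driver x
    ```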

  2. A Rapid Empirical Method for Estimating the Gross Takeoff Weight of a High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    1999-01-01

    During the cruise segment of the flight mission, aircraft flying at supersonic speeds generate sonic booms that are usually maximum at the beginning of cruise. The pressure signature with the shocks causing these perceived booms can be predicted if the aircraft's geometry, Mach number, altitude, angle of attack, and cruise weight are known. Most methods for estimating aircraft weight, especially beginning-cruise weight, are empirical and based on least-squares-fit equations that best represent a body of component weight data. The empirical method discussed in this report used simplified weight equations based on a study of performance and weight data from conceptual and real transport aircraft. Like other weight-estimation methods, weights were determined at several points in the mission. While these additional weights were found to be useful, it is the determination of beginning-cruise weight that is most important for the prediction of the aircraft's sonic-boom characteristics.

  3. Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1969-01-01

    A study of empirical methods of predicting the relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.

  4. Fight the power: the limits of empiricism and the costs of positivistic rigor.

    PubMed

    Indick, William

    2002-01-01

    A summary of the influence of positivistic philosophy and empiricism on the field of psychology is followed by a critique of the empirical method. The dialectic process is advocated as an alternative method of inquiry. The main advantage of the dialectic method is that it is open to any logical argument, including empirical hypotheses, but unlike empiricism, it does not automatically reject arguments that are not based on observable data. Evolutionary and moral psychology are discussed as examples of important fields of study that could benefit from types of arguments that frequently do not conform to the empirical standards of systematic observation and falsifiability of hypotheses. A dialectic method is shown to be a suitable perspective for those fields of research, because it allows for logical arguments that are not empirical and because it fosters a functionalist perspective, which is indispensable for both evolutionary and moral theories. It is suggested that all psychologists may gain from adopting a dialectic approach, rather than restricting themselves to empirical arguments alone.

  5. Untangling the Evidence: Introducing an Empirical Model for Evidence-Based Library and Information Practice

    ERIC Educational Resources Information Center

    Gillespie, Ann

    2014-01-01

    Introduction: This research is the first to investigate the experiences of teacher-librarians as evidence-based practice. An empirically derived model is presented in this paper. Method: This qualitative study utilised the expanded critical incident approach, and investigated the real-life experiences of fifteen Australian teacher-librarians,…

  6. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    USGS Publications Warehouse

    Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.

    2015-01-01

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature, and methods based just on K. It is possible to combine two independent methods by computing a weighted mean, but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^(-0.916), prediction error = 0.32) when possible, and a growth-based method (M = 4.118 K^(0.73) L∞^(-0.33), prediction error = 0.6, length in cm) otherwise.
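
    The two recommended estimators transcribe directly into code; the snippet below implements exactly the formulas quoted in the abstract (tmax in years, K per year, L∞ in cm), with example inputs that are our own.

    ```python
    def m_tmax(tmax_years):
        """Maximum-age estimator from the abstract: M = 4.899 * tmax^-0.916."""
        return 4.899 * tmax_years ** -0.916

    def m_growth(K, L_inf_cm):
        """Growth-based fallback: M = 4.118 * K^0.73 * Linf^-0.33 (length in cm)."""
        return 4.118 * K ** 0.73 * L_inf_cm ** -0.33

    print(m_tmax(20))          # e.g. a stock with tmax = 20 yr
    print(m_growth(0.2, 80))   # e.g. K = 0.2 per yr, Linf = 80 cm
    ```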

  7. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.

  8. Chronic Fatigue Syndrome and Myalgic Encephalomyelitis: Toward An Empirical Case Definition

    PubMed Central

    Jason, Leonard A.; Kot, Bobby; Sunnquist, Madison; Brown, Abigail; Evans, Meredyth; Jantke, Rachel; Williams, Yolonda; Furst, Jacob; Vernon, Suzanne D.

    2015-01-01

    Current case definitions of Myalgic Encephalomyelitis (ME) and chronic fatigue syndrome (CFS) have been based on consensus methods, but empirical methods could be used to identify core symptoms and thereby improve reliability. In the present study, several methods (i.e., continuous symptom scores, and theoretically and empirically derived symptom cutoff scores) were used to identify the core symptoms best differentiating patients from controls. In addition, data mining with decision trees was conducted. Our study found a small number of core symptoms with good sensitivity and specificity: fatigue, post-exertional malaise, a neurocognitive symptom, and unrefreshing sleep. Outcomes from these analyses suggest that using empirically selected symptoms can help guide the creation of a more reliable case definition. PMID:26029488

  9. An Empirical Study on Washback Effects of the Internet-Based College English Test Band 4 in China

    ERIC Educational Resources Information Center

    Wang, Chao; Yan, Jiaolan; Liu, Bao

    2014-01-01

    Based on Bailey's washback model, in respect of participants, process and products, the present empirical study was conducted to find the actual washback effects of the internet-based College English Test Band 4 (IB CET-4). The methods adopted are questionnaires, class observation, interview and the analysis of both the CET-4 teaching and testing…

  10. Empirical Bayes methods for smoothing data and for simultaneous estimation of many parameters.

    PubMed Central

    Yanagimoto, T; Kashiwagi, N

    1990-01-01

    A recent successful development is found in a series of innovative, new statistical methods for smoothing data that are based on the empirical Bayes method. This paper emphasizes their practical usefulness in medical sciences and their theoretically close relationship with the problem of simultaneous estimation of parameters, depending on strata. The paper also presents two examples of analyzing epidemiological data obtained in Japan using the smoothing methods to illustrate their favorable performance. PMID:2148512
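
    As a flavor of what such smoothing does, here is a textbook-style empirical Bayes sketch: stratum-level rates are shrunk toward the pooled rate, with the prior strength estimated from the data by method of moments. This is an illustrative stand-in, not the specific smoothers surveyed in the paper.

    ```python
    import numpy as np

    cases = np.array([2, 40, 7, 1, 90, 4])               # events per stratum (toy)
    population = np.array([100, 900, 300, 80, 1200, 150])

    p = cases / population
    m = p.mean()                                         # pooled prior mean
    # Method-of-moments estimate of the between-stratum (prior) variance:
    var_between = max(p.var(ddof=1) - np.mean(p * (1 - p) / population), 1e-9)
    strength = m * (1 - m) / var_between - 1             # implied prior sample size
    alpha, beta = m * strength, (1 - m) * strength

    smoothed = (cases + alpha) / (population + alpha + beta)   # shrunken rates
    for raw, eb in zip(p, smoothed):
        print(f"raw {raw:.4f} -> smoothed {eb:.4f}")
    ```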

  11. A Compound Fault Diagnosis for Rolling Bearings Method Based on Blind Source Separation and Ensemble Empirical Mode Decomposition

    PubMed Central

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confounding noise, which makes it difficult to separate weak fault signals by conventional means, such as FFT-based envelope detection, the wavelet transform, or empirical mode decomposition individually. In order to improve the diagnosis of compound faults in rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix for ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features more easily extracted and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for the outer race defect, but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644
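
    A condensed sketch of the described pipeline, assuming the third-party PyEMD and scikit-learn packages: EEMD expands the single measured signal into an IMF matrix, a cross-correlation ranking picks the informative IMFs, and FastICA separates the sources. The synthetic signal and the top-3 selection rule are our assumptions.

    ```python
    import numpy as np
    from PyEMD import EEMD                      # third-party, installed as "EMD-signal"
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 2048)
    mixture = (np.sin(2 * np.pi * 35 * t) * (np.sin(2 * np.pi * 3 * t) > 0)  # fault 1
               + 0.7 * np.sin(2 * np.pi * 90 * t)                            # fault 2
               + 0.3 * rng.standard_normal(t.size))                          # noise

    imfs = EEMD(trials=50).eemd(mixture, t)          # 1) one signal -> IMF channels
    corr = np.array([abs(np.corrcoef(imf, mixture)[0, 1]) for imf in imfs])
    selected = imfs[np.argsort(corr)[-3:]]           # 2) keep 3 best-correlated IMFs
    sources = FastICA(n_components=2, random_state=0).fit_transform(selected.T)
    print(sources.shape)                             # 3) separated source estimates
    ```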

  12. Palm vein recognition based on directional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from fine to large scale. A DEMD-based two-directional linear discriminant analysis (2LDA) for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations are extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.

  13. Recognizing of stereotypic patterns in epileptic EEG using empirical modes and wavelets

    NASA Astrophysics Data System (ADS)

    Grubov, V. V.; Sitnikova, E.; Pavlov, A. N.; Koronovskii, A. A.; Hramov, A. E.

    2017-11-01

    Epileptic activity in the form of spike-wave discharges (SWD) appears in the electroencephalogram (EEG) during absence seizures. This paper evaluates two approaches for detecting stereotypic rhythmic activities in EEG: the continuous wavelet transform (CWT) and the empirical mode decomposition (EMD). The CWT is a well-known method of time-frequency analysis of EEG, whereas EMD is a relatively novel approach for extracting a signal's waveforms. A new method for pattern recognition based on a combination of CWT and EMD is proposed. It was found that this combined approach resulted in a sensitivity of 86.5% and a specificity of 92.9% for sleep spindles, and 97.6% and 93.2% for SWD, respectively. Considering the strong within- and between-subject variability of sleep spindles, the obtained detection efficiency was high in comparison with other CWT-based methods. It is concluded that the combination of a wavelet-based approach and empirical modes increases the quality of automatic detection of stereotypic patterns in rat EEG.

  14. Controlling bias and inflation in epigenome- and transcriptome-wide association studies using the empirical null distribution.

    PubMed

    van Iterson, Maarten; van Zwet, Erik W; Heijmans, Bastiaan T

    2017-01-27

    We show that epigenome- and transcriptome-wide association studies (EWAS and TWAS) are prone to significant inflation and bias of test statistics, an unrecognized phenomenon introducing spurious findings if left unaddressed. Neither GWAS-based methodology nor state-of-the-art confounder adjustment methods completely remove bias and inflation. We propose a Bayesian method to control bias and inflation in EWAS and TWAS based on estimation of the empirical null distribution. Using simulations and real data, we demonstrate that our method maximizes power while properly controlling the false positive rate. We illustrate the utility of our method in large-scale EWAS and TWAS meta-analyses of age and smoking.
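
    The following sketch conveys the empirical-null idea the method builds on (in the style of Efron's empirical null): estimate the null mean (bias) and standard deviation (inflation) from the central bulk of the test statistics via the median and interquartile range, then rescale. The authors' full Bayesian estimator is more sophisticated; this is only an illustration on synthetic statistics.

    ```python
    import numpy as np
    from scipy import stats

    # Toy statistics: an inflated, shifted null plus a handful of true signals.
    z = np.concatenate([stats.norm.rvs(0.3, 1.4, size=9500, random_state=1),
                        stats.norm.rvs(4.0, 1.0, size=500, random_state=2)])

    q25, q50, q75 = np.percentile(z, [25, 50, 75])
    mu0 = q50                                        # empirical null mean (bias)
    sd0 = (q75 - q25) / (2 * stats.norm.ppf(0.75))   # null sd from the IQR (inflation)

    z_adj = (z - mu0) / sd0                          # rescaled statistics
    p = 2 * stats.norm.sf(np.abs(z_adj))
    print(f"bias={mu0:.2f} inflation={sd0:.2f} discoveries={np.sum(p < 1e-4)}")
    ```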

  15. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.

  16. Skills-Based Learning for Reproducible Expertise: Looking Elsewhere for Guidance

    ERIC Educational Resources Information Center

    Roessger, Kevin M.

    2016-01-01

    Despite the prevalence of adult skills-based learning, adult education researchers continue to ignore effective interdisciplinary skills-based methods. Prominent researchers dismiss empirically supported teaching guidelines, preferring situational, emancipatory methods with no demonstrable effect on skilled performance or reproducible expertise.…

  17. Evidence-based ethics – What it should be and what it shouldn't

    PubMed Central

    Strech, Daniel

    2008-01-01

    Background The concept of evidence-based medicine has strongly influenced the appraisal and application of empirical information in health care decision-making. One principal characteristic of this concept is the distinction between "evidence" in the sense of high-quality empirical information on the one hand and rather low-quality empirical information on the other hand. In the last 5 to 10 years an increasing number of articles published in international journals have made use of the term "evidence-based ethics", making a systematic analysis and explication of the term and its applicability in ethics important. Discussion In this article four descriptive and two normative characteristics of the general concept "evidence-based" are presented and explained systematically. These characteristics are to then serve as a framework for assessing the methodological and practical challenges of evidence-based ethics as a developing methodology. The superiority of evidence in contrast to other empirical information has several normative implications such as the legitimization of decisions in medicine and ethics. This implicit normativity poses ethical concerns if there is no formal consent on which sort of empirical information deserves the label "evidence" and which does not. In empirical ethics, which relies primarily on interview research and other methods from the social sciences, we still lack gold standards for assessing the quality of study designs and appraising their findings. Conclusion The use of the term "evidence-based ethics" should be discouraged, unless there is enough consensus on how to differentiate between high- and low-quality information produced by empirical ethics. In the meantime, whenever empirical information plays a role, the process of ethical decision-making should make use of systematic reviews of empirical studies that involve a critical appraisal and comparative discussion of data. PMID:18937838

  18. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD): its decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the Sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, a pixel value based on the local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.
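
    The Sum-modified-Laplacian fusion rule mentioned above is straightforward to sketch: compute a per-pixel modified Laplacian, sum it over a local window, and keep the coefficient from whichever source is locally sharper. The window size and the wrap-around edge handling are simplifying assumptions of ours.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sml(img, window=5):
        """Sum-modified-Laplacian focus measure over a local window."""
        ml = (np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0))
              + np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1)))
        return uniform_filter(ml, size=window)   # local sum, up to a constant factor

    def fuse(bimf_a, bimf_b):
        """Keep each coefficient from whichever source image is locally sharper."""
        return np.where(sml(bimf_a) >= sml(bimf_b), bimf_a, bimf_b)

    a, b = np.random.rand(64, 64), np.random.rand(64, 64)  # stand-in BIMF components
    print(fuse(a, b).shape)
    ```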

  19. Empirical research in medical ethics: how conceptual accounts on normative-empirical collaboration may improve research practice.

    PubMed

    Salloch, Sabine; Schildmann, Jan; Vollmann, Jochen

    2012-04-13

    The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis.

  20. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

    PubMed Central

    2012-01-01

    Background The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. Discussion A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. Summary High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis. PMID:22500496

  1. An empirical study using permutation-based resampling in meta-regression

    PubMed Central

    2012-01-01

    Background In meta-regression, as the number of trials in the analyses decreases, the risk of false positives or false negatives increases. This is partly due to the assumption of normality that may not hold in small samples. Creating a distribution from the observed trials using permutation methods to calculate P values may allow for fewer spurious findings. Permutation has not been empirically tested in meta-regression. The objective of this study was to perform an empirical investigation to explore the differences in results for meta-analyses on a small number of trials using standard large sample approaches versus permutation-based methods for meta-regression. Methods We isolated a sample of randomized controlled clinical trials (RCTs) for interventions that have a small number of trials (herbal medicine trials). Trials were then grouped by herbal species and condition and assessed for methodological quality using the Jadad scale, and data were extracted for each outcome. Finally, we performed meta-analyses on the primary outcome of each group of trials and meta-regression for methodological quality subgroups within each meta-analysis. We used large sample methods and permutation methods in our meta-regression modeling. We then compared final models and final P values between methods. Results We collected 110 trials across 5 intervention/outcome pairings and 5 to 10 trials per covariate. When applying large sample methods and permutation-based methods in our backwards stepwise regression, the covariates in the final models were identical in all cases. The P values for the covariates in the final model were larger in 78% (7/9) of the cases for permutation and identical for 22% (2/9) of the cases. Conclusions We present empirical evidence that permutation-based resampling may not change final models when using backwards stepwise regression, but may increase P values in meta-regression of multiple covariates for a relatively small number of trials. PMID:22587815
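
    In outline, the permutation approach works as sketched below: refit the regression with the covariate shuffled many times and take the p-value as the fraction of permuted slopes at least as extreme as the observed one. The toy data and the simplified inverse-variance-weighted fixed-effect model are our assumptions, not the study's models.

    ```python
    import numpy as np

    def perm_pvalue(effect, var, covariate, n_perm=5000, seed=0):
        rng = np.random.default_rng(seed)
        w = 1.0 / np.asarray(var, float)

        def wslope(x):                      # inverse-variance-weighted slope
            xm = np.average(x, weights=w)
            ym = np.average(effect, weights=w)
            return np.sum(w * (x - xm) * (effect - ym)) / np.sum(w * (x - xm) ** 2)

        obs = wslope(np.asarray(covariate, float))
        hits = sum(abs(wslope(rng.permutation(covariate))) >= abs(obs)
                   for _ in range(n_perm))
        return obs, (hits + 1) / (n_perm + 1)   # permutation p-value

    effect = np.array([0.3, 0.1, 0.5, 0.4, 0.0, 0.2])    # toy trial effect sizes
    var = np.array([0.02, 0.05, 0.04, 0.03, 0.06, 0.02]) # their variances
    jadad = np.array([5, 2, 4, 3, 1, 4])                 # quality covariate
    print(perm_pvalue(effect, var, jadad))
    ```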

  2. Storytelling as an Instructional Method: Definitions and Research Questions

    ERIC Educational Resources Information Center

    Andrews, Dee H.; Hull, Thomas D.; Donahue, Jennifer A.

    2009-01-01

    This paper discusses the theoretical and empirical foundations of the use of storytelling in instruction. The definition of "story" is given and four instructional methods are identified related to storytelling: case-based, narrative-based, scenario-based, and problem-based instruction. The article provides descriptions of the four…

  3. High Velocity Jet Noise Source Location and Reduction. Task 3 - Experimental Investigation of Suppression Principles. Volume I. Suppressor Concepts Optimization

    DTIC Science & Technology

    1978-12-01

    …multinational corporation in the 1960's placed extreme emphasis on the need for effective and efficient noise suppression devices. Phase I of work…through model and engine testing applicable to an afterburning turbojet engine. Suppressor designs were based primarily on empirical methods. Phase II…using "ray" acoustics. This method is in contrast to the purely empirical method, which consists of the curve-fitting of normalized data. In order to…

  4. Subgrade evaluation based on theoretical concepts.

    DOT National Transportation Integrated Search

    1971-01-01

    Evaluations of pavement soil subgrades for the purpose of design are mostly based on empirical methods such as the CBR, California soil resistance method, etc. The need for the application of theory and the evaluation of subgrade strength in terms of...

  5. Petrophysical approach for S-wave velocity prediction based on brittleness index and total organic carbon of shale gas reservoir: A case study from Horn River Basin, Canada

    NASA Astrophysics Data System (ADS)

    Kim, Taeyoun; Hwang, Seho; Jang, Seonghyung

    2017-01-01

    When searching for the "sweet spot" of a shale gas reservoir, it is essential to estimate the brittleness index (BI) and total organic carbon (TOC) of the formation. In particular, the BI is one of the key factors in determining the crack propagation and crushing efficiency for hydraulic fracturing. There are several methods for estimating the BI of a formation, but most of them are empirical equations that are specific to particular rock types. We estimated the mineralogical BI based on the elemental capture spectroscopy (ECS) log and the elastic BI based on well log data, and we propose a new method for predicting S-wave velocity (VS) using the mineralogical BI and elastic BI. The TOC is related to the gas content of shale gas reservoirs. Since it is difficult to perform core analysis for all intervals of shale gas reservoirs, we derive empirical equations for the Horn River Basin, Canada, as well as a TOC log, using a linear relation between core-tested TOC and well log data. In addition, two empirical equations are suggested for VS prediction based on the density and gamma ray logs used for TOC analysis. By applying the proposed BI- and TOC-based empirical equations to log data from another well and then comparing the predicted VS log with the real VS log, the validity of the empirical equations suggested in this paper has been tested.

  6. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    NASA Astrophysics Data System (ADS)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal illness (e.g. heart disease, lung cancer, asthma, etc.), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation, and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA), and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The nature of NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.

  7. An improved method for predicting the effects of flight on jet mixing noise

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1979-01-01

    The NASA method (1976) for predicting the effects of flight on jet mixing noise was improved. The earlier method agreed reasonably well with experimental flight data for jet velocities up to about 520 m/sec (approximately 1700 ft/sec). The poorer agreement at high jet velocities appeared to be due primarily to the manner in which supersonic convection effects were formulated. The purely empirical supersonic convection formulation of the earlier method was replaced by one based on theoretical considerations. Other improvements of an empirical nature included were based on model-jet/free-jet simulated flight tests. The revised prediction method is presented and compared with experimental data obtained from the Bertin Aerotrain with a J85 engine, the DC-10 airplane with JT9D engines, and the DC-9 airplane with refanned JT8D engines. It is shown that the new method agrees better with the data base than a recently proposed SAE method.

  8. Harmonic analysis of electrified railway based on improved HHT

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-04-01

    In this paper, the causes and harmful effects of harmonics in electric locomotive electrical systems are first studied and analyzed. Based on the characteristics of the harmonics in the electrical system, the Hilbert-Huang transform (HHT) method is introduced. Building on an in-depth analysis of the empirical mode decomposition method and the Hilbert transform method, the causes of, and solutions to, the endpoint effect and the modal aliasing problem in the HHT method are explored. For the endpoint effect of the HHT, this paper uses a point-symmetric extension method to extend the collected data; to address the modal aliasing problem, this paper preprocesses the signal with a high-frequency auxiliary harmonic and gives an empirical formula for this harmonic. Finally, combining the suppression of the HHT endpoint effect and the modal aliasing problem, an improved HHT method is proposed and simulated in MATLAB. The simulation results show that the improved HHT is effective for the electric locomotive power supply system.
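
    The point-symmetric extension for the endpoint effect is easy to illustrate: reflect the signal 180 degrees about each endpoint, sift on the extended signal, and discard the extensions afterwards. The extension length below is an arbitrary choice of ours.

    ```python
    import numpy as np

    def point_symmetric_extend(x, n_ext):
        """Odd (180-degree) reflection of the signal about each endpoint."""
        left = 2 * x[0] - x[n_ext:0:-1]              # rotate head about (0, x[0])
        right = 2 * x[-1] - x[-2:-n_ext - 2:-1]      # rotate tail about the last sample
        return np.concatenate([left, x, right])

    x = np.sin(np.linspace(0, 3 * np.pi, 300))
    x_ext = point_symmetric_extend(x, 50)            # sift on this extended signal
    core = x_ext[50:-50]                             # then drop the extensions
    print(np.allclose(core, x))                      # True: original samples untouched
    ```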

  9. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimating bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture with the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolation methods are able to represent architectural variations in femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
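
    For orientation, the sketch below fits the two interpolator families named in the abstract to scattered extrema of a toy surface, using SciPy's multiquadric RBF and a smoothing bivariate spline as stand-ins (SciPy has no hierarchical B-spline; the spline here only approximates that idea).

    ```python
    import numpy as np
    from scipy.interpolate import Rbf, SmoothBivariateSpline

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 10, 60), rng.uniform(0, 10, 60)   # scattered extrema
    z = np.sin(x) * np.cos(y)

    rbf = Rbf(x, y, z, function="multiquadric")      # multiquadric RBF surface
    spl = SmoothBivariateSpline(x, y, z)             # smoothing-spline surface

    gx, gy = np.meshgrid(np.linspace(1, 9, 50), np.linspace(1, 9, 50))
    env_rbf = rbf(gx, gy)
    env_spl = spl.ev(gx.ravel(), gy.ravel()).reshape(gx.shape)
    print(np.abs(env_rbf - env_spl).mean())          # how much the envelopes differ
    ```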

  10. Ensemble empirical mode decomposition based fluorescence spectral noise reduction for low concentration PAHs

    NASA Astrophysics Data System (ADS)

    Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian

    2017-11-01

    A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbons (PAHs), an important class of current environmental pollutants, are highly carcinogenic. PAH pollutants can be detected by fluorescence spectroscopy; however, the instrument introduces noise during the experiment, and weak fluorescence signals can be masked by it, so we propose a way to denoise the spectra and improve detection. First, a fluorescence spectrometer is used to measure the PAHs and obtain fluorescence spectra. The noise is then reduced with the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.
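
    A minimal sketch of the EEMD denoising step, assuming the PyEMD package; the synthetic spectrum, the ensemble settings, and the choice of how many leading IMFs to discard are all illustrative rather than the paper's settings:

      import numpy as np
      from PyEMD import EEMD  # pip install EMD-signal

      # Synthetic fluorescence-like spectrum: one Gaussian emission band plus noise.
      wl = np.linspace(350.0, 550.0, 400)  # wavelength axis, nm
      clean = np.exp(-0.5 * ((wl - 430.0) / 15.0) ** 2)
      noisy = clean + 0.05 * np.random.default_rng(1).standard_normal(wl.size)

      imfs = EEMD(trials=100, noise_width=0.05).eemd(noisy, wl)

      # Instrument noise concentrates in the first (highest-frequency) IMFs;
      # dropping two of them here is an ad hoc choice, not the paper's rule.
      denoised = imfs[2:].sum(axis=0)
      print(f"{len(imfs)} IMFs, residual error {np.abs(denoised - clean).mean():.4f}")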

  11. Generalized empirical Bayesian methods for discovery of differential data in high-throughput biology.

    PubMed

    Hardcastle, Thomas J

    2016-01-15

    High-throughput data are now commonplace in biological research. Rapidly changing technologies and applications mean that novel methods for detecting differential behaviour that account for a 'large P, small n' setting are required at an increasing rate. The development of such methods is, in general, done on an ad hoc basis, requiring further development cycles and leading to a lack of standardization between analyses. We present here a generalized method for identifying differential behaviour within high-throughput biological data through empirical Bayesian methods. This approach is based on our baySeq algorithm for identification of differential expression in RNA-seq data based on a negative binomial distribution, and in paired data based on a beta-binomial distribution. Here we show how the same empirical Bayesian approach can be applied to any parametric distribution, removing the need for lengthy development of novel methods for differently distributed data. Comparisons with existing methods developed to address specific problems in high-throughput biological data show that these generic methods can achieve equivalent or better performance. A number of enhancements to the basic algorithm are also presented to increase flexibility and reduce computational costs. The methods are implemented in the R baySeq (v2) package, available on Bioconductor: http://www.bioconductor.org/packages/release/bioc/html/baySeq.html. Contact: tjh48@cam.ac.uk. Supplementary data are available at Bioinformatics online.

  12. Detection and localization of change points in temporal networks with the aid of stochastic block models

    NASA Astrophysics Data System (ADS)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

    A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We apply five different techniques for change point detection to prototypical temporal networks, both empirical and synthetic. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and recall of the detected change points, we find that the method based on a degree-corrected SBM has better recall properties than the other dedicated methods, especially for sparse networks and smaller sliding time window widths.

  13. An empirical method for computing leeside centerline heating on the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Helms, V. T., III

    1981-01-01

    An empirical method is presented for computing top centerline heating on the Space Shuttle Orbiter at simulated reentry conditions. It is shown that the Shuttle's top centerline can be thought of as being under the influence of a swept cylinder flow field. The effective geometry of the flow field, as well as top centerline heating, are directly related to oil-flow patterns on the upper surface of the fuselage. An empirical turbulent swept cylinder heating method was developed based on these considerations. The method takes into account the effects of the vortex-dominated leeside flow field without actually having to compute the detailed properties of such a complex flow. The heating method closely predicts experimental heat-transfer values on the top centerline of a Shuttle model at Mach numbers of 6 and 10 over a wide range in Reynolds number and angle of attack.

  14. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models, rather than the empirical attenuation relationships used in PSHA, to determine ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where sufficient tsunami runup data are available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary means of establishing tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
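
    The Monte Carlo route can be illustrated with a toy hazard-curve computation; the Poisson event rate, the truncated Gutenberg-Richter magnitude law, and the log-normal stand-in for a numerical propagation model are all invented for illustration:

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical source model: Poisson rate of tsunamigenic earthquakes and a
      # truncated Gutenberg-Richter magnitude law (all numbers illustrative).
      rate_per_year, b, m_min, m_max = 0.1, 1.0, 7.0, 9.0
      n_events = 100_000
      u = rng.random(n_events)
      mags = m_min - np.log10(1.0 - u * (1.0 - 10.0 ** (-b * (m_max - m_min)))) / b

      # Placeholder runup model standing in for a numerical propagation code:
      # log-normal runup whose median grows with magnitude.
      runup = np.exp(rng.normal(0.8 * (mags - 7.0), 0.5))

      # Hazard curve: annual rate at which each runup threshold is exceeded.
      for h in np.linspace(1.0, 10.0, 10):
          print(f"runup > {h:4.1f} m : {rate_per_year * np.mean(runup > h):.2e} / yr")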

  15. Evidence-based Nursing Education - a Systematic Review of Empirical Research

    PubMed Central

    Reiber, Karin

    2011-01-01

    The project „Evidence-based Nursing Education – Preparatory Stage“, funded by the Landesstiftung Baden-Württemberg within the programme Impulsfinanzierung Forschung (Funding to Stimulate Research), aims to collect information on current research concerned with nursing education and to process existing data. The results of empirical research that has already been carried out were systematically evaluated with the aim of identifying further topics, fields and matters of interest for empirical research in nursing education. In the course of the project, the available empirical studies on nursing education were scientifically analysed and systematised. The over-arching aim of the evidence-based training approach (which extends beyond the aims of this project) is the conception, organisation and evaluation of vocational training and educational processes in the caring professions on the basis of empirical data. The following contribution first provides a systematic, theoretical link to the over-arching reference framework, as the evidence-based approach is adapted from thematically related specialist fields. The research design of the project is oriented towards criteria introduced from a selection of studies and carries out a two-stage systematic review of the selected studies. As a result, the current status of research in nursing education, as well as its organisation and structure, and questions relating to specialist training and comparative education are introduced and discussed. Finally, the empirical research on nursing training is critically appraised as a complementary element in educational theory/psychology of learning and in the ethical tradition of research. This contribution aims, on the one hand, to derive and describe the methods used, and to introduce the steps followed in gathering and evaluating the data. On the other hand, it is intended to give a systematic overview of empirical research work in nursing education. In order to preserve a holistic view of the research field and methods, detailed individual findings are not included. PMID:21818237

  16. An Empirical Method Permitting Rapid Determination of the Area, Rate and Distribution of Water-Drop Impingement on an Airfoil of Arbitrary Section at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Bergrun, N. R.

    1951-01-01

    An empirical method for the determination of the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The procedure represents an initial step toward the development of a method which is generally applicable in the design of thermal ice-prevention equipment for airplane wing and tail surfaces. Results given by the proposed empirical method are expected to be sufficiently accurate for the purpose of heated-wing design, and can be obtained from a few numerical computations once the velocity distribution over the airfoil has been determined. The empirical method presented for incompressible flow is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer. The method developed for incompressible flow is extended to the calculation of area and rate of impingement on straight wings in subsonic compressible flow to indicate the probable effects of compressibility for airfoils at low subsonic Mach numbers.

  17. Empirical conversion of the vertical profile of reflectivity from Ku-band to S-band frequency

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Hong, Yang; Qi, Youcun; Wen, Yixin; Zhang, Jian; Gourley, Jonathan J.; Liao, Liang

    2013-02-01

    This paper presents an empirical method for converting reflectivity from Ku-band (13.8 GHz) to S-band (2.8 GHz) for several hydrometeor species, which facilitates the incorporation of Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) measurements into quantitative precipitation estimation (QPE) products from the U.S. Next-Generation Radar (NEXRAD). The development of empirical dual-frequency relations is based on theoretical simulations, which assume appropriate scattering and microphysical models for liquid and solid hydrometeors (raindrops, snow, and ice/hail). Particle phase, shape, orientation, and density (especially for snow particles) have been considered in applying the T-matrix method to compute the scattering amplitudes. A gamma particle size distribution (PSD) is utilized to model the microphysical properties in the ice region, melting layer, and raining region of precipitating clouds. The variability of PSD parameters is considered to study the characteristics of dual-frequency reflectivity, especially the variations in the radar dual-frequency ratio (DFR). Empirical relations between the DFR and Ku-band reflectivity have been derived for particles in different regions within the vertical structure of precipitating clouds. The reflectivity conversion using the proposed empirical relations has been tested using real data collected by TRMM-PR and a prototype polarimetric WSR-88D (Weather Surveillance Radar-1988 Doppler) radar, KOUN. The processing and analysis of collocated data demonstrate the validity of the proposed empirical relations and substantiate their practical significance for reflectivity conversion, which is essential to the TRMM-based vertical profile of reflectivity correction approach in improving NEXRAD-based QPE.
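
    Applying such a conversion amounts to adding a region-dependent dual-frequency ratio to the Ku-band measurement, Z_S = Z_Ku + DFR(Z_Ku). A minimal Python sketch follows; the polynomial coefficients are placeholders, not the paper's fitted relations:

      import numpy as np

      # Hypothetical quadratic DFR(Z_Ku) relations in dB per hydrometeor region;
      # the paper's fitted coefficients are not reproduced in the abstract.
      DFR_COEFFS = {
          "rain": (0.0, 0.005, 1.0e-4),  # a0 + a1*Z + a2*Z**2
          "snow": (0.5, 0.020, 5.0e-4),
      }

      def ku_to_s(z_ku_dbz, region="rain"):
          """Convert Ku-band reflectivity (dBZ) to S-band via Z_S = Z_Ku + DFR(Z_Ku)."""
          a0, a1, a2 = DFR_COEFFS[region]
          return z_ku_dbz + a0 + a1 * z_ku_dbz + a2 * z_ku_dbz ** 2

      print(ku_to_s(np.array([20.0, 30.0, 40.0]), region="snow"))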

  18. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    NASA Astrophysics Data System (ADS)

    Li, Chengwei; Zhan, Liwei

    2015-12-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter both simulated signals and friction signals. The friction signal between an airplane tire and the runway, recorded during a simulated airplane touchdown, features spikes of various amplitudes plus noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods.
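
    A rough Python sketch of the NIMF construction and mode selection, assuming the PyEMD package and using SciPy's directed Hausdorff distance as a stand-in for the paper's modified Hausdorff distance; the largest-jump split rule is our simplification of the selection step:

      import numpy as np
      from PyEMD import CEEMDAN  # pip install EMD-signal
      from scipy.spatial.distance import directed_hausdorff

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 1.0, 2000)
      noisy = np.sign(np.sin(2 * np.pi * 5 * t)) + 0.3 * rng.standard_normal(t.size)

      imfs = CEEMDAN()(noisy)
      nimfs = noisy - imfs  # row k holds NIMF_k = noisy signal minus IMF_k

      def hausdorff(a, b):
          """Symmetric Hausdorff distance between two curves over t."""
          pa, pb = np.column_stack([t, a]), np.column_stack([t, b])
          return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

      # Similarity of the first NIMF to the later ones; the largest jump is taken
      # here as the noise/signal split (a heuristic reading of the selection rule).
      dists = np.array([hausdorff(nimfs[0], n) for n in nimfs[1:]])
      k = int(np.argmax(np.diff(dists))) + 1
      filtered = imfs[k:].sum(axis=0)  # keep the lower-frequency (signal) modes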

  19. Sport fishing: a comparison of three indirect methods for estimating benefits.

    Treesearch

    Darrell L. Hueth; Elizabeth J. Strong; Roger D. Fight

    1988-01-01

    Three market-based methods for estimating values of sport fishing were compared by using a common data base. The three approaches were the travel-cost method, the hedonic travel-cost method, and the household-production method. A theoretical comparison of the resulting values showed that the results were not fully comparable in several ways. The comparison of empirical...

  20. A theoretical method for the analysis and design of axisymmetric bodies. [flow distribution and incompressible fluids

    NASA Technical Reports Server (NTRS)

    Beatty, T. D.

    1975-01-01

    A theoretical method is presented for the computation of the flow field about an axisymmetric body operating in a viscous, incompressible fluid. A potential flow method was used to determine the inviscid flow field and to yield the boundary conditions for the boundary layer solutions. Boundary layer effects, in the forms of displacement thickness and empirically modeled separation streamlines, are accounted for in subsequent potential flow solutions. This procedure is repeated until the solutions converge. An empirical method was used to determine base drag, allowing configuration drag to be computed.

  1. Delimiting the Unconceived

    NASA Astrophysics Data System (ADS)

    Dawid, Richard

    2018-01-01

    It has been argued in Dawid (String theory and the scientific method, Cambridge University Press, Cambridge, [4]) that physicists at times generate substantial trust in an empirically unconfirmed theory based on observations that lie beyond the theory's intended domain. A crucial role in the reconstruction of this argument of "non-empirical confirmation" is played by limitations to scientific underdetermination. The present paper discusses the question as to how generic the role of limitations to scientific underdetermination really is. It is argued that assessing such limitations is essential for generating trust in any theory's predictions, be it empirically confirmed or not. The emerging view suggests that empirical and non-empirical confirmation are more closely related to each other than one may expect at first glance.

  3. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, making the algorithm suitable for separating a pure ECG signal from noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated from an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show better denoising and QRS detection performance compared with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  4. Universal Design for Instruction in Postsecondary Education: A Systematic Review of Empirically Based Articles

    ERIC Educational Resources Information Center

    Roberts, Kelly D.; Park, Hye Jin; Brown, Steven; Cook, Bryan

    2011-01-01

    Universal Design for Instruction (UDI) in postsecondary education is a relatively new concept/framework that has generated significant support. The purpose of this literature review was to examine existing empirical research, including qualitative, quantitative, and mixed methods, on the use of UDI (and related terms) in postsecondary education.…

  5. Early Child Disaster Mental Health Interventions: A Review of the Empirical Evidence

    ERIC Educational Resources Information Center

    Pfefferbaum, Betty; Nitiéma, Pascal; Tucker, Phebe; Newman, Elana

    2017-01-01

    Background: The need to establish an evidence base for early child disaster interventions has been long recognized. Objective: This paper presents a descriptive analysis of the empirical research on early disaster mental health interventions delivered to children within the first 3 months post event. Methods: Characteristics and findings of the…

  6. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    NASA Astrophysics Data System (ADS)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of empirically defined model parameters. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning of model parameters based on a systematic evaluation of the performance of the correction. The evaluation was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very weakly illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method considerably reduced the dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that the optimal parameter combination depends on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
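
    The tuning loop can be illustrated with the standard semi-empirical C-correction as a stand-in for ATCOR3's correction model: scan candidate parameter values and keep the one that minimizes criterion (i), the sunlit/shaded reflectance gap. All numbers in this Python sketch are synthetic:

      import numpy as np

      def c_correction(refl, cos_i, sun_zenith_deg, c):
          """Semi-empirical C-correction: rho * (cos(sz) + c) / (cos_i + c)."""
          return refl * (np.cos(np.radians(sun_zenith_deg)) + c) / (cos_i + c)

      def sunlit_shaded_gap(refl, sunlit, shaded):
          """Evaluation criterion (i): mean reflectance gap between slope classes."""
          return abs(refl[sunlit].mean() - refl[shaded].mean())

      # Toy forest pixels whose reflectance depends on local illumination cos_i.
      rng = np.random.default_rng(7)
      cos_i = rng.uniform(0.1, 1.0, 10_000)
      refl = 0.30 * cos_i + 0.02 * rng.standard_normal(cos_i.size)
      sunlit, shaded = cos_i > 0.7, cos_i < 0.3

      # Iterative parameter tuning: keep the c that best matches the two classes
      # (the outlier criterion (ii) is omitted for brevity).
      best_c = min(np.linspace(0.05, 2.0, 40),
                   key=lambda c: sunlit_shaded_gap(
                       c_correction(refl, cos_i, 45.0, c), sunlit, shaded))
      print(f"tuned c = {best_c:.2f}")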

  7. Usability Evaluation of a Web-Based Learning System

    ERIC Educational Resources Information Center

    Nguyen, Thao

    2012-01-01

    The paper proposes a contingent, learner-centred usability evaluation method and a prototype tool of such systems. This is a new usability evaluation method for web-based learning systems using a set of empirically-supported usability factors and can be done effectively with limited resources. During the evaluation process, the method allows for…

  8. Intrinsic fluorescence of protein in turbid media using empirical relation based on Monte Carlo lookup table

    NASA Astrophysics Data System (ADS)

    Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu

    2017-03-01

    Protein fluorescence has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of the fluorescence emission is affected by absorbers and scatterers in tissue, which may lead to errors in estimating the exact protein content. Extraction of intrinsic fluorescence from measured fluorescence has been achieved by different methods; among them, Monte Carlo based methods yield the highest accuracy. In this work, we have generated a lookup table for Monte Carlo simulation of fluorescence emission by protein and fitted the generated lookup table with an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein in real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.

  9. Measurement invariance study of the training satisfaction questionnaire (TSQ).

    PubMed

    Sanduvete-Chaves, Susana; Holgado-Tello, F Pablo; Chacón-Moscoso, Salvador; Barbero-García, M Isabel

    2013-01-01

    This article presents an empirical measurement invariance study in the substantive area of satisfaction evaluation in training programs. Specifically, it (I) provides an empirical solution to the lack of explicit measurement models of satisfaction scales, offering a way of analyzing and operationalizing the substantive theoretical dimensions; (II) outlines and discusses the analytical consequences of considering the effects of categorizing supposedly continuous variables, which are not usually taken into account; (III) presents empirical results from a measurement invariance study based on 5,272 participants' responses to a training satisfaction questionnaire in three different organizations and in two different training methods, taking into account the factor structure of the measured construct and the ordinal nature of the recorded data; and (IV) describes the substantive implications in the area of training satisfaction evaluation, such as the usefulness of the training satisfaction questionnaire to measure satisfaction in different organizations and different training methods. It also discusses further research based on these findings.

  10. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    Present-day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. The output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, that is, can one predict the behaviour of a sedimentary system? If one can, the empirical/deductive method has a chance; if one cannot, that method is bound to fail. The fundamental problem to solve is therefore: how does one predict the behaviour of a sedimentary system? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; the empirical method seems to have been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that different parameters can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It is easily argued that any interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. It is just one example from the present-day geological world, and it is not unique: even the alternative methods that criticise sequence stratigraphy depart from the same erroneous assumptions and do not solve the fundamental issue at the base of the problem. That issue is straightforward and obvious: a sedimentary system is inherently four-dimensional (3 spatial dimensions + 1 temporal dimension), and any method using fewer dimensions is bound to fail to describe its evolution. It is indicative of the present-day geological world that such fundamental issues are overlooked; the only reason one can point to is the so-called "rationality" of today's society. Simple common sense leads to the conclusion that in this case the empirical method is bound to fail and that the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences such as physics and mathematics, and for applied sciences such as engineering, but not for geology, a science that was traditionally descriptive and jumped directly to empirical science, skipping the stage of theoretical science. I argue that this gap of theoretical geology is left open and needs to be filled. Every discipline in geology lacks a theoretical base, a base that can only be built by the theoretical/inductive approach and cannot possibly be built by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems in geology.

  11. Development of vulnerability curves to typhoon hazards based on insurance policy and claim dataset

    NASA Astrophysics Data System (ADS)

    Mo, Wanmei; Fang, Weihua; Li, Xinze; Wu, Peng; Tong, Xingwei

    2016-04-01

    Vulnerability refers to the characteristics and circumstances of an exposure that make it susceptible to the effects of certain hazards. It can be divided into physical, social, economic, and environmental vulnerability. Physical vulnerability indicates the potential physical damage to exposure caused by natural hazards. Vulnerability curves, which quantify the loss ratio against hazard intensity with a horizontal axis for the intensity and a vertical axis for the mean damage ratio (MDR), are essential to vulnerability assessment and the quantitative evaluation of disasters. Fragility refers to the probability of diverse damage states under different hazard intensities, revealing a characteristic of the exposure; fragility curves are often used to quantify the probability that a given set of exposures reaches or exceeds a certain damage state. The development of quantitative fragility and vulnerability curves is the basis of catastrophe modeling. Generally, methods for quantitative fragility and vulnerability assessment can be categorized into empirical, analytical, and expert-judgment-based ones. The empirical method is one of the most popular, but it relies heavily on the availability and quality of historical hazard and loss datasets, which has always been a great challenge. The analytical method is usually based on engineering experiments; it is time-consuming and lacks built-in validation, so its credibility is also widely criticized. Expert-judgment-based methods are effective in the absence of data, but the results can be so subjective that the uncertainty is likely to be underestimated. In this study, we present fragility and vulnerability curves developed with the empirical method, based on simulated historical typhoon wind, rainfall, and induced flood, together with insurance policy and claim datasets from more than 100 historical typhoon events. First, an insurance exposure classification system is built according to structure type, occupation type, and insurance coverage. Then an MDR estimation method that accounts for insurance policy structure and claim information is proposed and validated. Following that, fragility and vulnerability curves of the major exposure types for construction, homeowner, and enterprise property insurance are fitted with empirical functions based on the historical dataset. The results of this study can not only help in understanding catastrophe risk and managing insured disaster risks, but can also be applied in other disaster risk reduction efforts.
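
    The curve-fitting step reduces to least-squares estimation of a parametric MDR-intensity function; the Python sketch below uses a logistic form and synthetic (wind speed, MDR) pairs, neither of which is taken from the study:

      import numpy as np
      from scipy.optimize import curve_fit

      # Synthetic (wind speed, mean damage ratio) pairs of the kind that claim
      # data would yield; these numbers are not from the study's dataset.
      wind = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0])  # m/s
      mdr = np.array([0.001, 0.004, 0.012, 0.035, 0.090, 0.180, 0.320, 0.450])

      def vuln(v, v_half, k):
          """Logistic vulnerability curve: MDR rises from 0 toward 1 with intensity."""
          return 1.0 / (1.0 + np.exp(-k * (v - v_half)))

      (v_half, k), _ = curve_fit(vuln, wind, mdr, p0=(60.0, 0.1))
      print(f"fitted v_half = {v_half:.1f} m/s, predicted MDR at 60 m/s = "
            f"{vuln(60.0, v_half, k):.2f}")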

  12. Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference

    USGS Publications Warehouse

    Olea, R.A.; Pardo-Iguzquiza, E.

    2011-01-01

    The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and the third is an empirical sample consisting of actual rain-gauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions of estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap.
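
    The resampling idea is compact: factor the modeled covariance matrix once, then push i.i.d. normal draws through the factor so every resample inherits the spatial correlation. A minimal Python sketch with an exponential covariance model (Cholesky playing the role of the symmetric LU factor; parameter values illustrative):

      import numpy as np

      rng = np.random.default_rng(11)

      # Sample locations and an exponential covariance model (parameters fitted
      # to the data in practice; illustrative values here).
      x = rng.uniform(0.0, 100.0, 50)
      sill, a_range = 1.0, 30.0
      h = np.abs(x[:, None] - x[None, :])          # pairwise distances
      C = sill * np.exp(-3.0 * h / a_range)        # covariance matrix

      # Factoring C = L L^T (Cholesky) lets L @ z with z ~ N(0, I) reproduce
      # the modeled spatial correlation in every resample.
      L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))
      resamples = L @ rng.standard_normal((x.size, 1000))

      # Each column is one correlated resample; computing an empirical
      # semivariogram per column yields bootstrap percentile intervals.
      print(resamples.shape)  # (50, 1000)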

  13. Analyzing Interactions by an IIS-Map-Based Method in Face-to-Face Collaborative Learning: An Empirical Study

    ERIC Educational Resources Information Center

    Zheng, Lanqin; Yang, Kaicheng; Huang, Ronghuai

    2012-01-01

    This study proposes a new method named the IIS-map-based method for analyzing interactions in face-to-face collaborative learning settings. This analysis method is conducted in three steps: firstly, drawing an initial IIS-map according to collaborative tasks; secondly, coding and segmenting information flows into information items of IIS; thirdly,…

  14. Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches

    NASA Technical Reports Server (NTRS)

    Farassat, Fereidoun; Casper, Jay H.

    2006-01-01

    In this paper, we present a critical survey of current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method, and the fully numerical method. It is argued that for the immediate needs of the aircraft industry, the semi-empirical method based on a recent high-quality acoustic database is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface (FW-Hpds) has advanced considerably and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity; based on current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and the propagation methods need further development for this method to become useful. Nonetheless, the authors propose that methods based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be pursued. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.

  15. A New Strategy for ECG Baseline Wander Elimination Using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Shahbakhti, Mohammad; Bagheri, Hamed; Shekarchi, Babak; Mohammadi, Somayeh; Naji, Mohsen

    2016-06-01

    Electrocardiogram (ECG) signals may be affected by various artifacts and noises with biological and external sources. Baseline wander (BW) is a low-frequency artifact that may be caused by breathing, body movements, and loose sensor contact. In this paper, a novel method based on empirical mode decomposition (EMD) for removal of baseline noise from the ECG is presented. Compared with other EMD-based methods, the novelty of this research is that it selects the optimal number of decomposition levels for ECG BW denoising using the mean power frequency (MPF), while also reducing processing time. To evaluate the performance of the proposed method, a fifth-order Butterworth high-pass filter (BHPF) with a cut-off frequency of 0.5 Hz and a wavelet approach are applied for comparison. Three performance indices between the pure and filtered signals, signal-to-noise ratio (SNR), mean square error (MSE), and correlation coefficient (CC), are used to assess the presented techniques. The results suggest that the EMD-based method outperforms the other filtering methods.
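
    A minimal Python sketch of the EMD baseline-removal idea, assuming the PyEMD package: decompose, then discard the trailing low-frequency modes that carry the wander. The fixed 0.7 Hz dominant-frequency test below is a simplified stand-in for the paper's MPF-based level selection, and the synthetic ECG is deliberately crude:

      import numpy as np
      from PyEMD import EMD  # pip install EMD-signal

      fs = 360.0
      t = np.arange(0.0, 10.0, 1.0 / fs)
      ecg = np.sin(2 * np.pi * 1.2 * t) ** 63          # crude spiky ECG surrogate
      wander = 0.5 * np.sin(2 * np.pi * 0.25 * t)      # respiratory baseline drift
      noisy = ecg + wander

      imfs = EMD()(noisy)

      def dominant_freq(x):
          spec = np.abs(np.fft.rfft(x))
          return np.fft.rfftfreq(x.size, 1.0 / fs)[np.argmax(spec)]

      # Keep only modes whose dominant frequency sits above the wander band; the
      # fixed 0.7 Hz threshold replaces the paper's MPF-based level selection.
      kept = [imf for imf in imfs if dominant_freq(imf) >= 0.7]
      corrected = np.sum(kept, axis=0)
      print(f"{len(imfs)} modes, {len(imfs) - len(kept)} removed as baseline")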

  16. A Literature Survey of Private Sector Methods of Determining Personal Financial Responsibility

    DTIC Science & Technology

    1988-09-01

    The applicability of private sector methods to the public sector is also discussed. The judgmental and empirical methods are each effective. Their utilization is based upon their respective abilities to minimize cost while achieving the organization’s

  17. Exploring Advertising in Higher Education: An Empirical Analysis in North America, Europe, and Japan

    ERIC Educational Resources Information Center

    Papadimitriou, Antigoni; Blanco Ramírez, Gerardo

    2015-01-01

    This empirical study explores higher education advertising campaigns displayed in five world cities: Boston, New York, Oslo, Tokyo, and Toronto. The study follows a mixed-methods research design relying on content analysis and multimodal semiotic analysis and employs a conceptual framework based on the knowledge triangle of education, research,…

  18. An Empirical Model for the Use of Biglan's Disciplinary Categories. AIR Forum 1979 Paper.

    ERIC Educational Resources Information Center

    Muffo, John A.; Langston, Ira W., IV

    The Biglan method of grouping academic disciplines for comparative purposes is discussed as well as an empirically-based system for making internal comparisons among different academic units. The clusters of disciplines developed by Biglan (pure and applied, soft and hard, life and nonlife) are useful guides in working with data involving…

  19. Communication: Charge-population based dispersion interactions for molecules and materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stöhr, Martin; Michelitsch, Georg S.

    2016-04-21

    We introduce a system-independent method to derive effective atomic C₆ coefficients and polarizabilities in molecules and materials purely from charge population analysis. This enables the use of dispersion-correction schemes in electronic structure calculations without recourse to electron-density partitioning schemes and expands their applicability to semi-empirical methods and tight-binding Hamiltonians. We show that the accuracy of our method is on par with established electron-density partitioning based approaches in describing intermolecular C₆ coefficients as well as dispersion energies of weakly bound molecular dimers, organic crystals, and supramolecular complexes. We showcase the utility of our approach by incorporating the recently developed many-body dispersion method [Tkatchenko et al., Phys. Rev. Lett. 108, 236402 (2012)] into the semi-empirical density functional tight-binding method and propose the latter as a viable technique to study hybrid organic-inorganic interfaces.

  20. Analysis of Vibration and Noise of Construction Machinery Based on Ensemble Empirical Mode Decomposition and Spectral Correlation Analysis Method

    NASA Astrophysics Data System (ADS)

    Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan

    In order to analyze the effect of engine vibration on cab noise of construction machinery across multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. First, the intrinsic mode functions (IMFs) of the vibration and noise signals are obtained by the EEMD method, and the IMFs that occupy the same frequency bands are selected. Second, the spectral correlation coefficients between the selected IMFs are calculated, identifying the main frequency bands in which engine vibration has a significant impact on cab noise. Third, the dominant frequencies are picked out and analyzed by spectral analysis. The study shows that the main frequency bands and dominant frequencies in which engine vibration has a serious impact on cab noise can be identified effectively by the proposed method, which provides effective guidance for noise reduction of construction machinery.
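
    A condensed Python sketch of the pipeline, assuming the PyEMD package: decompose both signals with EEMD, pair IMFs by index as a simplification of the paper's same-frequency-band matching, and correlate their amplitude spectra; all signals are synthetic:

      import numpy as np
      from PyEMD import EEMD  # pip install EMD-signal

      fs = 2000.0
      t = np.arange(0.0, 2.0, 1.0 / fs)
      rng = np.random.default_rng(9)
      vibration = (np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
                   + 0.1 * rng.standard_normal(t.size))
      cab_noise = (0.8 * np.sin(2 * np.pi * 30 * t + 0.4)
                   + 0.1 * rng.standard_normal(t.size))

      eemd = EEMD(trials=50)
      imfs_v, imfs_n = eemd.eemd(vibration, t), eemd.eemd(cab_noise, t)

      # Correlate the amplitude spectra of index-paired IMFs: a high coefficient
      # flags a band in which vibration plausibly drives the cab noise.
      freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
      for k in range(min(len(imfs_v), len(imfs_n))):
          sv, sn = np.abs(np.fft.rfft(imfs_v[k])), np.abs(np.fft.rfft(imfs_n[k]))
          print(f"IMF{k + 1}: ~{freqs[np.argmax(sv)]:6.1f} Hz, "
                f"spectral corr {np.corrcoef(sv, sn)[0, 1]:+.2f}")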

  1. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351

  2. Asymmetric MF-DCCA method based on risk conduction and its application in the Chinese and foreign stock markets

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Han, Yan; Li, Qingchen; Xu, Wei

    2017-02-01

    The acceleration of economic globalization has gradually revealed linkages among the stock markets of various countries, producing a risk conduction effect. An asymmetric MF-DCCA method based on the direction of risk conduction (DMF-ADCCA) is constructed from the traditional MF-DCCA. To ensure that the empirical results are objective and robust, this study applies the DMF-ADCCA method and nonlinear Granger causality tests to stock index data of China, the US, Germany, India, and Brazil from January 2011 to September 2014 to study the asymmetric cross-correlation between domestic and foreign stock markets. Empirical results indicate the existence of a bidirectional conduction effect between domestic and foreign stock markets, with a greater degree of influence from foreign markets on the domestic market than from the domestic market on foreign markets.
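
    The symmetric building block behind such analyses can be sketched briefly: the DCCA cross-correlation coefficient of Zebende (2011), which DMF-ADCCA extends with q-order moments and direction-dependent detrending (that extension is not reproduced here):

      import numpy as np

      def dcca_rho(x, y, s):
          """DCCA cross-correlation coefficient rho(s) of Zebende (2011)."""
          X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
          t = np.arange(s)
          f_xy = f_xx = f_yy = 0.0
          for k in range(len(X) // s):
              seg = slice(k * s, (k + 1) * s)
              rx = X[seg] - np.polyval(np.polyfit(t, X[seg], 1), t)  # detrended box
              ry = Y[seg] - np.polyval(np.polyfit(t, Y[seg], 1), t)
              f_xy += np.mean(rx * ry)
              f_xx += np.mean(rx ** 2)
              f_yy += np.mean(ry ** 2)
          return f_xy / np.sqrt(f_xx * f_yy)

      rng = np.random.default_rng(2)
      common = rng.standard_normal(4000)              # shared "risk" component
      x = common + 0.3 * rng.standard_normal(4000)    # two index return series
      y = common + 0.3 * rng.standard_normal(4000)
      print([round(dcca_rho(x, y, s), 2) for s in (16, 64, 256)])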

  3. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    PubMed

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy (¹H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired ¹H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for ¹H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The method was applied to both simulated data and in-vivo ¹H-MRS signals of the human brain. The results demonstrate the efficiency of the proposed method in removing the baseline from ¹H-MRS signals.

  4. Integrating biological knowledge into variable selection: an empirical Bayes approach with an application in cancer biology

    PubMed Central

    2012-01-01

    Background An important question in the analysis of biochemical data is that of identifying subsets of molecular variables that may jointly influence a biological response. Statistical variable selection methods have been widely used for this purpose. In many settings, it may be important to incorporate ancillary biological information concerning the variables of interest. Pathway and network maps are one example of a source of such information. However, although ancillary information is increasingly available, it is not always clear how it should be used nor how it should be weighted in relation to primary data. Results We put forward an approach in which biological knowledge is incorporated using informative prior distributions over variable subsets, with prior information selected and weighted in an automated, objective manner using an empirical Bayes formulation. We employ continuous, linear models with interaction terms and exploit biochemically-motivated sparsity constraints to permit exact inference. We show an example of priors for pathway- and network-based information and illustrate our proposed method on both synthetic response data and by an application to cancer drug response data. Comparisons are also made to alternative Bayesian and frequentist penalised-likelihood methods for incorporating network-based information. Conclusions The empirical Bayes method proposed here can aid prior elicitation for Bayesian variable selection studies and help to guard against mis-specification of priors. Empirical Bayes, together with the proposed pathway-based priors, results in an approach with a competitive variable selection performance. In addition, the overall procedure is fast, deterministic, and has very few user-set parameters, yet is capable of capturing interplay between molecular players. The approach presented is general and readily applicable in any setting with multiple sources of biological prior knowledge. PMID:22578440

  5. Compensation of hospital-based physicians.

    PubMed Central

    Steinwald, B

    1983-01-01

    This study is concerned with methods of compensating hospital-based physicians (HBPs) in five medical specialties: anesthesiology, pathology, radiology, cardiology, and emergency medicine. Data on 2232 nonfederal, short-term general hospitals came from a mail questionnaire survey conducted in Fall 1979. The data indicate that numerous compensation methods exist but that these methods, without much loss of precision, can be reduced to salary, percentage of department revenue, and fee-for-service. When HBPs are compensated by salary or percentage methods, most patient billing is conducted by the hospital. In contrast, most fee-for-service HBPs bill their patients directly. Determinants of HBP compensation methods are investigated via multinomial logit analysis, which indicates that the choice of HBP compensation method is sensitive to a number of hospital characteristics and attributes of both the hospital and physicians' services markets. The empirical findings are discussed in light of past conceptual and empirical research on physician compensation, and current policy issues in the health services sector. PMID:6841112

  6. Nonlinear Model Reduction in Power Systems by Balancing of Empirical Controllability and Observability Covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Wang, Jianhui; Liu, Hui

    In this paper, nonlinear model reduction for power systems is performed by the balancing of empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is directly dealt with as a nonlinear system. A transformation is found to balance the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that by using the proposed model reduction the calculation efficiency can be greatly improved; at the same time, the obtained state trajectories are close to those for directly simulating the whole system or partitioning the system while not performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method can guarantee higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.

  7. An empirically based model for knowledge management in health care organizations.

    PubMed

    Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita

    2016-01-01

    Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research to describe KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM. The intention is to provide insight for future studies and model development for effective KM implementation in HCOs. A qualitative methods approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify types of knowledge important in these roles plus current information-seeking behaviors/needs and (b) in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external), and knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of organizational processes.

  8. Sediment yield estimation in mountain catchments of the Camastra reservoir, southern Italy: a comparison among different empirical methods

    NASA Astrophysics Data System (ADS)

    Lazzari, Maurizio; Danese, Maria; Gioia, Dario; Piccarreta, Marco

    2013-04-01

    Sedimentary budget estimation is an important topic for both the scientific community and society, because it is crucial to understanding the dynamics of orogenic belts and to many practical problems, such as soil conservation and sediment accumulation in reservoirs. Estimates of sediment yield or denudation rates in southern-central Italy are generally obtained by simple empirical relationships based on statistical regression between geomorphic parameters of the drainage network and the measured suspended sediment yield at the outlet of several drainage basins, or through the use of models based on the sediment delivery ratio or on soil loss equations. In this work, we perform a study of catchment dynamics and an estimation of sediment yield for several mountain catchments of the central-western sector of the Basilicata region, southern Italy. Sediment yield estimation has been obtained through both an indirect estimation of suspended sediment yield based on the Tu index (mean annual suspended sediment yield; Ciccacci et al., 1980) and the application of the RUSLE (Renard et al., 1997) and USPED (Mitasova et al., 1996) empirical methods. The preliminary results indicate a consistent difference between the RUSLE and USPED methods and the estimation based on the Tu index; a critical analysis of the results has been carried out considering also the present-day spatial distribution of erosion, transport and depositional processes in relation to the maps obtained from the application of those different empirical methods. The studied catchments drain into an artificial reservoir (the Camastra dam), for which a detailed evaluation of the amount of historical sediment storage has been collected. Sediment yield estimates obtained by means of the empirical methods have been compared and checked against historical data of sediment accumulation measured in the artificial reservoir of the Camastra dam. The validation of such estimates of sediment yield at the scale of large catchments using sediment storage in reservoirs provides a good opportunity: (i) to test the reliability of the empirical methods used to estimate the sediment yield; (ii) to investigate catchment dynamics and their spatial and temporal evolution in terms of erosion, transport and deposition. References: Ciccacci S., Fredi F., Lupia Palmieri E., Pugliese F., 1980. Contributo dell'analisi geomorfica quantitativa alla valutazione dell'entità dell'erosione nei bacini fluviali. Bollettino della Società Geologica Italiana 99: 455-516. Mitasova H., Hofierka J., Zlocha M., Iverson L.R., 1996. Modeling topographic potential for erosion and deposition using GIS. International Journal of Geographical Information Systems 10: 629-641. Renard K.G., Foster G.R., Weesies G.A., McCool D.K., Yoder D.C., 1997. Predicting soil erosion by water: a guide to conservation planning with the Revised Universal Soil Loss Equation (RUSLE), USDA-ARS, Agricultural Handbook No. 703.
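
    The RUSLE step itself is a product of factors; the short Python sketch below shows the arithmetic with illustrative factor values (not those calibrated for the Camastra catchments), and a sediment delivery ratio would still be needed to turn gross soil loss into catchment sediment yield:

      # Minimal RUSLE arithmetic: A = R * K * LS * C * P is the mean annual soil
      # loss (t ha^-1 yr^-1); factor values below are illustrative only.
      R = 1500.0   # rainfall erosivity, MJ mm ha^-1 h^-1 yr^-1
      K = 0.035    # soil erodibility, t h MJ^-1 mm^-1
      LS = 2.1     # slope length-steepness factor (dimensionless)
      C = 0.12     # cover-management factor
      P = 1.0      # support-practice factor (no practices)

      A = R * K * LS * C * P
      print(f"predicted soil loss: {A:.1f} t/ha/yr")  # 13.2 t/ha/yr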

  9. Computer Model of the Empirical Knowledge of Physics Formation: Coordination with Testing Results

    ERIC Educational Resources Information Center

    Mayer, Robert V.

    2016-01-01

    The use of the method of imitational modeling to study the formation of empirical knowledge in the pupil's consciousness is discussed. The offered model is based on a division of the physical facts into three categories: 1) the facts established in everyday life; 2) the facts which the pupil can experimentally establish at a physics lesson; 3) the facts which…

  10. Pulling It Together: Using Integrative Assignments as Empirical Direct Measures of Student Learning for Learning Community Program Assessment

    ERIC Educational Resources Information Center

    Huerta, Juan Carlos; Sperry, Rita

    2013-01-01

    This article outlines a systematic and manageable method for learning community program assessment based on collecting empirical direct measures of student learning. Developed at Texas A&M University--Corpus Christi where all full-time, first-year students are in learning communities, the approach ties integrative assignment design to a rubric…

  11. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to obtain an uncertainty estimate for the difference between the behaviors of the model and the system under test.
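
    One common form of statistical emulation is Gaussian process regression over the flight envelope. The scikit-learn sketch below, with an invented two-parameter response surface, shows how the emulator's predictive spread can point to the most informative next simulation run; it is a generic illustration, not the authors' framework:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Invented response surface over two flight-envelope parameters, standing in
      # for an expensive closed-loop simulation of the adaptive controller.
      rng = np.random.default_rng(13)
      X = rng.uniform(0.0, 1.0, (40, 2))              # sampled test conditions
      y = np.sin(3.0 * X[:, 0]) * np.exp(-X[:, 1]) + 0.01 * rng.standard_normal(40)

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
      gp.fit(X, y)

      # The emulator's posterior spread tells the analyst where the next
      # simulation run would be most informative.
      grid = rng.uniform(0.0, 1.0, (2000, 2))
      mean, std = gp.predict(grid, return_std=True)
      print("most uncertain test condition:", grid[np.argmax(std)])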

  13. A discrete element method-based approach to predict the breakage of coal

    DOE PAGES

    Gupta, Varun; Sun, Xin; Xu, Wei; ...

    2017-08-05

    Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.

  14. Species delimitation using Bayes factors: simulations and application to the Sceloporus scalaris species group (Squamata: Phrynosomatidae).

    PubMed

    Grummer, Jared A; Bryson, Robert W; Reeder, Tod W

    2014-03-01

    Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
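
    As a minimal illustration of the comparison step (not the marginal-likelihood estimation itself, which requires full Bayesian phylogenetic runs), the sketch below computes a Bayes factor from two hypothetical log marginal-likelihood estimates, such as those a PS or SS analysis might produce, and grades it on the 2 ln(BF) scale of Kass and Raftery; all numbers are invented.

    ```python
    # Hypothetical log marginal-likelihood estimates for two competing
    # species delimitation models (e.g., from stepping-stone analyses).
    log_ml_split = -14230.8  # lineages treated as separate species
    log_ml_lump = -14245.2   # lineages lumped into one species

    # Bayes factor of "split" over "lump", kept on the log scale to
    # avoid numerical under/overflow.
    log_bf = log_ml_split - log_ml_lump
    two_ln_bf = 2.0 * log_bf  # Kass & Raftery (1995) interpretation scale

    if two_ln_bf > 10:
        strength = "very strong"
    elif two_ln_bf > 6:
        strength = "strong"
    elif two_ln_bf > 2:
        strength = "positive"
    else:
        strength = "not worth more than a bare mention"

    print(f"2 ln(BF) = {two_ln_bf:.1f}: {strength} support for the split model")
    ```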

  15. Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers

    PubMed Central

    García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta

    2016-01-01

    The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine. PMID:28773653
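
    To make the classification step concrete, here is a minimal sketch of training a support vector machine on (span, rock mass rating) pairs with scikit-learn; the records and the rule used to label them are synthetic stand-ins for the Canadian case-history database, not real data.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-ins for the case-history records: each row is
    # (critical span in metres, rock mass rating), labelled 1 = stable.
    rng = np.random.default_rng(0)
    span = rng.uniform(1, 40, 200)
    rmr = rng.uniform(20, 90, 200)
    # Toy labelling rule: wider spans need better rock mass to stay stable.
    stable = (rmr - 1.2 * span + rng.normal(0, 5, 200) > 20).astype(int)

    X = np.column_stack([span, rmr])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X, stable)

    # Probe one design point of the span-vs-RMR chart.
    print(clf.predict([[15.0, 60.0]]))  # 1 -> predicted stable
    ```

    Evaluating such a classifier over a grid of (span, RMR) points reproduces the stability regions of the critical span graph, and retraining after appending new field observations is how local mine conditions can be folded in.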

  16. Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers.

    PubMed

    García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta

    2016-06-29

    The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine.

  17. An empirical Bayes method for updating inferences in analysis of quantitative trait loci using information from related genome scans.

    PubMed

    Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B

    2006-08-01

    Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase confidence in objectivity would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
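
    The following sketch conveys the flavor of such an update as a generic precision-weighted shrinkage of one scan's linkage statistic toward a prior summarized from other scans; it is not the authors' exact formulation, and all values are hypothetical.

    ```python
    import numpy as np

    def eb_update(own_stat, own_var, other_stats, other_vars):
        """Shrink one scan's linkage statistic toward prior evidence
        pooled from other genome scans (generic empirical Bayes
        estimator, invented here for illustration)."""
        other_stats = np.asarray(other_stats, dtype=float)
        other_vars = np.asarray(other_vars, dtype=float)
        # Prior mean from a precision-weighted pool of the other scans;
        # prior variance inflated by between-study heterogeneity.
        w = 1.0 / other_vars
        prior_mean = np.sum(w * other_stats) / np.sum(w)
        prior_var = 1.0 / np.sum(w) + np.var(other_stats, ddof=1)
        # Posterior combination of the scan's own estimate and the prior.
        shrink = prior_var / (prior_var + own_var)
        post = shrink * own_stat + (1.0 - shrink) * prior_mean
        return post, shrink * own_var

    stat, var = eb_update(own_stat=3.1, own_var=0.8,
                          other_stats=[2.2, 2.8, 1.9],
                          other_vars=[0.5, 0.9, 0.7])
    print(f"updated linkage statistic: {stat:.2f} (variance {var:.2f})")
    ```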

  18. Comparing Impact Findings from Design-Based and Model-Based Methods: An Empirical Investigation. NCEE 2017-4026

    ERIC Educational Resources Information Center

    Kautz, Tim; Schochet, Peter Z.; Tilley, Charles

    2017-01-01

    A new design-based theory has recently been developed to estimate impacts for randomized controlled trials (RCTs) and basic quasi-experimental designs (QEDs) for a wide range of designs used in social policy research (Imbens & Rubin, 2015; Schochet, 2016). These methods use the potential outcomes framework and known features of study designs…

  19. Empirical research in service engineering based on AHP and fuzzy methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yanrui; Cao, Wenfu; Zhang, Lina

    2015-12-01

    In recent years, the management consulting industry has been developing rapidly worldwide. Taking a large management consulting company as the research object, this paper establishes an index system for consulting service quality based on a customer satisfaction survey and evaluates the company's service quality using AHP and fuzzy comprehensive evaluation methods.
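
    As an illustration of the AHP step, the sketch below derives priority weights from a hypothetical pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the three criteria and all matrix entries are invented.

    ```python
    import numpy as np

    # Hypothetical pairwise comparisons of three service-quality criteria
    # on Saaty's 1-9 scale; entry [i, j] = importance of criterion i over j.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # AHP priority weights: normalised principal right eigenvector.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)
    cr = ci / 0.58  # Saaty's random index RI for n = 3
    print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
    ```

    The criterion weights would then multiply the fuzzy evaluation matrix obtained from the satisfaction survey in the fuzzy comprehensive evaluation step.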

  20. The Use of a Corpus in Contrastive Studies.

    ERIC Educational Resources Information Center

    Filipovic, Rudolf

    1973-01-01

    Before beginning the Serbocroatian-English Contrastive Project, it was necessary to determine whether to base the analysis on a corpus or on native intuitions. It seemed that the best method would combine the theoretical and the empirical. A translation method based on a corpus of text was adopted. The Brown University "Standard Sample of…

  1. Scaffolding Wiki-Supported Collaborative Learning for Small-Group Projects and Whole-Class Collaborative Knowledge Building

    ERIC Educational Resources Information Center

    Lin, C-Y.; Reigeluth, C. M.

    2016-01-01

    While educators value wikis' potential, wikis may fail to support collaborative constructive learning without careful scaffolding. This article proposes literature-based instructional methods, revised based on two expert instructors' input, presents the collected empirical evidence on the effects of these methods and proposes directions for future…

  2. Fuel consumption modeling in support of ATM environmental decision-making

    DOT National Transportation Integrated Search

    2009-07-01

    The FAA has recently updated the airport terminal area fuel consumption methods used in its environmental models. These methods are based on fitting manufacturers' fuel consumption data to empirical equations. The new fuel consumption metho...

  3. Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.

    PubMed

    Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko

    2017-07-01

    Emotions modulate ECG signals in ways that might affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subject has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using the ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5%, with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation, including testing with other classifiers and with variation in ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.

  4. Fringe-projection profilometry based on two-dimensional empirical mode decomposition.

    PubMed

    Zheng, Suzhen; Cao, Yiping

    2013-11-01

    In 3D shape measurement, because deformed fringes often contain low-frequency information degraded by random noise and background intensity information, a new fringe-projection profilometry is proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method provides partial noise reduction, the background components can be removed to obtain the fundamental components needed to perform the Hilbert transformation to retrieve the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a fully 2D analysis method and considers the relationship between adjacent lines of a fringe pattern. In addition, as the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.

  5. Protein structure refinement using a quantum mechanics-based chemical shielding predictor.

    PubMed

    Bratholm, Lars A; Jensen, Jan H

    2017-03-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1-0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å differences for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural change may be due to force field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this can help increase the accuracy of related approaches, such as QM/MM or linear-scaling approaches, and aid in interpreting protein structural dynamics from QM-derived chemical shifts.

  6. Control Theory and Statistical Generalizations.

    ERIC Educational Resources Information Center

    Powers, William T.

    1990-01-01

    Contrasts modeling methods in control theory to the methods of statistical generalizations in empirical studies of human or animal behavior. Presents a computer simulation that predicts behavior based on variables (effort and rewards) determined by the invariable (desired reward). Argues that control theory methods better reflect relationships to…

  7. Analytical Fuselage and Wing Weight Estimation of Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.

    1996-01-01

    A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and the corresponding actual weights were determined.

  8. Investigation of KDP crystal surface based on an improved bidimensional empirical mode decomposition method

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Yan, Jihong; Chen, Wanqun; An, Shi

    2018-03-01

    This paper proposes a novel spatial frequency analysis method for the investigation of potassium dihydrogen phosphate (KDP) crystal surfaces based on an improved bidimensional empirical mode decomposition (BEMD) method. Aiming to eliminate end effects of the BEMD method and improve the intrinsic mode functions (IMFs) for efficient identification of texture features, a denoising process was embedded in the sifting iteration of the BEMD method. By removing redundant information in the decomposed sub-components of the KDP crystal surface, middle spatial frequencies of the cutting and feeding processes were identified. A comparative study with the power spectral density method, the two-dimensional wavelet transform (2D-WT), and the traditional BEMD method demonstrated that the method developed in this paper can efficiently extract texture features and reveal the gradient development of the KDP crystal surface. Furthermore, the proposed method is a self-adaptive, data-driven technique requiring no prior knowledge, which overcomes shortcomings of the 2D-WT model such as parameter selection. Additionally, the proposed method is a promising tool for online monitoring and optimal control of precision machining processes.

  9. Application of a net-based baseline correction scheme to strong-motion records of the 2011 Mw 9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Wang, Rongjiang; Zhang, Yong; Walter, Thomas R.

    2014-06-01

    The description of static displacements associated with earthquakes is traditionally achieved using GPS, EDM or InSAR data. In addition, displacement histories can be derived from strong-motion records, allowing an improvement of geodetic networks at a high sampling rate and a better physical understanding of earthquake processes. Strong-motion records require a correction procedure appropriate for baseline shifts that may be caused by rotational motion, tilting and other instrumental effects. Common methods use an empirical bilinear correction on the velocity seismograms integrated from the strong-motion records. In this study, we overcome the weaknesses of an empirically based bilinear baseline correction scheme by using a net-based criterion to select the timing parameters. This idea is based on the physical principle that low-frequency seismic waveforms at neighbouring stations are coherent if the interstation distance is much smaller than the distance to the seismic source. For a dense strong-motion network, it is plausible to select the timing parameters so that the correlation coefficient between the velocity seismograms of two neighbouring stations is maximized after the baseline correction. We applied this new concept to the KiK-Net and K-Net strong-motion data available for the 2011 Mw 9.0 Tohoku earthquake. We compared the derived coseismic static displacement with high-quality GPS data, and with the results obtained using empirical methods. The results show that the proposed net-based approach is feasible and more robust than the individual empirical approaches. The outliers caused by unknown problems in the measurement system can be easily detected and quantified.
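
    A simplified sketch of the net-based selection idea follows: a bilinear baseline (two ramps breaking at times t1 and t2, with slopes fitted by least squares) is removed from each velocity seismogram, and the pair (t1, t2) maximizing the correlation between two neighbouring stations is kept. The correction scheme here is cruder than the paper's, and all signals are synthetic.

    ```python
    import numpy as np

    def bilinear_correct(vel, t, t1, t2):
        """Remove a bilinear baseline from a velocity seismogram:
        zero before t1, one ramp on [t1, t2), a second segment after t2."""
        seg1 = (t >= t1) & (t < t2)
        seg2 = t >= t2
        G = np.column_stack([
            np.where(seg1, t - t1, np.where(seg2, t2 - t1, 0.0)),
            np.where(seg2, t - t2, 0.0),
        ])
        slopes, *_ = np.linalg.lstsq(G, vel, rcond=None)
        return vel - G @ slopes

    def best_timing(vel_a, vel_b, t, candidates):
        """Pick (t1, t2) maximising the correlation of the corrected
        seismograms at two neighbouring stations."""
        score = lambda p: np.corrcoef(bilinear_correct(vel_a, t, *p),
                                      bilinear_correct(vel_b, t, *p))[0, 1]
        return max(candidates, key=score)

    # Toy demo: a shared wavelet plus slightly different spurious drifts.
    t = np.linspace(0.0, 60.0, 6001)
    wavelet = np.exp(-((t - 8.0) ** 2)) * np.sin(3.0 * t)
    drift = np.where(t > 10.0, 0.002 * (t - 10.0), 0.0)
    va, vb = wavelet + drift, wavelet + 0.9 * drift
    cands = [(t1, t2) for t1 in (6.0, 8.0, 10.0) for t2 in (20.0, 40.0)]
    print(best_timing(va, vb, t, cands))
    ```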

  10. Liquefaction assessment based on combined use of CPT and shear wave velocity measurements

    NASA Astrophysics Data System (ADS)

    Bán, Zoltán; Mahler, András; Győri, Erzsébet

    2017-04-01

    Soil liquefaction is one of the most devastating secondary effects of earthquakes and can cause significant damage to built infrastructure. For this reason, liquefaction hazard shall be considered in all regions where moderate-to-high seismic activity coincides with saturated, loose, granular soil deposits. Several approaches exist to take this hazard into account, of which the in-situ test based empirical methods are the most commonly used in practice. These methods are generally based on the results of CPT, SPT or shear wave velocity measurements. In more complex or high-risk projects, CPT and VS measurements are often performed at the same location, commonly in the form of seismic CPT. Furthermore, the VS profile determined by surface wave methods can also supplement the standard CPT measurement. However, the combined use of both in-situ indices in one single empirical method is limited. For this reason, the goal of this research was to develop such an empirical method within the framework of simplified empirical procedures, in which the results of CPT and VS measurements are used in parallel and can supplement each other. The combination of two in-situ indices, a small-strain property measurement with a large-strain measurement, can reduce the uncertainty of empirical methods. In the first step, by carefully reviewing the existing liquefaction case history databases, sites were selected where records of both CPT and VS measurements are available. After implementing the necessary corrections on the gathered 98 case histories with respect to fines content, overburden pressure and magnitude, a logistic regression was performed to obtain the probability contours of liquefaction occurrence. Logistic regression is often used to explore the relationship between a binary response and a set of explanatory variables. The occurrence or absence of liquefaction can be considered as the binary outcome, and the equivalent clean sand value of the normalized overburden corrected cone tip resistance (qc1Ncs), the overburden corrected shear wave velocity (VS1), and the magnitude and effective stress corrected cyclic stress ratio (CSR at M=7.5, σv'=1 atm) were considered as input variables. In this case, the graphical representation of the cyclic resistance ratio curve for a given probability is replaced by a surface that separates the liquefaction and non-liquefaction cases.
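
    The regression step can be sketched in a few lines: fit a logistic model on (qc1Ncs, VS1, CSR) triples labelled by liquefaction occurrence, then read off the predicted probability for a new site. The handful of records below are invented placeholders, not the 98 case histories.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented case histories: qc1Ncs, VS1 (m/s), CSR, liquefied (1) or not (0).
    X = np.array([[85, 160, 0.25], [140, 210, 0.22], [60, 140, 0.30],
                  [180, 240, 0.18], [95, 170, 0.28], [200, 260, 0.15]],
                 dtype=float)
    y = np.array([1, 0, 1, 0, 1, 0])

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Probability of liquefaction at a new site; sweeping this over a grid
    # of (qc1Ncs, VS1) values at fixed CSR traces out the probability
    # surface separating liquefaction from non-liquefaction cases.
    print(model.predict_proba([[120, 190, 0.24]])[0, 1])
    ```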

  11. The Objective Borderline method (OBM): a probability-based model for setting up an objective pass/fail cut-off score in medical programme assessments.

    PubMed

    Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim

    2013-05-01

    The decision to pass or fail a medical student is a 'high stakes' one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting pass/fail cut-off scores were compared: the Regression Method, the Borderline Group Method, and the new Objective Borderline Method (OBM). Using Year 5 students' OSCE results from one medical school, we established the pass/fail cut-off scores by the three methods above. The comparison indicated that the pass/fail cut-off scores generated by the OBM were similar to those generated by the more established methods (0.840 ≤ r ≤ 0.998; p < .0001). Based on theoretical and empirical analysis, we suggest that the OBM has advantages over existing methods in that it combines objectivity, realism and a robust empirical basis and, no less importantly, is simple to use.

  12. Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1996-01-01

    In this report the author describes: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of flight path optimization. A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight.

  13. Investigating Measurement Invariance in Computer-Based Personality Testing: The Impact of Using Anchor Items on Effect Size Indices

    ERIC Educational Resources Information Center

    Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N.

    2015-01-01

    A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…

  14. Standardizing lightweight deflectometer modulus measurements for compaction quality assurance : research summary.

    DOT National Transportation Integrated Search

    2017-09-01

    The mechanistic-empirical pavement design method requires the elastic resilient modulus as the key input for characterization of geomaterials. Current density-based QA procedures do not measure resilient modulus. Additionally, the density-based metho...

  15. Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation

    PubMed Central

    De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan

    2017-01-01

    In this paper, we propose a model to characterize infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted-signal multipaths (MP), which are the main cause of error in IR-LPS, and it makes several contributions to mitigation methods. Current approaches are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436

  16. Comparing and Contrasting Consensus versus Empirical Domains

    PubMed Central

    Jason, Leonard A.; Kot, Bobby; Sunnquist, Madison; Brown, Abigail; Reed, Jordan; Furst, Jacob; Newton, Julia L.; Strand, Elin Bolle; Vernon, Suzanne D.

    2015-01-01

    Background: Since the publication of the CFS case definition [1], a number of other criteria have been proposed, including the Canadian Consensus Criteria [2] and the Myalgic Encephalomyelitis: International Consensus Criteria [3]. Purpose: The current study compared these domains, which were developed through consensus methods, to one obtained through more empirical approaches using factor analysis. Methods: Using data mining, we compared and contrasted fundamental features of consensus-based criteria versus empirical latent factors. Results: In general, these approaches found the domain of Fatigue/Post-exertional malaise to best differentiate patients from controls; findings also indicated that the Fukuda et al. criteria had the worst sensitivity and specificity. Conclusions: These outcomes might help both theorists and researchers better determine which fundamental domains to use for the case definition. PMID:26977374

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dierauf, Timothy; Kurtz, Sarah; Riley, Evan

    This paper provides a recommended method for evaluating the AC capacity of a photovoltaic (PV) generating station. It also presents companion guidance on setting the facility's capacity guarantee value. This is a principles-based approach that incorporates plant fundamental design parameters such as loss factors, module coefficients, and inverter constraints. This method has been used to prove contract guarantees for over 700 MW of installed projects. The method is transparent, and the results are deterministic. In contrast, current industry practices incorporate statistical regression where the empirical coefficients may only characterize the collected data. Though these methods may work well when extrapolation is not required, there are other situations where the empirical coefficients may not adequately model actual performance. This proposed Fundamentals Approach method provides consistent results even where regression methods start to lose fidelity.

  18. A discrete element method-based approach to predict the breakage of coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Varun; Sun, Xin; Xu, Wei

    Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been informed by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments. However, the predictive capabilities for new coals and processes are limited. This work presents a Discrete Element Method based computational framework to predict the particle size distribution resulting from the breakage of coal particles, characterized by the coal's physical properties. The effect of certain operating parameters on the breakage behavior of coal particles is also examined.

  19. Two Improved Access Methods on Compact Binary (CB) Trees.

    ERIC Educational Resources Information Center

    Shishibori, Masami; Koyama, Masafumi; Okada, Makoto; Aoe, Jun-ichi

    2000-01-01

    Discusses information retrieval and the use of binary trees as a fast access method for search strategies such as hashing. Proposes new methods based on compact binary trees that provide faster access and more compact storage, explains the theoretical basis, and confirms the validity of the methods through empirical observations. (LRW)

  20. Patterns of Cognitive Strengths and Weaknesses: Identification Rates, Agreement, and Validity for Learning Disabilities Identification

    ERIC Educational Resources Information Center

    Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla K.; Vaughn, Sharon; Tolar, Tammy D.

    2014-01-01

    Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and cross battery assessment (XBA) method. Cognitive assessment…

  1. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    PubMed

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  2. Patterns of Cognitive Strengths and Weaknesses: Identification Rates, Agreement, and Validity for Learning Disabilities Identification

    PubMed Central

    Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla; Vaughn, Sharon; Tolar, Tammy D.

    2014-01-01

    Purpose: Few empirical investigations have evaluated LD identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and the cross-battery assessment (XBA) method. Methods: Cognitive assessment data for 139 adolescents demonstrating inadequate response to intervention were utilized to empirically classify participants as meeting or not meeting PSW LD identification criteria using the two approaches, permitting an analysis of: (1) LD identification rates; (2) agreement between methods; and (3) external validity. Results: LD identification rates varied between the two methods depending upon the cut point for low achievement, with low agreement for LD identification decisions. Comparisons of groups that met and did not meet LD identification criteria on external academic variables were largely null, raising questions of external validity. Conclusions: This study found low agreement and little evidence of validity for LD identification decisions based on PSW methods. An alternative may be to use multiple measures of academic achievement to guide intervention. PMID:24274155

  3. Artificial Intelligence Methods in Computer-Based Instructional Design. The Minnesota Adaptive Instructional System.

    ERIC Educational Resources Information Center

    Tennyson, Robert

    1984-01-01

    Reviews educational applications of artificial intelligence and presents empirically-based design variables for developing a computer-based instruction management system. Taken from a programmatic research effort based on the Minnesota Adaptive Instructional System, variables include amount and sequence of instruction, display time, advisement,…

  4. Appraising the reliability of visual impact assessment methods

    Treesearch

    Nickolaus R. Feimer; Kenneth H. Craik; Richard C. Smardon; Stephen R.J. Sheppard

    1979-01-01

    This paper presents the research approach and selected results of an empirical investigation aimed at the evaluation of selected observer-based visual impact assessment (VIA) methods. The VIA methods under examination were chosen to cover a range of VIA methods currently in use in both applied and research settings. Variation in three facets of VIA methods were...

  5. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although the canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and temporally local multivariate synchronization index (TMSI). The results suggest that the MEMD-CCA achieved significantly higher accuracy in contrast to standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed the filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of our proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
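
    To ground the recognition step, here is a sketch of the plain CCA scoring that MEMD-CCA builds on (without the MEMD sub-band extraction): for each candidate stimulus frequency, sine/cosine references are built and the largest canonical correlation with the multi-channel EEG is taken as the score. The sampling rate, frequencies, and toy signal are invented.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def ssvep_cca_score(eeg, fs, freq, n_harmonics=2):
        """Largest canonical correlation between EEG (samples x channels)
        and sine/cosine references at `freq` and its harmonics."""
        t = np.arange(eeg.shape[0]) / fs
        refs = []
        for h in range(1, n_harmonics + 1):
            refs += [np.sin(2 * np.pi * h * freq * t),
                     np.cos(2 * np.pi * h * freq * t)]
        u, v = CCA(n_components=1).fit_transform(eeg, np.column_stack(refs))
        return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

    # Toy check: a 12 Hz SSVEP buried in noise should score highest at 12 Hz.
    fs, n = 250, 1000
    t = np.arange(n) / fs
    rng = np.random.default_rng(1)
    eeg = np.column_stack([np.sin(2 * np.pi * 12 * t) + rng.normal(0, 1, n)
                           for _ in range(4)])
    for f in (10, 12, 15):
        print(f, round(ssvep_cca_score(eeg, fs, f), 3))
    ```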

  6. Fire risk in San Diego County, California: A weighted Bayesian model approach

    USGS Publications Warehouse

    Kolden, Crystal A.; Weigel, Timothy J.

    2007-01-01

    Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.

  7. Introducing Postphenomenological Research: A Brief and Selective Sketch of Phenomenological Research Methods

    ERIC Educational Resources Information Center

    Aagaard, Jesper

    2017-01-01

    In time, phenomenology has become a viable approach to conducting qualitative studies in education. Popular and well-established methods include descriptive and hermeneutic phenomenology. Based on critiques of the essentialism and receptivity of these two methods, however, this article offers a third variation of empirical phenomenology:…

  8. 75 FR 13745 - Office of Innovation and Improvement Overview Information; Ready To Teach Program-General...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-23

    ... on rigorous, scientifically based research methods to assess the effectiveness of a particular... and programs; and (B) Includes research that-- (i) Employs systematic, empirical methods that draw on... hypotheses and justify the general conclusions drawn; (iii) Relies on measurements or observational methods...

  9. 75 FR 13515 - Office of Innovation and Improvement (OII); Overview Information; Ready-to-Learn Television...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... on rigorous scientifically based research methods to assess the effectiveness of a particular... activities and programs; and (B) Includes research that-- (i) Employs systematic, empirical methods that draw... or observational methods that provide reliable and valid data across evaluators and observers, across...

  10. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Ning; Yang, Jianguo; Zhou, Rui; Liang, Caiping

    2016-04-01

    Knock is one of the major constraints to improve the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding a uniformly distributed and finite white Gaussian noise, the EEMD can preserve signal continuity in different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibilities of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from combustion chamber and the vibration signal measured from cylinder head are investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and vibration signal, even in initial stage of knock. Finally, by comparing the application results with those obtained by short-time Fourier transform (STFT), Wigner-Ville distribution (WVD) and discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.
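
    A minimal sketch of the decomposition step follows, assuming the PyEMD package (`pip install EMD-signal`); the toy pressure trace, the ensemble settings, and the energy-based flag are invented for illustration and are not the paper's detection criterion.

    ```python
    import numpy as np
    from PyEMD import EEMD  # assumes the PyEMD package (EMD-signal on PyPI)

    fs = 20_000  # Hz, hypothetical sampling rate
    t = np.arange(0, 0.05, 1 / fs)
    # Toy trace: slow combustion pressure wave plus a short high-frequency
    # burst standing in for the knock-induced oscillation.
    pressure = np.sin(2 * np.pi * 50 * t)
    burst = (t > 0.02) & (t < 0.025)
    pressure[burst] += 0.3 * np.sin(2 * np.pi * 6000 * t[burst])

    # Ensemble EMD: white noise added over many trials alleviates the
    # mode mixing of plain EMD, as described in the abstract.
    eemd = EEMD(trials=100, noise_width=0.05)
    imfs = eemd(pressure)

    # The knock signature concentrates in the highest-frequency IMF;
    # a simple energy measure of IMF1 can then be thresholded.
    print(f"{imfs.shape[0]} IMFs; energy of IMF1 = {np.sum(imfs[0] ** 2):.3f}")
    ```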

  11. Using change-point models to estimate empirical critical loads for nitrogen in mountain ecosystems.

    PubMed

    Roth, Tobias; Kohli, Lukas; Rihm, Beat; Meier, Reto; Achermann, Beat

    2017-01-01

    To protect ecosystems and their services, the critical load concept has been implemented under the framework of the Convention on Long-range Transboundary Air Pollution (UNECE) to develop effects-oriented air pollution abatement strategies. Critical loads are thresholds below which damaging effects on sensitive habitats do not occur according to current knowledge. Here we use change-point models applied in a Bayesian context to overcome some of the difficulties when estimating empirical critical loads for nitrogen (N) from empirical data. We tested the method using simulated data with varying sample sizes, varying effects of confounding variables, and with varying negative effects of N deposition on species richness. The method was applied to the national-scale plant species richness data from mountain hay meadows and (sub)alpine scrub sites in Switzerland. Seven confounding factors (elevation, inclination, precipitation, calcareous content, aspect as well as indicator values for humidity and light) were selected based on earlier studies examining numerous environmental factors to explain Swiss vascular plant diversity. The estimated critical load confirmed the existing empirical critical load of 5-15 kg N ha^-1 yr^-1 for (sub)alpine scrubs, while for mountain hay meadows the estimated critical load was at the lower end of the current empirical critical load range. Based on these results, we suggest narrowing the critical load range for mountain hay meadows to 10-15 kg N ha^-1 yr^-1. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Patterns Exploration on Patterns of Empirical Herbal Formula of Chinese Medicine by Association Rules

    PubMed Central

    Huang, Li; Yuan, Jiamin; Yang, Zhimin; Xu, Fuping; Huang, Chunhua

    2015-01-01

    Background. In this study, we use association rules to explore the latent rules and patterns of prescribing and adjusting the ingredients of herbal decoctions based on empirical herbal formulas of Chinese Medicine (CM). Materials and Methods. The considerations behind the development of CM prescriptions based on the knowledge of CM doctors are analyzed. The study contained three stages. The first stage was to identify the chief symptoms for a specific empirical herbal formula, which serve as the key indications for herb addition and cancellation. The second stage was to conduct a case study on the empirical CM herbal formula for insomnia, in which doctors add extra ingredients or cancel some of them according to CM syndrome diagnosis. The last stage was to divide the observed cases into an effective group and an ineffective group based on the clinical effect assessed by doctors. The patterns arising during diagnosis and treatment, and the relations between clinical symptoms or indications and herb-choosing principles, were selected by the association rules algorithm. Results. In total, 40 patients were observed in this study: 28 patients were considered effective after treatment and the remaining 12 ineffective. 206 patterns related to clinical indications of Chinese Medicine were checked and screened against each observed case. In the analysis of the effective group, we used the association rules algorithm to select combinations between 28 herbal adjustment strategies of the empirical herbal formula and the 190 patterns of individual clinical manifestations. During this stage, 11 common patterns were eliminated and 5 major symptoms for insomnia remained. 12 association rules were identified, which included 5 herbal adjustment strategies. Conclusion. The association rules method is an effective algorithm for exploring the latent relations between clinical indications and herbal adjustment strategies in the study of empirical herbal formulas. PMID:26495415
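
    The rule-mining step can be sketched with a standard apriori implementation, here assuming the mlxtend package; the one-hot case table, the indication and herb-adjustment names, and the support/confidence thresholds are all invented for illustration.

    ```python
    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    # Invented one-hot table: each row is a treated case, columns are
    # clinical indications and herb-adjustment actions.
    cases = pd.DataFrame({
        "difficulty_falling_asleep": [1, 1, 0, 1, 1, 0],
        "irritability":              [1, 0, 0, 1, 1, 1],
        "night_sweats":              [0, 1, 1, 0, 0, 1],
        "add_suanzaoren":            [1, 1, 0, 1, 1, 0],
        "add_zhimu":                 [0, 1, 1, 0, 0, 1],
    }).astype(bool)

    # Frequent itemsets, then rules linking symptom patterns to actions.
    itemsets = apriori(cases, min_support=0.3, use_colnames=True)
    rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
    print(rules[["antecedents", "consequents", "support", "confidence"]])
    ```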

  13. PolyWaTT: A polynomial water travel time estimator based on Derivative Dynamic Time Warping and Perceptually Important Points

    NASA Astrophysics Data System (ADS)

    Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano

    2018-03-01

    Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct. The locations in which equations are used should have comparable characteristics to the locations from which such equations have been derived. To overcome this barrier, in this work, we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location without the need of adapting or using empirical formulas from other locations. The proposal only uses one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream on the same river). The recorded data from each location generates two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using data from timing parameters, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than empirical formulas.
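
    To make the alignment step concrete, below is a sketch of derivative DTW in the spirit of Keogh and Pazzani: both series are replaced by local derivative estimates, classic DTW aligns them, and the index offsets along the warping path stand in for travel times. The PIP reduction and the polynomial fit are omitted, and the toy series are invented.

    ```python
    import numpy as np

    def derivative(x):
        """Keogh & Pazzani derivative estimate used by DDTW."""
        d = np.empty(len(x))
        d[1:-1] = ((x[1:-1] - x[:-2]) + (x[2:] - x[:-2]) / 2.0) / 2.0
        d[0], d[-1] = d[1], d[-2]
        return d

    def ddtw_path(a, b):
        """Classic DTW on derivative sequences; returns the warping path."""
        da, db = derivative(a), derivative(b)
        n, m = len(da), len(db)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (da[i - 1] - db[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        path, i, j = [], n, m
        while i > 0 and j > 0:  # backtrack the optimal alignment
            path.append((i - 1, j - 1))
            step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
            i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
        return path[::-1]

    # Toy river levels: downstream lags upstream by about 10 samples.
    t = np.linspace(0, 6 * np.pi, 120)
    upstream, downstream = np.sin(t), np.sin(t - 1.58)
    lags = [j - i for i, j in ddtw_path(upstream, downstream)]
    print("median lag (samples):", int(np.median(lags)))  # close to 10
    ```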

  14. Empirical Investigation of Critical Transitions in Paleoclimate

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Gavrilov, A.; Feigin, A.

    2016-12-01

    In this work we apply a new empirical method for the analysis of complex spatially distributed systems to the analysis of paleoclimate data. The method consists of two general parts: (i) revealing the optimal phase-space variables and (ii) constructing an empirical prognostic model from observed time series. The phase-space variable construction is based on decomposing the data into nonlinear dynamical modes; it was successfully applied to the global SST field and allowed us to clearly separate time scales and reveal a climate shift in the observed data interval [1]. The second part, the Bayesian approach to optimal evolution operator reconstruction from time series, is based on representing the evolution operator as a nonlinear stochastic function modeled by artificial neural networks [2,3]. In this work we focus on the investigation of critical transitions, the abrupt changes in climate dynamics, in much longer time scale processes. It is well known that there were a number of critical transitions on different time scales in the past. Here we demonstrate the first results of applying our empirical methods to the analysis of paleoclimate variability. In particular, we discuss the possibility of detecting, identifying and predicting such critical transitions by means of nonlinear empirical modeling using paleoclimate record time series. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510 2. Molkov, Ya. I., Mukhin, D. N., Loskutov, E. M., & Feigin, A. M. (2012). Random dynamical models from time series. Phys. Rev. E, 85(3). 3. Mukhin, D., Kondrashov, D., Loskutov, E., Gavrilov, A., Feigin, A., & Ghil, M. (2015). Predicting Critical Transitions in ENSO models. Part II: Spatially Dependent Models. Journal of Climate, 28(5), 1962-1976. http://doi.org/10.1175/JCLI-D-14-00240.1

  15. Examination of the reliability of the crash modification factors using empirical Bayes method with resampling technique.

    PubMed

    Wang, Jung-Han; Abdel-Aty, Mohamed; Wang, Ling

    2017-07-01

    There have been plenty of studies intended to use different methods, for example, empirical Bayes before-after methods, to obtain accurate estimates of CMFs. All of them make different assumptions about the crash count that would have occurred had there been no treatment. Additionally, another major assumption is that multiple sites share the same true CMF. Under this assumption, the CMF at an individual intersection is randomly drawn from a normally distributed population of CMFs at all intersections. Since CMFs are non-zero values, the population of all CMFs might not follow a normal distribution, and even if it does, the true mean of CMFs at some intersections may be different from that at others. Therefore, a bootstrap method based on before-after empirical Bayes theory was proposed to estimate CMFs without making distributional assumptions. This bootstrap procedure has the added benefit of producing a measure of CMF stability. Furthermore, based on the bootstrapped CMF, a new CMF precision rating method was proposed to evaluate the reliability of CMFs. This study chose 29 urban four-legged intersections, whose traffic control was converted from stop control to signal control, as treated sites. Meanwhile, 124 urban four-legged stop-controlled intersections were selected as reference sites. At first, different safety performance functions (SPFs) were applied to five crash categories, and it was found that each crash category had a different optimal SPF form. Then, the CMFs of these five crash categories were estimated using the bootstrap empirical Bayes method. The results of the bootstrapped method showed that signalization significantly decreased Angle+Left-Turn crashes, and its CMF had the highest precision. In contrast, the CMF for Rear-End crashes was unreliable. For KABCO, KABC, and KAB crashes, the CMFs proved to be reliable for the majority of intersections, but the estimated effect of signalization may not be accurate at some sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
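
    A sketch of the distribution-free idea follows: resampling treated sites with replacement yields a CMF distribution whose percentiles serve as the stability measure, with no normality assumption. The observed and EB-expected counts below are invented, and the estimator is a simplified stand-in for the paper's procedure.

    ```python
    import numpy as np

    def bootstrap_cmf(observed, expected, n_boot=2000, seed=0):
        """CMF point estimate and percentile interval from resampling
        treated sites. `expected` holds the empirical-Bayes predictions
        of crashes had each site not been treated."""
        observed = np.asarray(observed, dtype=float)
        expected = np.asarray(expected, dtype=float)
        rng = np.random.default_rng(seed)
        n = len(observed)
        cmfs = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)  # resample sites, not crashes
            cmfs[b] = observed[idx].sum() / expected[idx].sum()
        point = observed.sum() / expected.sum()
        return point, tuple(np.percentile(cmfs, [2.5, 97.5]))

    # Invented counts at 8 signalised intersections.
    obs = [3, 5, 2, 4, 6, 1, 3, 2]
    exp = [6.1, 7.4, 3.8, 5.2, 8.0, 2.9, 4.4, 3.6]
    cmf, (lo, hi) = bootstrap_cmf(obs, exp)
    print(f"CMF = {cmf:.2f}, 95% interval = ({lo:.2f}, {hi:.2f})")
    ```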

  16. Empirical entropic contributions in computational docking: evaluation in APS reductase complexes.

    PubMed

    Chang, Max W; Belew, Richard K; Carroll, Kate S; Olson, Arthur J; Goodsell, David S

    2008-08-01

    The results from reiterated docking experiments may be used to evaluate an empirical vibrational entropy of binding in ligand-protein complexes. We have tested several methods for evaluating the vibrational contribution to binding of 22 nucleotide analogues to the enzyme APS reductase. These include two cluster size methods that measure the probability of finding a particular conformation, a method that estimates the extent of the local energetic well by looking at the scatter of conformations within clustered results, and an RMSD-based method that uses the overall scatter and clustering of all conformations. We have also directly characterized the local energy landscape by randomly sampling around docked conformations. The simple cluster size method shows the best performance, improving the identification of correct conformations in multiple docking experiments. © 2008 Wiley Periodicals, Inc.

  17. Sensor placement in nuclear reactors based on the generalized empirical interpolation method

    NASA Astrophysics Data System (ADS)

    Argaud, J.-P.; Bouriquet, B.; de Caso, F.; Gong, H.; Maday, Y.; Mula, O.

    2018-06-01

    In this paper, we apply the so-called generalized empirical interpolation method (GEIM) to address the problem of sensor placement in nuclear reactors. This task is challenging due to the accumulation of a number of difficulties like the complexity of the underlying physics and the constraints in the admissible sensor locations and their number. As a result, the placement, still today, strongly relies on the know-how and experience of engineers from different areas of expertise. The present methodology contributes to making this process become more systematic and, in turn, simplify and accelerate the procedure.
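
    As an illustration of the greedy selection, the sketch below runs its pointwise special case (essentially DEIM) on a toy snapshot matrix: each new sensor is placed where the interpolation residual of the next basis mode is largest. Real GEIM works with general sensor functionals and reactor-physics snapshots; everything here is synthetic.

    ```python
    import numpy as np

    def greedy_sensor_placement(snapshots, n_sensors):
        """Greedy interpolation-point selection on a snapshot matrix
        (rows = spatial points, columns = field states)."""
        U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        basis = U[:, :n_sensors]  # reduced basis from the leading modes
        points = [int(np.argmax(np.abs(basis[:, 0])))]
        for k in range(1, n_sensors):
            # Interpolate mode k with the sensors chosen so far, then put
            # the next sensor where the residual is largest.
            c = np.linalg.solve(basis[points, :k], basis[points, k])
            r = basis[:, k] - basis[:, :k] @ c
            points.append(int(np.argmax(np.abs(r))))
        return points

    # Toy field: 200 spatial points, 30 snapshots mixing smooth modes.
    x = np.linspace(0.0, 1.0, 200)
    snaps = np.column_stack([
        np.sin((i % 5 + 1) * np.pi * x) * np.cos(0.1 * i)
        + 0.3 * np.sin(0.5 * np.pi * (i % 3 + 1) * x)
        for i in range(30)
    ])
    print("sensor locations (indices):", greedy_sensor_placement(snaps, 5))
    ```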

  18. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the empirical base of knowledge developed from these results, build an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) the development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells; (2) the investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern; and (3) the development of an error correction and resampling procedure based on error analysis of raster projection.

  19. An epileptic seizures detection algorithm based on the empirical mode decomposition of EEG.

    PubMed

    Orosco, Lorena; Laciar, Eric; Correa, Agustina Garces; Torres, Abel; Graffigna, Juan P

    2009-01-01

    Epilepsy is a neurological disorder that affects around 50 million people worldwide. Seizure detection is an important component in the diagnosis of epilepsy. In this study, the Empirical Mode Decomposition (EMD) method was applied to the development of an automatic epileptic seizure detection algorithm. The algorithm first computes the Intrinsic Mode Functions (IMFs) of the EEG records, then calculates the energy of each IMF and performs the detection based on an energy threshold and a minimum duration decision. The algorithm was tested on 9 invasive EEG records provided and validated by the Epilepsy Center of the University Hospital of Freiburg. In the 90 segments analyzed (39 with epileptic seizures), the sensitivity and specificity obtained with the method were 56.41% and 75.86%, respectively. It can be concluded that EMD is a promising method for epileptic seizure detection in EEG records.
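    A minimal sketch of this energy-threshold scheme is shown below using the third-party PyEMD package; the choice of IMFs, the median-based threshold, and the synthetic test signal are illustrative assumptions rather than the published algorithm's exact settings:

    ```python
    import numpy as np
    from PyEMD import EMD  # third-party package, assumed installed

    def detect_seizure_epochs(eeg, fs, factor=3.0, min_duration_s=1.0):
        """Flag samples whose fast-IMF energy exceeds an adaptive threshold,
        keeping only detections longer than a minimum duration."""
        imfs = EMD().emd(eeg)                    # intrinsic mode functions
        energy = np.sum(imfs[:3] ** 2, axis=0)   # energy of the fastest IMFs
        above = energy > factor * np.median(energy)
        min_len = int(min_duration_s * fs)
        events, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if i - start >= min_len:
                    events.append((start / fs, i / fs))
                start = None
        if start is not None and above.size - start >= min_len:
            events.append((start / fs, above.size / fs))
        return events

    # Synthetic record: background noise plus a high-amplitude burst at 8-12 s
    fs = 256.0
    t = np.arange(0.0, 20.0, 1.0 / fs)
    eeg = np.random.default_rng(2).normal(0.0, 1.0, t.size)
    eeg[int(8 * fs):int(12 * fs)] += 5.0 * np.sin(2 * np.pi * 20 * t[:int(4 * fs)])
    print(detect_seizure_epochs(eeg, fs))
    ```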

  20. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line.

    PubMed

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-09-16

    In order to improve the classification accuracy in recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on the empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to process the original short-circuit fault signals from photoelectric voltage transformers, and the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of the intrinsic mode function (IMF₂) from the three-phase voltage signals processed by EWT. After this, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, a support vector machine (SVM) classifier constructed from the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to represent frequency in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be captured by the LE feature vectors. Together, simulations and experiments on real signals demonstrate the validity and effectiveness of the new approach.
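    The final classification stage maps directly onto a standard multi-class SVM; the sketch below uses scikit-learn with synthetic stand-ins for the three-component local-energy feature vectors (one LE value per phase voltage):

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    # Synthetic LE feature vectors for 10 hypothetical fault classes
    rng = np.random.default_rng(1)
    n_classes, n_per_class = 10, 60
    centers = rng.normal(scale=3.0, size=(n_classes, 3))
    X = np.vstack([c + rng.normal(scale=0.5, size=(n_per_class, 3))
                   for c in centers])
    y = np.repeat(np.arange(n_classes), n_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```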

  2. Validating an operational physical method to compute surface radiation from geostationary satellites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Dhere, Neelkanth G.; Wohlgemuth, John H.

    We developed models to compute global horizontal irradiance (GHI) and direct normal irradiance (DNI) over the last three decades. These models can be classified as empirical or physical based on the approach. Empirical models relate ground-based observations with satellite measurements and use these relations to compute surface radiation. Physical models consider the physics behind the radiation received at the satellite and create retrievals to estimate surface radiation. Furthermore, while empirical methods have traditionally been used for computing surface radiation for the solar energy industry, the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is a physical model that computes DNI and GHI using the visible and infrared channel measurements from a weather satellite. GSIP uses a two-stage scheme that first retrieves cloud properties and then uses those properties in a radiative transfer model to calculate GHI and DNI. Developed for polar orbiting satellites, GSIP has been adapted to NOAA's Geostationary Operational Environmental Satellite series and can run operationally at high spatial resolutions. Our method holds the possibility of creating high-quality datasets of GHI and DNI for use by the solar energy industry. We present an outline of the methodology and results from running the model, as well as a validation study using ground-based instruments.

  3. Testing Differential Effects of Computer-Based, Web-Based and Paper-Based Administration of Questionnaire Research Instruments

    ERIC Educational Resources Information Center

    Hardre, Patricia L.; Crowson, H. Michael; Xie, Kui; Ly, Cong

    2007-01-01

    Translation of questionnaire instruments to digital administration systems, both self-contained and web-based, is widespread and increasing daily. However, the literature is lean on controlled empirical studies investigating the potential for differential effects of administrative methods. In this study, two university student samples were…

  4. Protein structure refinement using a quantum mechanics-based chemical shielding predictor

    PubMed Central

    2017-01-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1–0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD from the initial structures by an average of 2.0 Å, with >2.0 Å differences for six proteins. In four of these cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural change may be due to force field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this can help increase the accuracy of related approaches, such as QM/MM or linear scaling approaches, and the interpretation of protein structural dynamics from QM-derived chemical shifts. PMID:28451325

  5. Bi-dimensional empirical mode decomposition based fringe-like pattern suppression in polarization interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Ren, Wenyi; Cao, Qizhi; Wu, Dan; Jiang, Jiangang; Yang, Guoan; Xie, Yingge; Wang, Guodong; Zhang, Sheqi

    2018-01-01

    Many observers using interference imaging spectrometers are plagued by the fringe-like pattern (FP) that occurs at optical wavelengths in the red and near-infrared region. It complicates data processing steps such as spectrum calibration and information retrieval. An adaptive method based on bi-dimensional empirical mode decomposition was developed to suppress the nonlinear FP in a polarization interference imaging spectrometer. The FP and the corrected interferogram were separated effectively. Meanwhile, the stripes introduced by the CCD mosaic were suppressed, and the nonlinear interferogram background removal and spectrum distortion correction were implemented as well. The method provides an alternative way to adaptively suppress the nonlinear FP without prior experimental data or knowledge. This approach is potentially a powerful tool in the fields of Fourier transform spectroscopy, holographic imaging, optical measurement based on moiré fringes, etc.

  6. Experiential Learning Methods, Simulation Complexity and Their Effects on Different Target Groups

    ERIC Educational Resources Information Center

    Kluge, Annette

    2007-01-01

    This article empirically supports the thesis that there is no clear and unequivocal argument in favor of simulations and experiential learning. Instead the effectiveness of simulation-based learning methods depends strongly on the target group's characteristics. Two methods of supporting experiential learning are compared in two different complex…

  7. Equating Scores from Adaptive to Linear Tests

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2006-01-01

    Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…

  8. Local Linear Observed-Score Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.

    2011-01-01

    Two methods of local linear observed-score equating for use with anchor-test and single-group designs are introduced. In an empirical study, the two methods were compared with the current traditional linear methods for observed-score equating. As a criterion, the bias in the equated scores relative to true equating based on Lord's (1980)…

  9. Parent Training: A Review of Methods for Children with Developmental Disabilities

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Mahan, Sara; LoVullo, Santino V.

    2009-01-01

    Great strides have been made in the development of skills and procedures to aid children with developmental disabilities to establish maximum independence and quality of life. Paramount among the treatment methods that have empirical support are treatments based on applied behavior analysis. These methods are often very labor intensive. Thus,…

  10. The Problem of Empirical Redundancy of Constructs in Organizational Research: An Empirical Investigation

    ERIC Educational Resources Information Center

    Le, Huy; Schmidt, Frank L.; Harter, James K.; Lauver, Kristy J.

    2010-01-01

    Construct empirical redundancy may be a major problem in organizational research today. In this paper, we explain and empirically illustrate a method for investigating this potential problem. We applied the method to examine the empirical redundancy of job satisfaction (JS) and organizational commitment (OC), two well-established organizational…

  11. The ReaxFF reactive force-field: Development, applications, and future directions

    DOE PAGES

    Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...

    2016-03-04

    The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.

  12. Direct Extraction of Tumor Response Based on Ensemble Empirical Mode Decomposition for Image Reconstruction of Early Breast Cancer Detection by UWB.

    PubMed

    Li, Qinwei; Xiao, Xia; Wang, Liang; Song, Hang; Kono, Hayato; Liu, Peifang; Lu, Hong; Kikkawa, Takamaro

    2015-10-01

    A direct extraction method of the tumor response based on ensemble empirical mode decomposition (EEMD) is proposed for early breast cancer detection by ultra-wide band (UWB) microwave imaging. With this approach, image reconstruction for tumor detection can be realized using only signals extracted from the as-detected waveforms. The calibration process executed in previous research to obtain reference waveforms, which represent signals detected from a tumor-free model, is not required. The correctness of the method is demonstrated by successfully detecting a 4 mm tumor located inside the glandular region in one breast model and at the interface between the gland and the fat in another. The reliability of the method is checked by distinguishing a tumor buried in glandular tissue whose dielectric constant is 35. The feasibility of the method is confirmed by showing the correct tumor information in both simulation results and experimental results for a realistic 3-D printed breast phantom.

  13. [A Feature Extraction Method for Brain Computer Interface Based on Multivariate Empirical Mode Decomposition].

    PubMed

    Wang, Jinjia; Liu, Yuan

    2015-04-01

    This paper presents a feature extraction method based on multivariate empirical mode decomposition (MEMD) combined with power spectrum features, aimed at the non-stationary electroencephalogram (EEG) or magnetoencephalogram (MEG) signals in brain-computer interface (BCI) systems. Firstly, we utilized the MEMD algorithm to decompose multichannel brain signals into a series of intrinsic mode functions (IMFs), which are approximately stationary and multi-scale. Then we extracted power features from each IMF and reduced them to a lower dimension using principal component analysis (PCA). Finally, we classified the motor imagery tasks with a linear discriminant analysis classifier. The experimental verification showed that the correct recognition rates for the two-class and four-class tasks of BCI competition III and competition IV reached 92.0% and 46.2%, respectively, which were superior to the winners of the BCI competitions. The experiments proved that the proposed method is reasonably effective and stable and provides a new way for feature extraction.
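    The dimension-reduction and classification stages of such a pipeline are straightforward with scikit-learn; the features below are synthetic stand-ins for the IMF power spectra (the MEMD step itself would come from a separate implementation), and the PCA dimension is an arbitrary illustrative choice:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    # Hypothetical power features: trials x (channels * IMFs * bands)
    rng = np.random.default_rng(7)
    n_trials, n_features = 200, 96
    X = rng.normal(size=(n_trials, n_features))
    y = rng.integers(0, 2, n_trials)        # two motor-imagery classes
    X[y == 1, :10] += 0.8                   # inject some class structure

    clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```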

  14. Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50 ).

    PubMed

    Bag, Arijit; Ghorai, Pradip Kr

    2016-05-01

    To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on various Quantitative Structure-Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this would be computationally very expensive, as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time, we report an ab initio method to compute the IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. Using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ) and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare the results with experimental values and with other available QSAR-based empirical results. Values calculated using our method are in very good agreement with experiment compared to values calculated using other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. The birth of the empirical turn in bioethics.

    PubMed

    Borry, Pascal; Schotsmans, Paul; Dierickx, Kris

    2005-02-01

    Since its origin, bioethics has attracted the collaboration of few social scientists, and social scientific methods of gathering empirical data have remained unfamiliar to ethicists. Recently, however, the clouded relations between the empirical and normative perspectives on bioethics appear to be changing. Three reasons explain why there was no easy and consistent input of empirical evidence in bioethics. Firstly, interdisciplinary dialogue runs the risk of communication problems and divergent objectives. Secondly, the social sciences were absent partners since the beginning of bioethics. Thirdly, the meta-ethical distinction between 'is' and 'ought' created a 'natural' border between the disciplines. Now, bioethics tends to accommodate more empirical research. Three hypotheses explain this emergence. Firstly, dissatisfaction with a foundationalist interpretation of applied ethics created a stimulus to incorporate empirical research in bioethics. Secondly, clinical ethicists became engaged in empirical research due to their strong integration in the medical setting. Thirdly, the rise of the evidence-based paradigm had an influence on the practice of bioethics. However, a problematic relationship cannot simply and easily evolve into a perfect interaction. A new and positive climate for empirical approaches has arisen, but the original difficulties have not disappeared.

  16. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
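    One standard way to obtain a residual-driven covariance, sketched below, rescales the formal weighted least squares covariance by the weighted residual variance per degree of freedom, so unmodeled error sources inflate the reported uncertainty; this is a generic illustration, not necessarily the paper's exact reinterpretation:

    ```python
    import numpy as np

    def wls_with_empirical_covariance(A, b, W):
        """x_hat = (A'WA)^-1 A'Wb; the formal covariance (A'WA)^-1 is
        rescaled by the weighted residual sum of squares per degree of
        freedom, an empirical variance factor."""
        AtWA = A.T @ W @ A
        x_hat = np.linalg.solve(AtWA, A.T @ W @ b)
        r = b - A @ x_hat
        scale = (r @ W @ r) / (A.shape[0] - A.shape[1])
        return x_hat, scale * np.linalg.inv(AtWA)

    # Toy linear problem with noisier data than the weights assume
    rng = np.random.default_rng(3)
    A = rng.normal(size=(50, 2))
    b = A @ np.array([1.0, -2.0]) + rng.normal(scale=0.5, size=50)
    W = np.eye(50) / 0.25**2                # weights assume sigma = 0.25
    x_hat, P = wls_with_empirical_covariance(A, b, W)
    print(x_hat, np.sqrt(np.diag(P)))       # empirical sigma ~2x the formal one
    ```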

  17. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Qin, Gengsheng; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
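    The target quantity itself is compact to compute: fix the cut-off at the desired specificity quantile of the control scores and read off the sensitivity among cases. The sketch below pairs it with a simple percentile bootstrap on synthetic scores; the article's empirical likelihood intervals are a different (and better calibrated) construction:

    ```python
    import numpy as np

    def sens_at_spec(cases, controls, specificity=0.9):
        """Sensitivity at the cut-off achieving the desired specificity."""
        cutoff = np.quantile(controls, specificity)
        return np.mean(cases > cutoff)

    rng = np.random.default_rng(5)
    controls = rng.normal(0.0, 1.0, 300)    # non-diseased test scores
    cases = rng.normal(1.5, 1.0, 200)       # diseased test scores

    boot = [sens_at_spec(rng.choice(cases, cases.size),
                         rng.choice(controls, controls.size))
            for _ in range(2000)]
    print("Se(Sp=0.9):", sens_at_spec(cases, controls),
          "95% CI:", np.percentile(boot, [2.5, 97.5]))
    ```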

  18. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    NASA Astrophysics Data System (ADS)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Electricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards, and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of the uncertainties of meteorological and hydrological forecasts and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical modeling of the empirical error of perfect forecasts, using streamflow sub-samples by quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples by quantile class, streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of the hydrological ensemble, allowing a good improvement of the reliability, skill and sharpness of the ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological dynamics and processes, i.e., sample heterogeneity. The same streamflow range can correspond to different processes, such as rising limbs or recessions, where the uncertainties differ. The dynamical approach improves the reliability, skill and sharpness of the forecasts and globally reduces confidence interval widths. Examined in detail, the dynamical approach allows a noticeable reduction of confidence intervals during recessions, where uncertainty is relatively lower, and a slight increase of confidence intervals during rising limbs or snowmelt, where uncertainty is greater. The dynamical approach, validated by forecasters' experience (the empirical approach was considered not discriminative enough), improved forecasters' confidence and the communication of uncertainties. Montanari, A. and Brath, A. (2004). A stochastic approach for assessing the uncertainty of rainfall-runoff simulations. Water Resources Research, 40, W01106, doi:10.1029/2003WR002540. Schaefli, B., Balin Talamba, D. and Musy, A. (2007). Quantifying hydrological modeling errors through a mixture of normal distributions. Journal of Hydrology, 332, 303-315.

  19. Advances in variable selection methods II: Effect of variable selection method on classification of hydrologically similar watersheds in three Mid-Atlantic ecoregions

    EPA Science Inventory

    Hydrological flow predictions in ungauged and sparsely gauged watersheds use regionalization or classification of hydrologically similar watersheds to develop empirical relationships between hydrologic, climatic, and watershed variables. The watershed classifications may be based...

  20. Linking agent-based models and stochastic models of financial markets

    PubMed Central

    Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene

    2012-01-01

    It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086

  2. Holding-based network of nations based on listed energy companies: An empirical study on two-mode affiliation network of two sets of actors

    NASA Astrophysics Data System (ADS)

    Li, Huajiao; Fang, Wei; An, Haizhong; Gao, Xiangyun; Yan, Lili

    2016-05-01

    Economic networks in the real world are not homogeneous; therefore, it is important to study economic networks with heterogeneous nodes and edges to simulate a real network more precisely. In this paper, we present an empirical study of the one-mode derivative holding-based network constructed by the two-mode affiliation network of two sets of actors using the data of worldwide listed energy companies and their shareholders. First, we identify the primitive relationship in the two-mode affiliation network of the two sets of actors. Then, we present the method used to construct the derivative network based on the shareholding relationship between two sets of actors and the affiliation relationship between actors and events. After constructing the derivative network, we analyze different topological features on the node level, edge level and entire network level and explain the meanings of the different values of the topological features combining the empirical data. This study is helpful for expanding the usage of complex networks to heterogeneous economic networks. For empirical research on the worldwide listed energy stock market, this study is useful for discovering the inner relationships between the nations and regions from a new perspective.
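    The core construction, projecting the two-mode shareholder-company network onto a one-mode holding-based network, can be sketched with networkx; the node names below are hypothetical, and the paper's version further aggregates shareholders by nation:

    ```python
    import networkx as nx
    from networkx.algorithms import bipartite

    # Hypothetical two-mode affiliation network: shareholders x companies
    B = nx.Graph()
    holders = ["fund_US", "fund_CN", "fund_DE"]
    companies = ["energy_co_1", "energy_co_2", "energy_co_3"]
    B.add_nodes_from(holders, bipartite=0)
    B.add_nodes_from(companies, bipartite=1)
    B.add_edges_from([("fund_US", "energy_co_1"), ("fund_US", "energy_co_2"),
                      ("fund_CN", "energy_co_2"), ("fund_CN", "energy_co_3"),
                      ("fund_DE", "energy_co_1"), ("fund_DE", "energy_co_3")])

    # One-mode derivative network: holders linked when they co-hold a
    # company, edge weights counting shared companies
    G = bipartite.weighted_projected_graph(B, holders)
    print(list(G.edges(data=True)))
    print(nx.degree_centrality(G))          # a node-level topological feature
    ```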

  3. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on AUC - Area under the Receiver Operating Characteristic Curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC out-performs SAUC in finding the best linear marker combinations, and can successfully capture nonlinear pattern in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
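    The ramp surrogate for the pairwise empirical AUC loss is easy to write down; the sketch below uses one common parameterization of the ramp and a fixed linear combination, leaving out the difference-of-convex optimization that RAUC uses to fit the combination:

    ```python
    import numpy as np

    def ramp(u, s=1.0):
        """Clipped hinge: 1 for u <= -s, 0 for u >= s, linear in between."""
        return np.clip((s - u) / (2.0 * s), 0.0, 1.0)

    def rauc_loss(pos, neg, s=1.0):
        """Average ramp of all pairwise differences (diseased - healthy)."""
        return ramp(pos[:, None] - neg[None, :], s).mean()

    def empirical_auc(pos, neg):
        d = pos[:, None] - neg[None, :]
        return (d > 0).mean() + 0.5 * (d == 0).mean()

    rng = np.random.default_rng(11)
    w = np.array([0.7, 0.3])                 # a candidate linear combination
    X_pos = rng.normal(1.0, 1.0, (100, 2))   # markers, diseased subjects
    X_neg = rng.normal(0.0, 1.0, (120, 2))   # markers, healthy subjects
    print("AUC:", empirical_auc(X_pos @ w, X_neg @ w),
          "ramp loss:", rauc_loss(X_pos @ w, X_neg @ w))
    ```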

  4. Comparison of estimation methods for creating small area rates of acute myocardial infarction among Medicare beneficiaries in California.

    PubMed

    Yasaitis, Laura C; Arcaya, Mariana C; Subramanian, S V

    2015-09-01

    Creating local population health measures from administrative data would be useful for health policy and public health monitoring purposes. While a wide range of options--from simple spatial smoothers to model-based methods--for estimating such rates exists, there are relatively few side-by-side comparisons, especially with real-world data. In this paper, we compare methods for creating local estimates of acute myocardial infarction rates from Medicare claims data. A Bayesian Markov chain Monte Carlo estimator that incorporated spatial and local random effects performed best, followed by a method-of-moments spatial Empirical Bayes estimator. As the former is more complicated and time-consuming, spatial linear Empirical Bayes methods may represent a good alternative for non-specialist investigators. Copyright © 2015 Elsevier Ltd. All rights reserved.
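    For orientation, the non-spatial method-of-moments empirical Bayes estimator can be sketched in a few lines: each area's crude rate is shrunk toward the overall mean in proportion to the estimated between-area variance (the spatial variant replaces the grand mean with a local neighbourhood mean). The data below are simulated, not Medicare claims:

    ```python
    import numpy as np

    def moment_eb_rates(rates, counts):
        """Method-of-moments EB shrinkage of small-area rates (global,
        non-spatial variant, in the spirit of Marshall's estimator)."""
        m = np.average(rates, weights=counts)              # grand mean
        s2 = np.average((rates - m) ** 2, weights=counts)  # total variance
        tau2 = max(s2 - m / counts.mean(), 0.0)            # between-area part
        shrink = tau2 / (tau2 + m / counts)                # per-area weight
        return shrink * rates + (1.0 - shrink) * m

    rng = np.random.default_rng(9)
    pop = rng.integers(200, 5000, 50).astype(float)  # beneficiaries per area
    true_rate = rng.gamma(20.0, 0.0005, 50)          # ~1% underlying AMI rate
    events = rng.poisson(true_rate * pop)
    eb = moment_eb_rates(events / pop, pop)
    print(np.column_stack([events / pop, eb])[:5])   # raw vs smoothed rates
    ```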

  5. Peering inside the Clock: Using Success Case Method to Determine How and Why Practice-Based Educational Interventions Succeed

    ERIC Educational Resources Information Center

    Olson, Curtis A.; Shershneva, Marianna B.; Brownstein, Michelle Horowitz

    2011-01-01

    Introduction: No educational method or combination of methods will facilitate implementation of clinical practice guidelines in all clinical contexts. To develop an empirical basis for aligning methods to contexts, we need to move beyond "Does it work?" to also ask "What works for whom and under what conditions?" This study employed Success Case…

  6. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    PubMed

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.

  7. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.

  8. Selecting Measures to Evaluate Complex Sociotechnical Systems: An Empirical Comparison of a Task-based and Constraint-based Method

    DTIC Science & Technology

    2013-07-01

    experimental requirements of the research are described (see Appendix A for a full description of the development and testing)...A complex socio-technical system is required to compare the methods. An emulation of a radar warning

  9. Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.

    2016-12-01

    We present an approach to empirical reconstruction of the evolution operator in stochastic form from space-distributed time series. The main problem in empirical modeling consists in choosing appropriate phase variables which can efficiently reduce the dimension of the model at minimal loss of information about the system's dynamics, which consequently leads to a more robust model and a better quality of reconstruction. For this purpose we incorporate two key steps in the model. The first step is a standard preliminary reduction of the observed time series dimension by decomposition via a certain empirical basis (e.g., an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is the construction of an evolution operator from the principal components (PCs) - the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed. It is based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANN). A Bayesian approach is used to learn the model and to find the optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. The results of applying the method to climate data (sea surface temperature, sea level pressure), and a comparison with the same method based on a non-reduced embedding, are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).

  10. An Exploration of Alternative Scoring Methods Using Curriculum-Based Measurement in Early Writing

    ERIC Educational Resources Information Center

    Allen, Abigail A.; Poch, Apryl L.; Lembke, Erica S.

    2018-01-01

    This manuscript describes two empirical studies of alternative scoring procedures used with curriculum-based measurement in writing (CBM-W). Study 1 explored the technical adequacy of a trait-based rubric in first grade. Study 2 explored the technical adequacy of a trait-based rubric, production-dependent, and production-independent scores in…

  11. Evidence-Based Administration for Decision Making in the Framework of Knowledge Strategic Management

    ERIC Educational Resources Information Center

    Del Junco, Julio Garcia; Zaballa, Rafael De Reyna; de Perea, Juan Garcia Alvarez

    2010-01-01

    Purpose: This paper seeks to present a model based on evidence-based administration (EBA), which aims to facilitate the creation, transformation and diffusion of knowledge in learning organizations. Design/methodology/approach: A theoretical framework is proposed based on EBA and the case method. Accordingly, an empirical study was carried out in…

  12. A Modified Empirical Wavelet Transform for Acoustic Emission Signal Decomposition in Structural Health Monitoring.

    PubMed

    Dong, Shaopeng; Yuan, Mei; Wang, Qiusheng; Liang, Zhiling

    2018-05-21

    The acoustic emission (AE) method is useful for structural health monitoring (SHM) of composite structures due to its high sensitivity and real-time capability. The main challenge, however, is how to classify the AE data into different failure mechanisms, because the detected signals are affected by various factors. Empirical wavelet transform (EWT) is a solution for analyzing multi-component signals and has been used to process AE data. In order to solve the spectrum separation problem of the AE signals, this paper proposes a novel modified separation method based on a local window maxima (LWM) algorithm. It searches for the local maxima of the Fourier spectrum in a proper window and automatically determines the boundaries of the spectrum segmentations, which helps to eliminate the impact of noise interference or frequency dispersion in the detected signal and to obtain meaningful empirical modes that are more related to the damage characteristics. Both simulated signals and AE signals from composite structures are used to verify the effectiveness of the proposed method. Finally, the experimental results indicate that the proposed method performs better than the original EWT method in identifying different damage mechanisms of composite structures.
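    A stripped-down version of the local-window-maxima idea is sketched below: keep spectral peaks that dominate their window, then cut segment boundaries at the minima between adjacent peaks. The relative amplitude threshold for ignoring noise-floor peaks is an added assumption, not part of the paper's description:

    ```python
    import numpy as np

    def lwm_boundaries(signal, fs, window=25, rel_threshold=0.1):
        """Segment boundaries from local window maxima of the spectrum."""
        spec = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        thr = rel_threshold * spec.max()     # ignore noise-floor peaks
        maxima = [i for i in range(window, len(spec) - window)
                  if spec[i] >= thr
                  and spec[i] == spec[i - window:i + window + 1].max()]
        # each boundary sits at the spectral minimum between adjacent maxima
        bounds = [m + int(np.argmin(spec[m:n]))
                  for m, n in zip(maxima, maxima[1:])]
        return freqs[bounds]

    # Two-tone test signal: expect one boundary between 50 and 200 Hz
    fs = 1000.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
    print(lwm_boundaries(x, fs))
    ```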

  14. An empirical model for polarized and cross-polarized scattering from a vegetation layer

    NASA Technical Reports Server (NTRS)

    Liu, H. L.; Fung, A. K.

    1988-01-01

    An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated by using a correction factor based on the matrix doubling method. The major feature of this model is that all parameters in the model are physical parameters of the vegetation medium; there are no regression parameters. Comparisons of this empirical model with the theoretical matrix-doubling method and radar measurements indicate good agreement in polarization and angular trends for ka up to 4, where k is the wavenumber and a is the disk radius. The computational time is shortened by a factor of 8 relative to the theoretical model calculation.

  15. The Philosophy, Theoretical Bases, and Implementation of the AHAAH Model for Evaluation of Hazard from Exposure to Intense Sounds

    DTIC Science & Technology

    2018-04-01

    empirical, external energy-damage correlation methods for evaluating hearing damage risk associated with impulsive noise exposure. AHAAH applies the...is validated against the measured results of human exposures to impulsive sounds, and unlike wholly empirical correlation approaches, AHAAH’s...a measured level (LAEQ8 of 85 dB). The approach in MIL-STD-1474E is very different. Previous standards tried to find a correlation between some

  16. Ridit Analysis for Cooper-Harper and Other Ordinal Ratings for Sparse Data - A Distance-based Approach

    DTIC Science & Technology

    2016-09-01

    is to fit empirical Beta distributions to observed data, and then to use a randomization approach to make inferences on the difference between...a Ridit analysis on the often sparse data sets in many Flying Qualities applications. The method of this paper is to fit empirical Beta...One such measure is the discrete-probability-distribution version of the (squared) ‘Hellinger Distance’ (Yang & Le Cam, 2000), H²(p, q) = 1 − Σᵢ √(pᵢqᵢ)

  17. In silico and experimental evaluation of DNA-based detection methods for the ability to discriminate almond from other Prunus spp.

    PubMed

    Brežná, Barbara; Šmíd, Jiří; Costa, Joana; Radvanszky, Jan; Mafra, Isabel; Kuchta, Tomáš

    2015-04-01

    Ten published DNA-based analytical methods aimed at detecting material of almond (Prunus dulcis) were evaluated in silico for potential cross-reactivity with other stone fruits (Prunus spp.), including peach, apricot, plum, cherry, sour cherry and Sargent cherry. For most assays, the analysis of nucleotide databases suggested no or insufficient discrimination of at least some stone fruits. On the other hand, the assay targeting non-specific lipid transfer protein (Röder et al., 2011, Anal Chim Acta 685:74-83) was sufficiently discriminative, judging from nucleotide alignments. Empirical evaluation was performed for three of the published methods, one modification of a commercial kit (SureFood allergen almond) and one attempted novel method targeting the thaumatin-like protein gene. Samples of leaves and kernels were used in the experiments. The empirical results were favourable for the method of Röder et al. (2011) and the modification of the SureFood allergen almond kit, both showing cross-reactivity <10⁻³ compared to the model almond. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Time-frequency analysis of neuronal populations with instantaneous resolution based on noise-assisted multivariate empirical mode decomposition.

    PubMed

    Alegre-Cortés, J; Soto-Sánchez, C; Pizá, Á G; Albarracín, A L; Farfán, F D; Felice, C J; Fernández, E

    2016-07-15

    Linear analysis has classically provided powerful tools for understanding the behavior of neural populations, but neuronal responses to real-world stimulation are nonlinear under some conditions, and many neuronal components demonstrate strong nonlinear behavior. In spite of this, the temporal and frequency dynamics of neural populations in response to sensory stimulation have usually been analyzed with linear approaches. In this paper, we propose the use of Noise-Assisted Multivariate Empirical Mode Decomposition (NA-MEMD), a data-driven template-free algorithm, plus the Hilbert transform, as a suitable tool for analyzing population oscillatory dynamics in a multi-dimensional space with instantaneous frequency (IF) resolution. The proposed approach was able to extract oscillatory information from neurophysiological data of deep vibrissal nerve and visual cortex multiunit recordings that was not evidenced using linear approaches with fixed bases such as Fourier analysis. Texture discrimination performance increased when NA-MEMD plus the Hilbert transform was implemented, compared to linear techniques, and cortical oscillatory population activity was analyzed with increased time-frequency resolution. NA-MEMD plus the Hilbert transform is thus an improved method for analyzing neuronal population oscillatory dynamics, overcoming the linearity and stationarity assumptions of classical methods. Copyright © 2016 Elsevier B.V. All rights reserved.
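    Once the modes are in hand (the NA-MEMD step itself would come from a dedicated implementation, not shown here), the instantaneous-frequency stage reduces to the analytic signal, as sketched below on a chirp:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def instantaneous_frequency(mode, fs):
        """IF of one (NA-MEMD-derived) mode via the analytic signal;
        the phase is unwrapped before differentiation."""
        phase = np.unwrap(np.angle(hilbert(mode)))
        return np.diff(phase) * fs / (2.0 * np.pi)

    # Chirp test: the IF should sweep from about 5 Hz to about 15 Hz
    fs = 500.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    chirp = np.sin(2 * np.pi * (5.0 * t + 2.5 * t ** 2))
    print(instantaneous_frequency(chirp, fs)[::100].round(1))
    ```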

  19. An Empirical Comparison of DDF Detection Methods for Understanding the Causes of DIF in Multiple-Choice Items

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Talley, Anna E.

    2015-01-01

    This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…

  20. Wave processes in the human cardiovascular system: The measuring complex, computing models, and diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Ganiev, R. F.; Reviznikov, D. L.; Rogoza, A. N.; Slastushenskiy, Yu. V.; Ukrainskiy, L. E.

    2017-03-01

    A description of a complex approach to investigation of nonlinear wave processes in the human cardiovascular system based on a combination of high-precision methods of measuring a pulse wave, mathematical methods of processing the empirical data, and methods of direct numerical modeling of hemodynamic processes in an arterial tree is given.

  1. Development of an Empirical Method for Predicting Jet Mixing Noise of Cold Flow Rectangular Jets

    NASA Technical Reports Server (NTRS)

    Russell, James W.

    1999-01-01

    This report presents an empirical method for predicting the jet mixing noise levels of cold flow rectangular jets. The report presents a detailed analysis of the methodology used in development of the prediction method. The empirical correlations used are based on narrow band acoustic data for cold flow rectangular model nozzle tests conducted in the NASA Langley Jet Noise Laboratory. There were 20 separate nozzle test operating conditions. For each operating condition 60 Hz bandwidth microphone measurements were made over a frequency range from 0 to 60,000 Hz. Measurements were performed at 16 polar directivity angles ranging from 45 degrees to 157.5 degrees. At each polar directivity angle, measurements were made at 9 azimuth directivity angles. The report shows the methods employed to remove screech tones and shock noise from the data in order to obtain the jet mixing noise component. The jet mixing noise was defined in terms of one third octave band spectral content, polar and azimuth directivity, and overall power level. Empirical correlations were performed over the range of test conditions to define each of these jet mixing noise parameters as a function of aspect ratio, jet velocity, and polar and azimuth directivity angles. The report presents the method for predicting the overall power level, the average polar directivity, the azimuth directivity and the location and shape of the spectra for jet mixing noise of cold flow rectangular jets.

  2. A Critical Plane-energy Model for Multiaxial Fatigue Life Prediction of Homogeneous and Heterogeneous Materials

    NASA Astrophysics Data System (ADS)

    Wei, Haoyang

    A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially critical-plane-based and energy-based methods, is given first. Special focus is on one critical plane approach which has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of the developed model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and the out-of-phase hardening cannot be considered. The energy-based model using the critical plane concept is proposed with the help of the Mroz-Garud hardening rule to explicitly include the effect of non-proportional hardening under cyclic fatigue loadings. Thus, the empirical calibration for non-proportional loading is not needed, since the out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model can work for both proportional and non-proportional loadings without the empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integration with the finite element method. Fatigue crack initiation of a representative volume of heterogeneous material is analyzed using the developed critical plane-energy model, with special focus on the microstructure effect on multiaxial fatigue life predictions. Several conclusions are drawn and future work is proposed based on this study.

  3. Simulation studies of chemical erosion on carbon based materials at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Kenmotsu, T.; Kawamura, T.; Li, Zhijie; Ono, T.; Yamamura, Y.

    1999-06-01

    We simulated the fluence dependence of the methane reaction yield in carbon under hydrogen bombardment using the ACAT-DIFFUSE code. The ACAT-DIFFUSE code is a simulation code based on a Monte Carlo method with a binary collision approximation and on solving diffusion equations. The chemical reaction model in carbon was studied by Roth and other researchers. Roth's model is suitable for the steady-state methane reaction, but it cannot estimate the fluence dependence of the methane reaction. We therefore derived an empirical formula, based on Roth's model, for the methane reaction. In this empirical formula, we assumed a reaction region where chemical sputtering due to methane formation takes place; the reaction region corresponds to the peak range of the incident hydrogen distribution in the target material. We incorporated this empirical formula into the ACAT-DIFFUSE code. The simulation results indicate a fluence dependence similar to the experimental results, but the fluence required to reach the steady state differs between experiment and simulation.

  4. Apprehensions and Expectations of the Adoption of Systematically Planned, Outcome-Oriented Practice

    ERIC Educational Resources Information Center

    Savaya, Riki; Altschuler, Dorit; Melamed, Sharon

    2013-01-01

    Objectives: The study examined social workers' apprehensions and expectations of the impending adoption of systematically planned, empirically based, outcome-oriented practice (SEOP). Method: Employing a mixed methods longitudinal design, the study used concept mapping to identify and map workers' apprehensions and expectations and a self-reported…

  5. A Simple Estimation Method for Aggregate Government Outsourcing

    ERIC Educational Resources Information Center

    Minicucci, Stephen; Donahue, John D.

    2004-01-01

    The scholarly and popular debate on the delegation to the private sector of governmental tasks rests on an inadequate empirical foundation, as no systematic data are collected on direct versus indirect service delivery. We offer a simple method for approximating levels of service outsourcing, based on relatively straightforward combinations of and…

  6. Understanding similarity of groundwater systems with empirical copulas

    NASA Astrophysics Data System (ADS)

    Haaf, Ezra; Kumar, Rohini; Samaniego, Luis; Barthel, Roland

    2016-04-01

    Within the classification framework for groundwater systems that aims at identifying similarity of hydrogeological systems and transferring information from a well-observed to an ungauged system (Haaf and Barthel, 2015; Haaf and Barthel, 2016), we propose a copula-based method for describing groundwater-system similarity. Copulas are an emerging method in the hydrological sciences that make it possible to model the dependence structure of two groundwater level time series independently of the effects of their marginal distributions. This study builds on Samaniego et al. (2010), which described an approach for calculating dissimilarity measures from bivariate empirical copula densities of streamflow time series; streamflow is subsequently predicted in ungauged basins by transferring properties from similar catchments. The proposed approach is innovative because copula-based similarity has not yet been applied to groundwater systems. Here we estimate the pairwise dependence structure of 600 wells in Southern Germany using 10 years of weekly groundwater level observations. Based on these empirical copulas, dissimilarity measures are estimated, such as the copula's lower- and upper-corner cumulated probability and the copula-based Spearman's rank correlation, as proposed by Samaniego et al. (2010). For the characterization of groundwater systems, copula-based metrics are compared with dissimilarities obtained from precipitation signals corresponding to the presumed area of influence of each groundwater well. This promising approach provides a new tool for advancing similarity-based classification of groundwater system dynamics. References: Haaf, E., Barthel, R., 2015. Methods for assessing hydrogeological similarity and for classification of groundwater systems on the regional scale, EGU General Assembly 2015, Vienna, Austria. Haaf, E., Barthel, R., 2016. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs, EGU General Assembly 2016, Vienna, Austria. Samaniego, L., Bardossy, A., Kumar, R., 2010. Streamflow prediction in ungauged catchments using copula-based dissimilarity measures. Water Resources Research, 46. DOI:10.1029/2008wr007695
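
    As an illustration of the dissimilarity measures named above, the following minimal sketch computes pseudo-observations (rank transforms) for two groundwater level series and derives a copula-based Spearman's rank correlation and a lower-corner cumulated probability from them. It is a plain reading of those metrics, not the authors' implementation; the function names and the corner quantile q are illustrative.

      import numpy as np
      from scipy.stats import rankdata

      def pseudo_observations(x):
          """Rank-transform a series to (0, 1): the margins of the empirical copula."""
          return rankdata(x) / (len(x) + 1.0)

      def copula_spearman(x, y):
          """Spearman's rho from the pseudo-observations, i.e. from the
          dependence structure only, independent of the marginals."""
          u, v = pseudo_observations(x), pseudo_observations(y)
          return np.corrcoef(u, v)[0, 1]

      def lower_corner_probability(x, y, q=0.1):
          """Cumulated probability in the copula's lower corner: the fraction
          of weeks in which both wells are jointly in their lowest q-quantile."""
          u, v = pseudo_observations(x), pseudo_observations(y)
          return np.mean((u <= q) & (v <= q))

      # A dissimilarity between two wells could then be, e.g.,
      # d = 1.0 - copula_spearman(levels_well_a, levels_well_b)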

  7. Prior robust empirical Bayes inference for large-scale data by conditioning on rank with application to microarray data

    PubMed Central

    Liao, J. G.; Mcmurry, Timothy; Berg, Arthur

    2014-01-01

    Empirical Bayes methods have been extensively used for microarray data analysis by modeling the large number of unknown parameters as random effects. Empirical Bayes allows borrowing information across genes and can automatically adjust for multiple testing and selection bias. However, the standard empirical Bayes model can perform poorly if the assumed working prior deviates from the true prior. This paper proposes a new rank-conditioned inference in which the shrinkage and confidence intervals are based on the distribution of the error conditioned on rank of the data. Our approach is in contrast to a Bayesian posterior, which conditions on the data themselves. The new method is almost as efficient as standard Bayesian methods when the working prior is close to the true prior, and it is much more robust when the working prior is not close. In addition, it allows a more accurate (but also more complex) non-parametric estimate of the prior to be easily incorporated, resulting in improved inference. The new method’s prior robustness is demonstrated via simulation experiments. Application to a breast cancer gene expression microarray dataset is presented. Our R package rank.Shrinkage provides a ready-to-use implementation of the proposed methodology. PMID:23934072

  8. Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
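
    The processing chain described in this record can be sketched in a few lines. The sketch below is a minimal interpretation, not the patented implementation: it assumes the third-party PyEMD package, sifts the real and imaginary parts of each complex principal component separately, and treats dropping the first (highest-frequency) intrinsic mode as the filtering step.

      import numpy as np
      from scipy.signal import hilbert
      from PyEMD import EMD  # third-party package, assumed available

      def leading_cpcs(data, n_cpc=5):
          """data: (n_times, n_space) field. Hilbert transform to complex form,
          time-based covariance, then SVD for the temporal principal components."""
          analytic = hilbert(data, axis=0)
          cov_t = analytic @ analytic.conj().T / data.shape[1]
          u, s, _ = np.linalg.svd(cov_t)
          return u[:, :n_cpc]  # columns are complex principal components (CPCs)

      def filter_cpc(cpc):
          """Decompose one CPC into intrinsic modes and drop the first,
          highest-frequency mode as noise before reconstruction."""
          emd = EMD()
          imfs_re = emd(np.ascontiguousarray(cpc.real))
          imfs_im = emd(np.ascontiguousarray(cpc.imag))
          return imfs_re[1:].sum(axis=0) + 1j * imfs_im[1:].sum(axis=0)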

  9. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and meanwhile verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
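
    The abstract does not give the exact form of the weighted kurtosis index, so the sketch below assumes one plausible combination (kurtosis weighted by the absolute correlation with the raw signal) and replaces the grey wolf optimizer with a plain grid search over the same two parameters; tvf_emd is a hypothetical callable standing in for a TVF-EMD implementation.

      import numpy as np
      from scipy.stats import kurtosis, pearsonr

      def weighted_kurtosis_index(imf, signal):
          """Assumed index: kurtosis of the IMF weighted by |correlation|
          between the IMF and the raw signal."""
          rho, _ = pearsonr(imf, signal)
          return abs(rho) * kurtosis(imf, fisher=False)

      def best_parameters(signal, tvf_emd, thresholds, orders):
          """Grid-search stand-in for GWO: score each (bandwidth threshold,
          B-spline order) pair by the maximum weighted kurtosis index over
          the resulting IMFs, and return the best pair."""
          best_pair, best_score = None, -np.inf
          for thr in thresholds:
              for order in orders:
                  imfs = tvf_emd(signal, thr, order)  # hypothetical TVF-EMD call
                  score = max(weighted_kurtosis_index(imf, signal) for imf in imfs)
                  if score > best_score:
                      best_pair, best_score = (thr, order), score
          return best_pair, best_score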

  10. Empirical Bayes estimation of proportions with application to cowbird parasitism rates

    USGS Publications Warehouse

    Link, W.A.; Hahn, D.C.

    1996-01-01

    Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
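
    The beta-binomial version of this idea is compact enough to sketch. Assuming a beta prior on the per-species parasitism rates, the hyperparameters can be estimated by maximizing the beta-binomial marginal likelihood, and the empirical Bayes estimate for each species is the resulting posterior mean; small samples are shrunk strongly toward the overall mean, exactly as described above. This is a generic sketch, not the authors' code, and the example counts are invented.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import betaln

      def fit_beta_prior(x, n):
          """Estimate beta hyperparameters (a, b) by maximizing the
          beta-binomial marginal likelihood of x successes out of n."""
          def neg_loglik(log_ab):
              a, b = np.exp(log_ab)
              return -np.sum(betaln(x + a, n - x + b) - betaln(a, b))
          res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
          return np.exp(res.x)

      def eb_estimates(x, n):
          """Posterior-mean parasitism rates: raw proportions shrunk toward
          the overall mean, more strongly when n is small."""
          a, b = fit_beta_prior(x, n)
          return (x + a) / (n + a + b)

      # Invented example: parasitized nests x out of n monitored, per species.
      x = np.array([1.0, 8.0, 3.0])
      n = np.array([4.0, 10.0, 30.0])
      print(eb_estimates(x, n))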

  11. Improved inland water levels from SAR altimetry using novel empirical and physical retrackers

    NASA Astrophysics Data System (ADS)

    Villadsen, Heidi; Deng, Xiaoli; Andersen, Ole B.; Stenseng, Lars; Nielsen, Karina; Knudsen, Per

    2016-06-01

    Satellite altimetry has proven a valuable source of information on river and lake levels where in situ data are sparse or non-existent. In this study, several new methods for obtaining stable inland water levels from CryoSat-2 Synthetic Aperture Radar (SAR) altimetry are presented and evaluated, and the possible benefits of combining physical and empirical retrackers are investigated. The retracking methods evaluated in this paper include the physical SAR Altimetry MOde Studies and Applications (SAMOSA3) model, a traditional subwaveform threshold retracker, the proposed Multiple Waveform Persistent Peak (MWaPP) retracker, and a method combining the physical and empirical retrackers. Using a physical SAR waveform retracker over inland water has not been attempted before but shows great promise in this study. The evaluation is performed for two medium-sized lakes (Lake Vänern in Sweden and Lake Okeechobee in Florida) and in the Amazon River in Brazil. Comparison with in situ data shows that the SAMOSA3 retracker generally provides the lowest root-mean-squared errors (RMSE), closely followed by the MWaPP retracker. For the empirical retrackers, the RMSE values obtained when comparing with in situ data in Lake Vänern and Lake Okeechobee are on the order of 2-5 cm for well-behaved waveforms. Combining the physical and empirical retrackers did not offer significantly improved mean track standard deviations or RMSEs. Based on these studies, it is suggested that future SAR-derived water levels be obtained using the SAMOSA3 retracker whenever information about physical properties other than range is desired. Otherwise we suggest using the empirical MWaPP retracker described in this paper, which is easy to implement, computationally efficient, and gives a height estimate for even the most contaminated waveforms.
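
    For readers unfamiliar with empirical retracking, a threshold retracker of the kind evaluated here can be written in a few lines. The sketch below is a simplified, generic version (no subwaveform selection, illustrative gate spacing), not the MWaPP or SAMOSA3 implementation.

      import numpy as np

      def threshold_retrack(waveform, threshold=0.8, gate_spacing_m=0.2342):
          """Locate the leading edge where the waveform first crosses
          threshold x peak power, interpolating linearly between gates,
          and convert the gate position to a range offset in metres.
          The gate spacing value here is illustrative."""
          level = threshold * waveform.max()
          i = int(np.nonzero(waveform >= level)[0][0])  # first gate at/above level
          if i == 0:
              return 0.0
          frac = (level - waveform[i - 1]) / (waveform[i] - waveform[i - 1])
          return (i - 1 + frac) * gate_spacing_m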

  12. Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest-Posttest Study.

    PubMed

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A

    2008-09-01

    The pretest-posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest-posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175).

  13. Empirical Requirements Analysis for Mars Surface Operations Using the Flashline Mars Arctic Research Station

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Lee, Pascal; Sierhuis, Maarten; Norvig, Peter (Technical Monitor)

    2001-01-01

    Living and working on Mars will require model-based computer systems for maintaining and controlling complex life support, communication, transportation, and power systems. This technology must work properly on the first three-year mission, augmenting human autonomy without adding yet more complexity to be diagnosed and repaired. One design method is to work with scientists in analog (Mars-like) settings to understand how they prefer to work, what constraints will be imposed by the Mars environment, and how difficulties can be ameliorated. We describe how we are using empirical requirements analysis to prototype model-based tools at a research station in the High Canadian Arctic.

  14. Forensic discrimination of copper wire using trace element concentrations.

    PubMed

    Dettman, Joshua R; Cassabaum, Alyssa A; Saunders, Christopher P; Snyder, Deanna L; Buscaglia, JoAnn

    2014-08-19

    Copper may be recovered as evidence in high-profile cases such as thefts and improvised explosive device incidents; comparison of copper samples from the crime scene and those associated with the subject of an investigation can provide probative associative evidence and investigative support. A solution-based inductively coupled plasma mass spectrometry method for measuring trace element concentrations in high-purity copper was developed using standard reference materials. The method was evaluated for its ability to use trace element profiles to statistically discriminate between copper samples considering the precision of the measurement and manufacturing processes. The discriminating power was estimated by comparing samples chosen on the basis of the copper refining and production process to represent the within-source (samples expected to be similar) and between-source (samples expected to be different) variability using multivariate parametric- and empirical-based data simulation models with bootstrap resampling. If the false exclusion rate is set to 5%, >90% of the copper samples can be correctly determined to originate from different sources using a parametric-based model and >87% with an empirical-based approach. These results demonstrate the potential utility of the developed method for the comparison of copper samples encountered as forensic evidence.

  15. Recognition-Based Physical Response to Facilitate EFL Learning

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Shih, Timothy K.; Yeh, Shih-Ching; Chou, Ke-Chien; Ma, Zhao-Heng; Sommool, Worapot

    2014-01-01

    This study, based on total physical response and cognitive psychology, proposed a Kinesthetic English Learning System (KELS), which utilized Microsoft's Kinect technology to build kinesthetic interaction with life-related contexts in English. A subject test with 39 tenth-grade students was conducted following empirical research method in order to…

  16. Strength of single-pole utility structures

    Treesearch

    Ronald W. Wolfe

    2006-01-01

    This section presents three basic methods for deriving and documenting R_n as an LTL value along with the coefficient of variation (COV_R) for single-pole structures. These include the following: 1. An empirical analysis based primarily on tests of full-sized poles. 2. A theoretical analysis of mechanics-based models used in...

  17. Module-Based Professional Development for Teachers: A Cost-Effective Philippine Experiment

    ERIC Educational Resources Information Center

    San Antonio, Diosdado M.; Morales, Nelson S.; Moral, Leo S.

    2011-01-01

    This article examines the impact of implementing module-based professional development for teachers (MBPDT) in the Philippines. A mixed-methods study (an experimental design with empirical surveys and an open-ended questionnaire) revealed that the experimental group of teachers had greater professional content knowledge compared with the control group…

  18. The Effect of a Brief Training in Motivational Interviewing on Trainee Skill Development

    ERIC Educational Resources Information Center

    Young, Tabitha L.; Hagedorn, W. Bryce

    2012-01-01

    Motivational interviewing (MI) is an empirically based practice that provides counselors with methods for working with resistant and ambivalent clients. Whereas previous research has demonstrated the effectiveness of training current clinicians in this evidenced-based practice, no research has investigated the efficacy of teaching MI to…

  1. Statistical Approaches for the Definition of Landslide Rainfall Thresholds and their Uncertainty Using Rain Gauge and Satellite Data

    NASA Technical Reports Server (NTRS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-01-01

    Models for forecasting rainfall-induced landslides are mostly based on empirical rainfall thresholds identified from rain gauge data. Despite their increasing availability, satellite rainfall estimates are scarcely used for this purpose, although satellite data should be useful in ungauged and remote areas and can provide a consistent spatial and temporal reference in gauged areas. In this paper, the reliability of rainfall thresholds based on remotely sensed rainfall and rain gauge data for the prediction of landslide occurrence is analyzed. To date, the estimation of the uncertainty associated with empirical rainfall thresholds has mostly been based on a bootstrap resampling of the rainfall duration and cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures; this estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the (D,E) conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Squares (LS), Quantile Regression (QR) and Nonlinear Least Squares (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the tested approaches, the NLS method performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements, in agreement with the literature, where satellite rainfall data underestimate the 'ground' rainfall registered by rain gauges.
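
    The NLS fit combined with the bootstrap described above can be sketched directly: fit a power-law threshold curve E = alpha * D**gamma to the (D,E) pairs and resample the pairs to quantify parameter uncertainty. This is a schematic reading of the approach, not the authors' procedure; the initial guesses and percentile levels are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def power_law(D, alpha, gamma):
          """Rainfall threshold model: cumulated event rainfall vs duration."""
          return alpha * np.power(D, gamma)

      def fit_threshold(D, E, n_boot=1000, seed=0):
          """Nonlinear least-squares fit of the (D,E) pairs plus a bootstrap
          of the pairs to estimate 5-95% uncertainty bands on (alpha, gamma)."""
          popt, _ = curve_fit(power_law, D, E, p0=(1.0, 0.5))
          rng = np.random.default_rng(seed)
          boot = np.empty((n_boot, 2))
          for k in range(n_boot):
              idx = rng.integers(0, len(D), len(D))
              boot[k], _ = curve_fit(power_law, D[idx], E[idx], p0=popt)
          lo, hi = np.percentile(boot, [5, 95], axis=0)
          return popt, lo, hi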

  2. Quantum chemical calculations for polymers and organic compounds

    NASA Technical Reports Server (NTRS)

    Lopez, J.; Yang, C.

    1982-01-01

    The relativistic effects of the orbiting electrons on a model compound were calculated. The computational method used was based on 'Modified Neglect of Differential Overlap' (MNDO). The compound tetracyanoplatinate was used since empirical measurements and calculations along "classical" lines had yielded many known properties. The purpose was to show that for large molecules relativistic effects cannot be ignored and that these effects can be calculated to yield data in closer agreement with empirical measurements. Both the energy band structure and molecular orbitals are depicted.

  3. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    NASA Astrophysics Data System (ADS)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main ideas for gear fault diagnosis: one is model-based gear dynamic analysis; the other is signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method called the assist-stress intensity factor (assist-SIF) gear contact model is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on a dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method which combines the LOD with the analytical-FE method is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD using gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for tooth crack stiffness calculation and gear tooth root crack fault diagnosis.

  4. Improved Design of Tunnel Supports : Volume 5 : Empirical Methods in Rock Tunneling -- Review and Recommendations

    DOT National Transportation Integrated Search

    1980-06-01

    Volume 5 evaluates empirical methods in tunneling. Empirical methods that avoid the use of an explicit model by relating ground conditions to observed prototype behavior have played a major role in tunnel design. The main objective of this volume is ...

  5. Imaging the Material Properties of Bone Specimens using Reflection-Based Infrared Microspectroscopy

    PubMed Central

    Acerbo, Alvin S.; Carr, G. Lawrence; Judex, Stefan; Miller, Lisa M.

    2012-01-01

    Fourier Transform InfraRed Microspectroscopy (FTIRM) is a widely used method for mapping the material properties of bone and other mineralized tissues, including mineralization, crystallinity, carbonate substitution, and collagen cross-linking. This technique is traditionally performed in a transmission-based geometry, which requires the preparation of plastic-embedded thin sections, limiting its functionality. Here, we theoretically and empirically demonstrate the development of reflection-based FTIRM as an alternative to the widely adopted transmission-based FTIRM, which reduces specimen preparation time and broadens the range of specimens that can be imaged. In this study, mature mouse femurs were plastic-embedded and longitudinal sections were cut at a thickness of 4 μm for transmission-based FTIRM measurements. The remaining bone blocks were polished for specular reflectance-based FTIRM measurements on regions immediately adjacent to the transmission sections. Kramers-Kronig analysis of the reflectance data yielded the dielectric response from which the absorption coefficients were directly determined. The reflectance-derived absorbance was validated empirically using the transmission spectra from the thin sections. The spectral assignments for mineralization, carbonate substitution, and collagen cross-linking were indistinguishable in transmission and reflection geometries, while the stoichiometric/non-stoichiometric apatite crystallinity parameter shifted from 1032 / 1021 cm−1 in transmission-based to 1035 / 1025 cm−1 in reflection-based data. This theoretical demonstration and empirical validation of reflection-based FTIRM eliminates the need for thin sections of bone and more readily facilitates direct correlations with other methods such as nanoindentation and quantitative backscattered electron imaging (qBSE) from the same specimen. It provides a unique framework for correlating bone's material and mechanical properties. PMID:22455306

  6. Unpacking the Revised Bloom's Taxonomy: Developing Case-Based Learning Activities

    ERIC Educational Resources Information Center

    Nkhoma, Mathews Zanda; Lam, Tri Khai; Sriratanaviriyakul, Narumon; Richardson, Joan; Kam, Booi; Lau, Kwok Hung

    2017-01-01

    Purpose: The purpose of this paper is to propose the use of case studies in teaching an undergraduate course of Internet for Business in class, based on the revised Bloom's taxonomy. The study provides empirical evidence about the effect of a case-based teaching method integrated with the revised Bloom's taxonomy on students' incremental learning,…

  7. What Do We Know and How Well Do We Know It? Identifying Practice-Based Insights in Education

    ERIC Educational Resources Information Center

    Miller, Barbara; Pasley, Joan

    2012-01-01

    Knowledge derived from practice forms a significant portion of the knowledge base in the education field, yet is not accessible using existing empirical research methods. This paper describes a systematic, rigorous, grounded approach to collecting and analysing practice-based knowledge using the authors' research in teacher leadership as an…

  8. Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data

    PubMed Central

    Hu, Jianhua; Wang, Peng; Qu, Annie

    2014-01-01

    Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433
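
    The eigenvector-based approximation at the heart of this method can be illustrated briefly. The sketch below builds basis matrices v_i v_i^T from the leading eigenvectors of the empirical correlation matrix and combines them into a truncated approximation of its inverse; in the paper the number of basis matrices and their weights come from model selection under a penalized objective, which is omitted here.

      import numpy as np

      def eigen_basis_inverse(R_emp, m):
          """Truncated spectral approximation to inv(R_emp) using m
          eigenvector-based basis matrices (a sketch of the idea only)."""
          w, V = np.linalg.eigh(R_emp)
          order = np.argsort(w)[::-1]  # leading eigenvalues first
          w, V = w[order], V[:, order]
          return sum((1.0 / w[i]) * np.outer(V[:, i], V[:, i]) for i in range(m))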

  9. Social Phenomenological Analysis as a Research Method in Art Education: Developing an Empirical Model for Understanding Gallery Talks

    ERIC Educational Resources Information Center

    Hofmann, Fabian

    2016-01-01

    Social phenomenological analysis is presented as a research method to study gallery talks or guided tours in art museums. The research method is based on the philosophical considerations of Edmund Husserl and sociological/social science concepts put forward by Max Weber and Alfred Schuetz. Its starting point is the everyday lifeworld; the…

  10. Prediction of the future number of wells in production (in Spanish)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coca, B.P.

    1981-01-01

    A method to predict the number of wells that will continue producing at a certain date in the future is presented. The method is applicable to reservoirs of the depletion type and is based on the survival probability concept. This is useful when forecasting by empirical methods. An example of a field in primary production is presented.

  11. Retooling Predictive Relations for non-volatile PM by Comparison to Measurements

    NASA Astrophysics Data System (ADS)

    Vander Wal, R. L.; Abrahamson, J. P.

    2015-12-01

    Non-volatile particulate matter (nvPM) emissions from jet aircraft at cruise altitude are of particular interest for climate and atmospheric processes but are difficult to measure and are normally approximated. To provide such inventory estimates, the present approach is to use measured, ground-based values with scaling to cruise (engine operating) conditions. Several points are raised by this approach. The first is which ground-based values to use. Empirical and semi-empirical approaches, such as the revised first-order approximation (FOA3) and formation-oxidation (FOX) methods, each with embedded assumptions, are available to calculate a ground-based black carbon concentration, C_BC. The second is the scaling relation, which can depend upon the ratios of fuel-air equivalence, pressure, and combustor flame temperature. We are using measured ground-based values to evaluate the accuracy of present methods and to develop alternative methods for C_BC via smoke number or via a semi-empirical kinetic method for a specific engine, the CFM56-2C, representative of a rich-dome style combustor and one of the most prevalent engine families in commercial use. Applying scaling relations to measured ground-based values and comparing with measurements at cruise evaluates the accuracy of the current scaling formalism. In partnership with GE Aviation, performing engine cycle deck calculations enables critical comparison between estimated or predicted thermodynamic parameters and true (engine) operational values for the CFM56-2C engine. Such specific comparisons allow tracing differences between predictive estimates for, and measurements of, nvPM to their origin - as either divergence of input parameters or in the functional form of the predictive relations. Such insights will lead to the development of new predictive tools for jet aircraft nvPM emissions. Such validated relations can then be extended to alternative fuels with confidence in operational thermodynamic values and functional form. Comparisons will then be made between these new predictive relationships and measurements of nvPM from alternative fuels using ground and cruise data - as collected during the NASA-led AAFEX and ACCESS field campaigns, respectively.

  12. Computational ligand-based rational design: Role of conformational sampling and force fields in model development.

    PubMed

    Shim, Jihyun; Mackerell, Alexander D

    2011-05-01

    A significant number of drug discovery efforts are based on natural products or high-throughput screens from which compounds showing potential therapeutic effects are identified without knowledge of the target molecule or its 3D structure. In such cases computational ligand-based drug design (LBDD) can accelerate the drug discovery process. LBDD is a general approach to elucidating the relationship of a compound's structure and physicochemical attributes to its biological activity. The resulting structure-activity relationship (SAR) may then act as the basis for the prediction of compounds with improved biological attributes. LBDD methods range from pharmacophore models identifying essential features of ligands responsible for their activity, through quantitative structure-activity relationships (QSAR) yielding quantitative estimates of activities based on physicochemical properties, to similarity searching, which explores compounds with similar properties, as well as various combinations of the above. A number of recent LBDD approaches involve the use of multiple conformations of the ligands being studied. One of the basic components for generating multiple conformations in LBDD is molecular mechanics (MM), which applies an empirical energy function to relate conformation to energies and forces. The collection of conformations for the ligands is then combined with functional data using methods ranging from regression analysis to neural networks, from which the SAR is determined. Accordingly, for effective application of LBDD to SAR determination it is important that the compounds be accurately modelled such that the appropriate range of conformations accessible to the ligands is identified. Such accurate modelling is largely based on the use of the appropriate empirical force field for the molecules being investigated and on the approaches used to generate the conformations. The present chapter includes a brief overview of currently used SAR methods in LBDD, followed by a more detailed presentation of issues and limitations associated with empirical energy functions and conformational sampling methods.

  13. Estimating topological properties of weighted networks from limited information

    NASA Astrophysics Data System (ADS)

    Gabrielli, Andrea; Cimini, Giulio; Garlaschelli, Diego; Squartini, Angelo

    A typical problem met when studying complex systems is the limited information available on their topology, which hinders our understanding of their structural and dynamical properties. A paramount example is provided by financial networks, whose data are privacy protected; yet the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here we develop a reconstruction method, based on statistical mechanics concepts, that exploits the empirical link density in a highly non-trivial way. Technically, our approach consists of the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems. Acknowledgement: the 'Growthcom' ICT-EC project (Grant No. 611272) and the 'Crisislab' Italian Project.
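
    The first step of the described approach (degrees from strengths plus link density) admits a compact sketch under a fitness-model assumption in which the linking probability is p_ij = z*s_i*s_j / (1 + z*s_i*s_j) and z is calibrated so the expected density matches the observed one. This is an illustrative reading of the preliminary estimation step, not the authors' full method; the root-finding bracket is arbitrary.

      import numpy as np
      from scipy.optimize import brentq

      def estimate_degrees(strengths, link_density):
          """Infer expected node degrees from node strengths and the
          empirical link density via a fitness-model ansatz."""
          s = np.asarray(strengths, dtype=float)
          n = len(s)
          pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
          n_links = link_density * len(pairs)

          def density_excess(z):
              p = [z * s[i] * s[j] / (1 + z * s[i] * s[j]) for i, j in pairs]
              return sum(p) - n_links

          z = brentq(density_excess, 1e-12, 1e6)  # bracket may need widening
          k = np.zeros(n)
          for i, j in pairs:
              p = z * s[i] * s[j] / (1 + z * s[i] * s[j])
              k[i] += p
              k[j] += p
          return k  # estimated degrees feed the maximum-entropy step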

  14. Earthquake Macro-zonation Based Peak Ground Acceleration, Modified Mercalli Intensity, And Type of Rocks around Matano Fault

    NASA Astrophysics Data System (ADS)

    Karnaen, Muh; Suriamihardja, D. A.; Maulana, A.; Jaya, A.

    2018-03-01

    This study aims to determine earthquake-vulnerable zones. We conducted research on earthquake macro-zonation based on PGA, Modified Mercalli Intensity (MMI), and type of rocks around the Matano Fault, in the area from 1.6° S to 2.99° S and 120.5° E to 122.47° E. We obtained the maximum PGA and MMI for each observation point on the ground from four major earthquake events. An empirical model is used owing to the lack of recorded acceleration data. We tried several empirical methods, and the McGuire method was found to be acceptable for this area. The resulting maximum PGA varies between 18.40 and 363.54 gals, while the MMI values obtained using the empirical Wald attenuation range from 2.9 to 7.7. The most vulnerable zone is located around Sorowako city, with a PGA of 326.55 gals and an intensity of 7.5 MMI. This area lies between ultra-basic and metamorphic rock formations, near the epicenter of the largest earthquake (M 6.2, on 15-02-2011).
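
    As an illustration of how such macro-zonation maps are produced, the sketch below evaluates a generic empirical attenuation law of the form PGA = c1 * exp(c2*M) * (R + c3)**(-c4) on a set of sites. The functional form is typical of McGuire-style relations, but the coefficients here are placeholders, not the published McGuire values.

      import numpy as np

      def pga_attenuation(magnitude, distance_km, c1=5.0, c2=0.8, c3=25.0, c4=1.3):
          """Generic empirical PGA attenuation (gals); coefficients are
          illustrative placeholders only."""
          return c1 * np.exp(c2 * magnitude) * (distance_km + c3) ** (-c4)

      # Example: PGA at sites 10-200 km from an M 6.2 event.
      distances = np.linspace(10.0, 200.0, 20)
      print(pga_attenuation(6.2, distances))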

  15. Empirical likelihood method for non-ignorable missing data problems.

    PubMed

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. Non-ignorable missingness, in which the missingness of a response depends on its own value, is well known to be the most difficult missing data problem. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available beyond fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method, we obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of data from a real AIDS trial shows that the missingness of CD4 counts at around two years is non-ignorable and that the sample mean based on the observed data only is biased.

  16. What Is Heartburn Worth?

    PubMed Central

    Heudebert, Gustavo R; Centor, Robert M; Klapow, Joshua C; Marks, Robert; Johnson, Lawrence; Wilcox, C Mel

    2000-01-01

    OBJECTIVE To determine the best treatment strategy for the management of patients presenting with symptoms consistent with uncomplicated heartburn. METHODS We performed a cost-utility analysis of 4 alternatives: empirical proton pump inhibitor, empirical histamine2-receptor antagonist, and diagnostic strategies consisting of either esophagogastroduodenoscopy (EGD) or an upper gastrointestinal series before treatment. The time horizon of the model was 1 year. The base case analysis assumed a cohort of otherwise healthy 45-year-old individuals in a primary care practice. MAIN RESULTS Empirical treatment with a proton pump inhibitor was projected to provide the greatest quality-adjusted survival for the cohort. Empirical treatment with a histamine2-receptor antagonist was projected to be the least costly of the alternatives. The marginal cost-effectiveness of using a proton pump inhibitor over a histamine2-receptor antagonist was approximately $10,400 per quality-adjusted life year (QALY) gained in the base case analysis and was less than $50,000 per QALY as long as the utility for heartburn was less than 0.95. Both diagnostic strategies were dominated by the proton pump inhibitor alternative. CONCLUSIONS Empirical treatment seems to be the optimal initial management strategy for patients with heartburn, but the choice between a proton pump inhibitor and a histamine2-receptor antagonist depends on the impact of heartburn on quality of life. PMID:10718898
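
    The decision quantity behind the base case result is the incremental cost-effectiveness ratio (ICER): extra cost divided by extra quality-adjusted survival. The sketch below shows the arithmetic; the cost and QALY numbers are invented for illustration and are not taken from the study, though they reproduce an ICER of roughly $10,400 per QALY.

      def icer(cost_a, qaly_a, cost_b, qaly_b):
          """Incremental cost-effectiveness ratio of strategy A over B:
          dollars per quality-adjusted life year gained."""
          return (cost_a - cost_b) / (qaly_a - qaly_b)

      # Invented numbers: the PPI strategy costs $120 more and yields 0.0115
      # more QALYs per patient than the H2-receptor antagonist strategy.
      print(icer(870.0, 0.9215, 750.0, 0.9100))  # ~ $10,435 per QALY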

  17. GPR random noise reduction using BPD and EMD

    NASA Astrophysics Data System (ADS)

    Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz

    2018-04-01

    Ground-penetrating radar (GPR) exploration is a high-frequency technology for exploring near-surface objects and structures accurately. The high-frequency antenna of the GPR system makes it a high-resolution method compared with other geophysical methods, but the frequency range of recorded GPR data is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources, and its correlation with adjacent traces is nearly zero. These characteristics, together with the high accuracy of the GPR system, make denoising essential for interpretable results. The main objective of this paper is to reduce GPR random noise with basis pursuit denoising (BPD) combined with empirical mode decomposition. Our results on both synthetic and real examples show that, owing to the sifting process, empirical mode decomposition in combination with BPD provides more satisfactory outputs than a time-domain implementation of the BPD method alone. Because of its high computational cost, the BPD-empirical mode decomposition technique should only be used for heavily noisy signals.
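
    The combination can be sketched as follows: sift each trace into intrinsic mode functions, shrink each mode with a noise-scaled soft threshold (soft thresholding being the shrinkage operation at the core of basis pursuit denoising), and reconstruct. This is a simplified stand-in for a full basis pursuit solve, assuming the third-party PyEMD package; the threshold scale k is illustrative.

      import numpy as np
      from PyEMD import EMD  # third-party package, assumed available

      def soft(x, t):
          """Soft-thresholding (shrinkage) operator."""
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def emd_denoise_trace(trace, k=3.0):
          """Sift a GPR trace into IMFs, shrink each with a robust
          noise-scaled threshold, and sum the shrunken modes."""
          imfs = EMD()(trace)
          out = np.zeros_like(trace)
          for imf in imfs:
              sigma = np.median(np.abs(imf)) / 0.6745  # robust noise estimate
              out += soft(imf, k * sigma)
          return out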

  18. An empirical comparative study on biological age estimation algorithms with an application of Work Ability Index (WAI).

    PubMed

    Cho, Il Haeng; Park, Kyung S; Lim, Chang Joo

    2010-02-01

    In this study, we described the characteristics of five different biological age (BA) estimation algorithms: (i) multiple linear regression, (ii) principal component analysis, and the somewhat unique methods developed by (iii) Hochschild, (iv) Klemera and Doubal, and (v) a variant of Klemera and Doubal's method. The objective of this study is to find the most appropriate method of BA estimation by examining the association between the Work Ability Index (WAI) and the differences of each algorithm's estimates from chronological age (CA). The WAI was found to be a measure that reflects an individual's current health status rather than deterioration that depends strongly on age. Experiments were conducted on 200 Korean male participants using a BA estimation system developed to be non-invasive, simple to operate, and based on human function. Using the empirical data, BA estimation as well as various analyses, including correlation analysis and discriminant function analysis, were performed. As a result, the empirical data confirmed that Klemera and Doubal's method with uncorrelated variables from principal component analysis produces relatively reliable and acceptable BA estimates.

  19. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Determining the thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed, ranging from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed; the conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time and the specialized software and equipment are not always affordable. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction; consequently, simplified methods are needed for estimating the thawing time of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or the development of new ones, to enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387

  20. An alternative method for centrifugal compressor loading factor modelling

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    The loading factor at the design point is calculated by one or another empirical formula in classical design methods; performance modelling as a whole is out of consideration. Test data from compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character, independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function - the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients, and a good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed defining the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of plus or minus 1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
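
    The two-point linear model described above reduces to a few lines. The sketch below is illustrative only: psi_zero (the loading factor at zero flow rate) and the slope are the two empirical quantities the method determines, and the names are ours, not the authors'.

      def loading_factor(phi, psi_zero, slope):
          """Linear loading-factor performance: psi(phi) = psi_zero - slope*phi,
          fixed by the zero-flow value and the line's inclination."""
          return psi_zero - slope * phi

      # The design-point value follows from the design flow coefficient, e.g.:
      # psi_design = loading_factor(phi_design, psi_zero=0.65, slope=0.55)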

  1. Faculty Forum: HOMER as an Acronym for the Scientific Method

    ERIC Educational Resources Information Center

    Lakin, Jessica L.; Giesler, R. Brian; Morris, Kathryn A.; Vosmik, Jordan R.

    2007-01-01

    Mnemonic strategies, such as acronyms, effectively increase student retention of course material. We present an acronym based on a popular television character to help students remember the basic steps in the scientific method. Our empirical evaluation of the acronym revealed that students found it to be enjoyable, useful, and worthy of use in…

  2. Effects of Instruction-Supported Learning with Worked Examples in Quantitative Method Training

    ERIC Educational Resources Information Center

    Wagner, Kai; Klein, Martin; Klopp, Eric; Puhl, Thomas; Stark, Robin

    2013-01-01

    An experimental field study at a German university was conducted in order to test the effectiveness of an integrated learning environment to improve the acquisition of knowledge about empirical research methods. The integrated learning environment was based on the combination of instruction-oriented and problem-oriented design principles and…

  3. Validating Accelerometry and Skinfold Measures in Youth with Down Syndrome

    ERIC Educational Resources Information Center

    Esposito, Phil Michael

    2012-01-01

    Current methods for measuring quantity and intensity of physical activity based on accelerometer output have been studied and validated in youth. These methods have been applied to youth with Down syndrome (DS) with no empirical research done to validate these measures. Similarly, individuals with DS have unique body proportions not represented by…

  4. An Empirical Comparison of Heterogeneity Variance Estimators in 12,894 Meta-Analyses

    ERIC Educational Resources Information Center

    Langan, Dean; Higgins, Julian P. T.; Simmonds, Mark

    2015-01-01

    Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and…
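
    For reference, the moment-based DerSimonian-Laird estimator mentioned above has a closed form that is easy to state in code; this is the textbook formula, shown here as a sketch rather than anything specific to the cited comparison.

      import numpy as np

      def dersimonian_laird_tau2(effects, variances):
          """DerSimonian-Laird between-study variance:
          tau^2 = max(0, (Q - (k-1)) / (sum(w) - sum(w^2)/sum(w)))."""
          y = np.asarray(effects, dtype=float)
          w = 1.0 / np.asarray(variances, dtype=float)
          k = len(y)
          mu_fixed = np.sum(w * y) / np.sum(w)
          Q = np.sum(w * (y - mu_fixed) ** 2)
          denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          return max(0.0, (Q - (k - 1)) / denom)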

  5. Seismic facies analysis based on self-organizing map and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Du, Hao-kun; Cao, Jun-xing; Xue, Ya-juan; Wang, Xing-jian

    2015-01-01

    Seismic facies analysis plays an important role in seismic interpretation and reservoir model building by offering an effective way to identify changes in geofacies between wells. The selection of input seismic attributes and their time window has an obvious effect on the validity of the classification and requires iterative experimentation and prior knowledge. In general, cluster analysis is sensitive to noise when the waveform serves as the input data, especially with a narrow window. To overcome this limitation, the Empirical Mode Decomposition (EMD) method is introduced into waveform classification based on the SOM. We first de-noise the seismic data using EMD and then cluster the data using a 1D grid SOM. The main advantages of this method are resolution enhancement and noise reduction. 3D seismic data from the western Sichuan basin, China, are used for validation. The application results show that seismic facies analysis can be improved and can better support interpretation. Its strong tolerance for noise makes the proposed method a better seismic facies analysis tool than the classical 1D grid SOM method, especially for waveform clustering with a narrow window.

  6. An empirical inferential method of estimating nitrogen deposition to Mediterranean-type ecosystems: the San Bernardino Mountains case study.

    PubMed

    Bytnerowicz, A; Johnson, R F; Zhang, L; Jenerette, G D; Fenn, M E; Schilling, S L; Gonzalez-Fernandez, I

    2015-08-01

    The empirical inferential method (EIM) allows for spatially and temporally dense estimates of atmospheric nitrogen (N) deposition to Mediterranean ecosystems. This method, set within a GIS platform, is based on ambient concentrations of NH3, NO, NO2 and HNO3; surface conductance of NH4+ and NO3-; stomatal conductance of NH3, NO, NO2 and HNO3; and satellite-derived LAI. Estimated deposition is based on data collected during 2002-2006 in the San Bernardino Mountains (SBM) of southern California. Approximately 2/3 of dry N deposition was to plant surfaces and 1/3 was stomatal uptake. Summer-season N deposition ranged from <3 kg ha^-1 in the eastern SBM to ~60 kg ha^-1 in the western SBM near the Los Angeles Basin, and compared well with the throughfall and big-leaf micrometeorological inferential methods. Extrapolating summertime N deposition estimates to annual values showed large areas of the SBM exceeding critical loads for nutrient N in chaparral and mixed conifer forests.

  7. Speech rhythm analysis with decomposition of the amplitude envelope: characterizing rhythmic patterns within and across languages.

    PubMed

    Tilsen, Sam; Arvaniti, Amalia

    2013-07-01

    This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.

  8. The qualitative-quantitative debate: moving from positivism and confrontation to post-positivism and reconciliation.

    PubMed

    Clark, A M

    1998-06-01

    Critiques of logical positivism form the foundation of a significant number of nursing research papers, with the philosophy being inappropriately deemed synonymous with empirical method. These critiques, which frequently propose an alternative to methods identified with the quantitative paradigm, are thus built on a poor foundation. This paper highlights an alternative philosophy to positivism that can also underpin empirical inquiry: post-positivism. Post-positivism is contrasted with positivism, which is presented as an outmoded and rejected philosophy that should cease to significantly shape inquiry. Though post-positivism has received some acknowledgement in the nursing literature, it has yet to permeate mainstream nursing research, and many still base their arguments on a positivistic view of science. Through a better understanding of post-positivism, and a greater focus on explicating the philosophical assumptions underpinning all research methods, the distinctions long perceived to exist between qualitative and quantitative methodologies can be confined to the past. Methods can then be selected solely on the basis of the nature of the research questions.

  9. Systematic Interpolation Method Predicts Antibody Monomer-Dimer Separation by Gradient Elution Chromatography at High Protein Loads.

    PubMed

    Creasy, Arch; Reck, Jason; Pabst, Timothy; Hunter, Alan; Barker, Gregory; Carta, Giorgio

    2018-05-29

    A previously developed empirical interpolation (EI) method is extended to predict highly overloaded multicomponent elution behavior on a cation exchange (CEX) column based on batch isotherm data. Instead of a fully mechanistic model, the EI method employs an empirically modified multicomponent Langmuir equation to correlate two-component adsorption isotherm data at different salt concentrations. Piecewise cubic interpolating polynomials are then used to predict competitive binding at intermediate salt concentrations. The approach is tested for the separation of monoclonal antibody monomer and dimer mixtures by gradient elution on the cation exchange resin Nuvia HR-S. Adsorption isotherms are obtained over a range of salt concentrations with varying monomer and dimer concentrations. Coupled with a lumped kinetic model, the interpolated isotherms predict the column behavior for highly overloaded conditions. Predictions based on the EI method showed good agreement with experimental elution curves for protein loads up to 40 mg/mL column or about 50% of the column binding capacity. The approach can be extended to other chromatographic modalities and to more than two components. This article is protected by copyright. All rights reserved.
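
    The interpolation step can be pictured with a single-component stand-in: isotherm parameters fitted at a few salt concentrations are joined with piecewise cubics, giving predicted binding at intermediate salt levels. The parameter values below are invented for illustration, and the paper's empirically modified multicomponent Langmuir form is reduced here to the plain single-component equation.

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    salt = np.array([20.0, 50.0, 100.0, 150.0])   # mM, fitted conditions
    qmax = np.array([95.0, 80.0, 45.0, 12.0])     # mg/mL resin (placeholder)
    k_eq = np.array([8.0, 3.5, 0.9, 0.1])         # mL/mg (placeholder)

    qmax_of_salt = PchipInterpolator(salt, qmax)
    k_of_salt = PchipInterpolator(salt, k_eq)

    def langmuir(c, salt_mM):
        """Single-component Langmuir isotherm at an intermediate salt level."""
        q, k = qmax_of_salt(salt_mM), k_of_salt(salt_mM)
        return q * k * c / (1.0 + k * c)

    print(langmuir(c=2.0, salt_mM=75.0))   # bound protein, mg/mL resin
    ```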

  10. Empirical Prediction of Aircraft Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Guo, Yue-Ping

    2005-01-01

    This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.

  11. Toward a bioethical framework for antibiotic use, antimicrobial resistance and for empirically designing ethically robust strategies to protect human health: a research protocol

    PubMed Central

    Martins Pereira, Sandra; de Sá Brandão, Patrícia Joana; Araújo, Joana; Carvalho, Ana Sofia

    2017-01-01

    Introduction Antimicrobial resistance (AMR) is a challenging global and public health issue, raising bioethical challenges, considerations and strategies. Objectives This research protocol presents a conceptual model leading to formulating an empirically based bioethics framework for antibiotic use, AMR and designing ethically robust strategies to protect human health. Methods Mixed methods research will be used and operationalized into five substudies. The bioethical framework will encompass and integrate two theoretical models: global bioethics and ethical decision-making. Results Being a study protocol, this article reports on planned and ongoing research. Conclusions Based on data collection, future findings and using a comprehensive, integrative, evidence-based approach, a step-by-step bioethical framework will be developed for (i) responsible use of antibiotics in healthcare and (ii) design of strategies to decrease AMR. This will entail the analysis and interpretation of approaches from several bioethical theories, including deontological and consequentialist approaches, and the implications of uncertainty to these approaches. PMID:28459355

  12. Evaluating the utility of two gestural discomfort evaluation methods

    PubMed Central

    Son, Minseok; Jung, Jaemoon; Park, Woojin

    2017-01-01

    Evaluating the physical discomfort of designed gestures is important for creating safe and usable gesture-based interaction systems; yet gestural discomfort evaluation has not been extensively studied in HCI, and few evaluation methods are currently available whose utility has been experimentally confirmed. To address this, this study empirically demonstrated the utility of the subjective rating method, applied after a small number of gesture repetitions (a maximum of four), for evaluating designed gestures in terms of the physical discomfort that results from prolonged, repetitive gesture use. The subjective rating method has been widely used in previous gesture studies, but without empirical evidence of its utility. This study also proposed a gesture discomfort evaluation method based on an existing ergonomics posture evaluation tool, Rapid Upper Limb Assessment (RULA), and demonstrated its utility for the same purpose. RULA is a postural analysis tool that quantifies work-related musculoskeletal disorder risks for manual tasks and was hypothesized to be capable of correctly determining discomfort resulting from prolonged, repetitive gesture use. The two methods were evaluated through comparisons against a baseline method involving discomfort rating after actual prolonged, repetitive gesture use. Correlation analyses indicated that both methods were in good agreement with the baseline. The methods proposed in this study appear useful for predicting discomfort resulting from prolonged, repetitive gesture use and are expected to help interaction designers create safe and usable gesture-based interaction systems. PMID:28423016

  13. Research on Sustainable Development Level Evaluation of Resource-based Cities Based on Shapley Entropy and Choquet Integral

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Qu, Weilu; Qiu, Weiting

    2018-03-01

    In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated using the maximum Shapley entropy principle; the Choquet integral is then introduced to calculate a comprehensive evaluation value for each city from the bottom up. Finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, providing theoretical support for the sustainable development path and reform direction of resource-based cities.
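
    For readers unfamiliar with the aggregation step, the sketch below computes a Choquet integral of attribute scores with respect to a capacity (fuzzy measure) over attribute subsets. The three attributes and the additive capacity are made-up examples; the paper instead derives the capacity from the maximum Shapley entropy principle.

    ```python
    from itertools import combinations

    def choquet(scores, mu):
        """scores: {attribute: value in [0, 1]}; mu: {frozenset: capacity}."""
        items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending values
        total, prev = 0.0, 0.0
        for i, (attr, value) in enumerate(items):
            at_least = frozenset(a for a, _ in items[i:])      # value >= current
            total += (value - prev) * mu[at_least]
            prev = value
        return total

    scores = {"economy": 0.6, "environment": 0.3, "society": 0.8}
    # additive (equal-weight) capacity purely for demonstration
    mu = {frozenset(s): len(s) / 3
          for r in range(4) for s in combinations(scores, r)}
    print(choquet(scores, mu))   # equals the plain mean for an additive mu
    ```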

  14. Empirical validation of a real options theory based method for optimizing evacuation decisions within chemical plants.

    PubMed

    Reniers, G L L; Audenaert, A; Pauwels, N; Soudan, K

    2011-02-15

    This article empirically assesses and validates a methodology for making evacuation decisions in case of major fire accidents in chemical clusters. A number of empirical results are presented, processed and discussed with respect to the implications and management of evacuation decisions in chemical companies. It is shown that, in realistic industrial settings, suboptimal interventions may result if the prospect of obtaining additional information at later stages of the decision process is ignored. Empirical results also show that the implications of interventions, as well as the time and workforce required to complete particular shutdown activities, may differ greatly from one company to another. Therefore, to be optimal from an economic viewpoint, it is essential that precautionary evacuation decisions be tailor-made per company. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Computer implemented empirical mode decomposition method, apparatus, and article of manufacture for two-dimensional signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2001-01-01

    A computer implemented method of processing two-dimensional physical signals includes five basic components and associated techniques for presenting the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMFs) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMFs of each profile are subjected to a Hilbert Transform. The fourth component collates the Hilbert-transformed IMFs of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMFs by, for example, filtering the two-dimensional signal through reconstruction from selected IMFs.
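
    A compressed sketch of the pipeline, under the simplifying assumption that the 1D profiles are just the rows of the 2D field: each row is decomposed with EMD, the IMFs are Hilbert-transformed, and the instantaneous amplitudes are collated into a 2D array. The random input and the summing of amplitudes across IMFs are illustrative shortcuts rather than the patent's full Hilbert Spectrum construction.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EMD   # pip install EMD-signal

    image = np.random.default_rng(1).standard_normal((32, 256))  # 2D signal
    emd = EMD()

    amplitude_maps = []
    for row in image:                        # component 1: 1D profiles
        imfs = emd.emd(row)                  # component 2: EMD per profile
        analytic = hilbert(imfs, axis=1)     # component 3: Hilbert transform
        amplitude_maps.append(np.abs(analytic).sum(axis=0))
    spectrum_2d = np.vstack(amplitude_maps)  # component 4: collated 2D array
    ```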

  16. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of model-based approaches for toroidal plasmas have shown better control performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. Such a model can be obtained empirically through a systematic procedure called system identification, and is used in this work to design a model predictive controller that stabilizes multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. The paper also discusses an additional use of the empirical model: estimating the error field in EXTRAP T2R. Two potential estimation methods are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.

  17. DNA Barcoding of Recently Diverged Species: Relative Performance of Matching Methods

    PubMed Central

    van Velzen, Robin; Weitschek, Emanuel; Felici, Giovanni; Bakker, Freek T.

    2012-01-01

    Recently diverged species are challenging for identification, yet they are frequently of special interest scientifically as well as from a regulatory perspective. DNA barcoding has proven instrumental in species identification, especially in insects and vertebrates, but for the identification of recently diverged species it has been reported to be problematic in some cases. Problems are mostly due to incomplete lineage sorting or simply lack of a ‘barcode gap’ and probably related to large effective population size and/or low mutation rate. Our objective was to compare six methods in their ability to correctly identify recently diverged species with DNA barcodes: neighbor joining and parsimony (both tree-based), nearest neighbor and BLAST (similarity-based), and the diagnostic methods DNA-BAR, and BLOG. We analyzed simulated data assuming three different effective population sizes as well as three selected empirical data sets from published studies. Results show, as expected, that success rates are significantly lower for recently diverged species (∼75%) than for older species (∼97%) (P<0.00001). Similarity-based and diagnostic methods significantly outperform tree-based methods, when applied to simulated DNA barcode data (P<0.00001). The diagnostic method BLOG had highest correct query identification rate based on simulated (86.2%) as well as empirical data (93.1%), indicating that it is a consistently better method overall. Another advantage of BLOG is that it offers species-level information that can be used outside the realm of DNA barcoding, for instance in species description or molecular detection assays. Even though we can confirm that identification success based on DNA barcoding is generally high in our data, recently diverged species remain difficult to identify. Nevertheless, our results contribute to improved solutions for their accurate identification. PMID:22272356

  18. DNA barcoding of recently diverged species: relative performance of matching methods.

    PubMed

    van Velzen, Robin; Weitschek, Emanuel; Felici, Giovanni; Bakker, Freek T

    2012-01-01

    Recently diverged species are challenging for identification, yet they are frequently of special interest scientifically as well as from a regulatory perspective. DNA barcoding has proven instrumental in species identification, especially in insects and vertebrates, but for the identification of recently diverged species it has been reported to be problematic in some cases. Problems are mostly due to incomplete lineage sorting or simply lack of a 'barcode gap' and probably related to large effective population size and/or low mutation rate. Our objective was to compare six methods in their ability to correctly identify recently diverged species with DNA barcodes: neighbor joining and parsimony (both tree-based), nearest neighbor and BLAST (similarity-based), and the diagnostic methods DNA-BAR, and BLOG. We analyzed simulated data assuming three different effective population sizes as well as three selected empirical data sets from published studies. Results show, as expected, that success rates are significantly lower for recently diverged species (∼75%) than for older species (∼97%) (P<0.00001). Similarity-based and diagnostic methods significantly outperform tree-based methods, when applied to simulated DNA barcode data (P<0.00001). The diagnostic method BLOG had highest correct query identification rate based on simulated (86.2%) as well as empirical data (93.1%), indicating that it is a consistently better method overall. Another advantage of BLOG is that it offers species-level information that can be used outside the realm of DNA barcoding, for instance in species description or molecular detection assays. Even though we can confirm that identification success based on DNA barcoding is generally high in our data, recently diverged species remain difficult to identify. Nevertheless, our results contribute to improved solutions for their accurate identification.

  19. How rational should bioethics be? The value of empirical approaches.

    PubMed

    Alvarez, A A

    2001-10-01

    Rational justification of claims with empirical content calls for empirical and not only normative philosophical investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary in providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods. Moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue that individualistic informed consent, for example, is not compatible with the Asian communitarian orientation. But this normative claim rests on an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary and factual to neatly characterize some cultures as individualistic and some as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, there is a need to investigate the nature of the local ethos before making any appeal to authenticity. Otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be reasonably guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.

  20. The scale-dependent market trend: Empirical evidences using the lagged DFA method

    NASA Astrophysics Data System (ADS)

    Li, Daye; Kou, Zhun; Sun, Qiankun

    2015-09-01

    In this paper we conduct an empirical study and test the efficiency of 44 important market indexes at multiple scales. A modified method based on lagged detrended fluctuation analysis is utilized to maximize the information on long-term correlations from the non-zero lags and to keep the margin of error small when measuring the local Hurst exponent. Our empirical results illustrate a common pattern in the majority of the measured market indexes: they tend to be persistent (local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with economic cycles, it can be concluded that economic cycles cause anti-persistence at large time scales but that other factors are also at work. The empirical results support the view that financial markets are multi-fractal and indicate that deviations from efficiency, and the type of model appropriate for describing the trend of market prices, depend on the forecasting horizon.
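
    As background for the lagged modification, the sketch below implements plain DFA for a single, global scaling exponent; the authors' non-zero-lag variant and the windowing used to obtain a local Hurst exponent are not reproduced. The random input is a placeholder for index returns.

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
        """Slope of log F(s) vs log s; ~0.5 for uncorrelated noise."""
        profile = np.cumsum(x - np.mean(x))
        flucts = []
        for s in scales:
            rms = []
            for i in range(len(profile) // s):
                seg = profile[i * s:(i + 1) * s]
                fit = np.polyval(np.polyfit(np.arange(s), seg, 1), np.arange(s))
                rms.append(np.sqrt(np.mean((seg - fit) ** 2)))
            flucts.append(np.mean(rms))
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    returns = np.random.default_rng(2).standard_normal(4096)  # placeholder data
    print(dfa_exponent(returns))
    ```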

  1. A method of reflexive balancing in a pragmatic, interdisciplinary and reflexive bioethics.

    PubMed

    Ives, Jonathan

    2014-07-01

    In recent years there has been a wealth of literature arguing the need for empirical and interdisciplinary approaches to bioethics, based on the premise that an empirically informed ethical analysis is more grounded, contextually sensitive and therefore more relevant to clinical practice than an 'abstract' philosophical analysis. Bioethics has (arguably) always been an interdisciplinary field, and the rise of 'empirical' (bio)ethics need not be seen as an attempt to give a new name to the longstanding practice of interdisciplinary collaboration, but can perhaps best be understood as a substantive attempt to engage with the nature of that interdisciplinarity and to articulate the relationship between the many different disciplines (some of them empirical) that contribute to the field. It can also be described as an endeavour to explain how different disciplinary approaches can be integrated to effectively answer normative questions in bioethics, and fundamental to that endeavour is the need to think about how a robust methodology can be articulated that successfully marries apparently divergent epistemological and metaethical perspectives with method. This paper proposes 'Reflexive Bioethics' (RB) as a methodology for interdisciplinary and empirical bioethics, which utilizes a method of 'Reflexive Balancing' (RBL). RBL has been developed in response to criticisms of various forms of reflective equilibrium, and is built upon a pragmatic characterization of Bioethics and a 'quasi-moral foundationalism', which allows RBL to avoid some of the difficulties associated with RE and yet retain the flexible egalitarianism that makes it intuitively appealing to many. © 2013 John Wiley & Sons Ltd.

  2. Beyond Blood Culture and Gram Stain Analysis: A Review of Molecular Techniques for the Early Detection of Bacteremia in Surgical Patients.

    PubMed

    Scerbo, Michelle H; Kaplan, Heidi B; Dua, Anahita; Litwin, Douglas B; Ambrose, Catherine G; Moore, Laura J; Murray, Col Clinton K; Wade, Charles E; Holcomb, John B

    2016-06-01

    Sepsis from bacteremia occurs in 250,000 cases annually in the United States, has a mortality rate as high as 60%, and is associated with a poorer prognosis than localized infection. Because of these high figures, empiric antibiotic administration for patients with systemic inflammatory response syndrome (SIRS) and suspected infection is the second most common indication for antibiotic administration in intensive care units (ICUs). However, overuse of empiric antibiotics contributes to the development of opportunistic infections, antibiotic resistance, and the increase in multi-drug-resistant bacterial strains. The current method of diagnosing and ruling out bacteremia is via blood culture (BC) and Gram stain (GS) analysis. Conventional and molecular methods for diagnosing bacteremia were reviewed and compared, and the clinical implications, use, and current clinical trials of polymerase chain reaction (PCR)-based methods for detecting bacterial pathogens in the blood stream were detailed. BC/GS has several disadvantages: some bacteria do not grow in culture media; others do not Gram stain appropriately; and cultures can require up to 5 d to guide or discontinue antibiotic treatment. PCR-based methods can potentially be applied to detect microbes in human blood samples rapidly, accurately, and directly. Compared with conventional BC/GS, particular advantages of molecular methods (specifically, PCR-based methods) include faster results, leading to possible improved antibiotic stewardship when bacteremia is not present.

  3. Entering the Historical Problem Space: Whole-Class Text-Based Discussion in History Class

    ERIC Educational Resources Information Center

    Reisman, Abby

    2015-01-01

    Background/Context: The Common Core State Standards Initiative reveals how little we understand about the components of effective discussion-based instruction in disciplinary history. Although the case for classroom discussion as a core method for subject matter learning stands on stable theoretical and empirical ground, to date, none of the…

  4. Efficacy of the Technological/Engineering Design Approach: Imposed Cognitive Demands within Design-Based Biotechnology Instruction

    ERIC Educational Resources Information Center

    Wells, John G.

    2016-01-01

    Though not empirically established as an efficacious pedagogy for promoting higher order thinking skills, technological/engineering design-based learning in K-12 STEM education is increasingly embraced as a core instructional method for integrative STEM learning that promotes the development of student critical thinking skills (Honey, Pearson,…

  5. Dewey's Concept of Experience for Inquiry-Based Landscape Drawing during Field Studies

    ERIC Educational Resources Information Center

    Tillmann, Alexander; Albrecht, Volker; Wunderlich, Jürgen

    2017-01-01

    The epistemological and educational philosophy of John Dewey is used as a theoretical basis to analyze processes of knowledge construction during geographical field studies. The experience of landscape drawing as a method of inquiry and a starting point for research-based learning is empirically evaluated. The basic drawing skills are acquired…

  6. Child and Adolescent Behaviorally Based Disorders: A Critical Review of Reliability and Validity

    ERIC Educational Resources Information Center

    Mallett, Christopher A.

    2014-01-01

    Objectives: The purpose of this study was to investigate the historical construction and empirical support of two child and adolescent behaviorally based mental health disorders: oppositional defiant and conduct disorders. Method: The study utilized a historiography methodology to review, from 1880 to 2012, these disorders' inclusion in…

  7. ACT for Leadership: Using Acceptance and Commitment Training to Develop Crisis-Resilient Change Managers

    ERIC Educational Resources Information Center

    Moran, Daniel J.; Consulting, Pickslyde

    2010-01-01

    The evidence-based executive coaching movement suggests translating empirical research into practical methods to help leaders develop a repertoire of crisis resiliency and value-directed change management skills. Acceptance and Commitment Therapy (ACT) is an evidence-based modern cognitive-behavior therapy approach that has been applied to…

  8. Computer-Based Methods for Collecting Peer Nomination Data: Utility, Practice, and Empirical Support

    ERIC Educational Resources Information Center

    van den Berg, Yvonne H. M.; Gommans, Rob

    2017-01-01

    New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the…

  9. An Empirical Typology of Residential Care/Assisted Living Based on a Four-State Study

    ERIC Educational Resources Information Center

    Park, Nan Sook; Zimmerman, Sheryl; Sloane, Philip D.; Gruber-Baldini, Ann L.; Eckert, J. Kevin

    2006-01-01

    Purpose: Residential care/assisted living describes diverse facilities providing non-nursing home care to a heterogeneous group of primarily elderly residents. This article derives typologies of assisted living based on theoretically and practically grounded evidence. Design and Methods: We obtained data from the Collaborative Studies of Long-Term…

  10. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  11. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  12. VORSTAB: A computer program for calculating lateral-directional stability derivatives with vortex flow effect

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward

    1985-01-01

    A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combination. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy. The effect of vortex breakdown is accounted for by an empirical method. A summary of the theoretical method, program capabilities, input format, output variables and program job control set-up are described. Three test cases are presented as guides for potential users of the code.

  13. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  14. Intelligent diagnosis of short hydraulic signal based on improved EEMD and SVM with few low-dimensional training samples

    NASA Astrophysics Data System (ADS)

    Zhang, Meijun; Tang, Jian; Zhang, Xiaoming; Zhang, Jiaojiao

    2016-03-01

    The high classification accuracy of an intelligent diagnosis method often requires a large number of training samples with high-dimensional eigenvectors, and the characteristics of the signal must be extracted accurately. Although the existing EMD (empirical mode decomposition) and EEMD (ensemble empirical mode decomposition) are suitable for processing non-stationary and non-linear signals, their decomposition accuracy becomes very poor when a short signal, such as a hydraulic impact signal, is concerned. An improved EEMD is proposed specifically for short hydraulic impact signals. The improvements are mainly reflected in four aspects: self-adaptive de-noising based on EEMD, signal extension based on SVM (support vector machine), extreme center fitting based on cubic spline interpolation, and pseudo-component exclusion based on cross-correlation analysis. After the energy eigenvector is extracted from the result of the improved EEMD, fault pattern recognition based on SVM with a small number of low-dimensional training samples is studied. Finally, the diagnosis ability of the improved EEMD+SVM method is compared with that of the EEMD+SVM and EMD+SVM methods; its diagnosis accuracy is distinctly higher than that of the other two methods whether the dimension of the eigenvectors is low or high. The improved EEMD is well suited to the decomposition of short signals, such as hydraulic impact signals, and its combination with SVM shows high ability in the diagnosis of hydraulic impact faults.
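
    The diagnosis stage can be sketched as follows, leaving out the paper's four EEMD improvements (those would replace the plain EEMD call): decompose each short signal, take the energy and standard deviation of the leading IMFs as a low-dimensional feature vector, and train an SVM. The package names (PyEMD, scikit-learn), the trial count, and the random placeholder signals and labels are assumptions.

    ```python
    import numpy as np
    from PyEMD import EEMD          # pip install EMD-signal
    from sklearn.svm import SVC     # pip install scikit-learn

    def imf_features(signal, max_imf=4):
        imfs = EEMD(trials=20).eemd(signal, max_imf=max_imf)
        feats = []
        for imf in imfs[:max_imf]:
            feats += [np.sum(imf ** 2), np.std(imf)]   # energy and std per IMF
        feats += [0.0] * (2 * max_imf - len(feats))    # pad if fewer IMFs
        return feats

    rng = np.random.default_rng(3)
    signals = rng.standard_normal((40, 256))   # placeholder short signals
    labels = rng.integers(0, 2, size=40)       # placeholder fault classes
    X = np.array([imf_features(s) for s in signals])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X[:5]))
    ```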

  15. Coupling of ab initio density functional theory and molecular dynamics for the multiscale modeling of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Ng, T. Y.; Yeak, S. H.; Liew, K. M.

    2008-02-01

    A multiscale technique is developed that couples empirical molecular dynamics (MD) and ab initio density functional theory (DFT). An overlap handshaking region between the empirical MD and ab initio DFT regions is formulated, and the interaction forces between the carbon atoms are calculated based on the second-generation reactive empirical bond order potential, the long-range Lennard-Jones potential, and the quantum-mechanical DFT-derived forces. A density-of-points algorithm is also developed to track all interatomic distances in the system and to activate and establish the DFT and handshaking regions. Through parallel computing, this multiscale method is used here to study the dynamic behavior of single-walled carbon nanotubes (SWCNTs) under asymmetrical axial compression. The detection of sideways buckling due to the asymmetrical axial compression is reported and discussed. It is noted from this study on SWCNTs that the MD results may be stiffer compared to those with electron density considerations, i.e., first-principles ab initio methods.

  16. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
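
    A bare-bones version of the blinded step is sketched below: pool the internal-pilot data without using group labels, estimate the variance with the one-sample estimator, and plug it into the standard two-arm sample size formula. All numerical settings are invented, and the adjusted significance level derived in the paper is not applied.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(9)
    delta, sigma_true = 0.5, 1.2             # assumed effect and true SD
    alpha, power, n_pilot = 0.025, 0.9, 40   # one-sided alpha; pilot size per arm

    # blinded pooling: treatment labels are never used
    pilot = np.concatenate([rng.normal(0.0, sigma_true, n_pilot),
                            rng.normal(delta, sigma_true, n_pilot)])
    s2_blinded = pilot.var(ddof=1)           # one-sample variance estimator

    z = norm.ppf(1 - alpha) + norm.ppf(power)
    n_final = int(np.ceil(2 * z ** 2 * s2_blinded / delta ** 2))  # per arm
    print(n_final)
    ```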

  17. Benford's law and the FSD distribution of economic behavioral micro data

    NASA Astrophysics Data System (ADS)

    Villas-Boas, Sofia B.; Fu, Qiuzi; Judge, George

    2017-11-01

    In this paper, we focus on the first significant digit (FSD) distribution of European micro income data and use information-theoretic, entropy-based methods to investigate the degree to which Benford's FSD law is consistent with the nature of these economic behavioral systems. We demonstrate that Benford's law is not an empirical phenomenon that occurs only in important distributions in physical statistics, but that it also arises in self-organizing dynamic economic behavioral systems. The empirical likelihood member of the minimum divergence-entropy family is used to recover country-based income FSD probability density functions and to demonstrate the implications of using a Benford prior reference distribution in economic behavioral system information recovery.
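
    Checking data against Benford's law needs only the digit frequencies and the reference distribution P(d) = log10(1 + 1/d). The sketch below does that comparison on lognormal placeholders rather than the European micro income data, and it omits the empirical likelihood machinery used in the paper.

    ```python
    import numpy as np

    def first_digit(x):
        return int(f"{abs(x):.6e}"[0])   # scientific notation exposes the FSD

    incomes = np.random.default_rng(4).lognormal(mean=10, sigma=1, size=10000)
    digits = np.array([first_digit(v) for v in incomes])

    empirical = np.array([(digits == d).mean() for d in range(1, 10)])
    benford = np.log10(1 + 1 / np.arange(1, 10))
    for d in range(1, 10):
        print(f"{d}: empirical {empirical[d-1]:.3f}  benford {benford[d-1]:.3f}")
    ```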

  18. Signal enhancement based on complex curvelet transform and complementary ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Dong, Lieqian; Wang, Deying; Zhang, Yimeng; Zhou, Datong

    2017-09-01

    Signal enhancement is a necessary step in seismic data processing. In this paper we utilize the complementary ensemble empirical mode decomposition (CEEMD) and complex curvelet transform (CCT) methods to separate signal from random noise and thereby improve the signal-to-noise (S/N) ratio. First, the original noisy data are decomposed into a series of intrinsic mode function (IMF) profiles with the aid of CEEMD. The noisy IMFs are then transformed into the CCT domain. By choosing different thresholds based on the noise level of each IMF profile, the noise in the original data can be suppressed. Finally, we illustrate the effectiveness of the approach on simulated and field datasets.
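
    The decomposition-plus-thresholding structure can be sketched with CEEMDAN from PyEMD; since no widely standard Python curvelet package can be assumed here, a per-IMF soft threshold (universal threshold with a MAD noise estimate) stands in for the complex curvelet step, so this is a structural analogue rather than the paper's CCT implementation.

    ```python
    import numpy as np
    from PyEMD import CEEMDAN   # pip install EMD-signal

    rng = np.random.default_rng(10)
    n = 1024
    t = np.arange(n) / 1000.0
    signal = np.sin(2 * np.pi * 25 * t) + 0.5 * rng.standard_normal(n)

    imfs = CEEMDAN(trials=20)(signal)

    def soft_threshold(x, thr):
        return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

    denoised = np.zeros(n)
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745   # MAD noise-level estimate
        thr = sigma * np.sqrt(2 * np.log(n))      # universal threshold
        denoised += soft_threshold(imf, thr)
    ```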

  19. Technical note: The calibration of (90)Y-labeled SIR-Spheres using a nondestructive spectroscopic assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Selwyn, R.; Micka, J.; DeWerd, L.

    2008-04-15

    (90)Y-labeled SIR-Spheres are currently used to treat patients with hepatic metastases secondary to colorectal adenocarcinoma. In general, the prescribed activity is based on empirical data collected during clinical trials. The activity of the source vial is labeled by the manufacturer as 3.0 GBq ± 10% and is not independently verified by the end user. This technical note shows that the result of a nondestructive spectroscopic assay of a SIR-Spheres sample was 26% higher than the activity stated by the manufacturer. This difference should not impact the current empirical prescription method but may be problematic for patient-specific dosimetry applications, such as image-based dosimetry.

  20. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector-correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a geant4 Monte Carlo model of the detectors and the Linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by a factor of 3.7% for a 1.1 cm diameter field and higher for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.

  1. Empirically-based performance assessment & simulation of pedestrian behavior at unsignalized crossings.

    DOT National Transportation Integrated Search

    2014-09-01

    The objective of this research was to provide an improved understanding of pedestrian-vehicle interaction at mid-block pedestrian crossings and develop methods that can be used in traffic operational analysis and microsimulation packages. Models ...

  2. A literature review on fatigue and creep interaction

    NASA Technical Reports Server (NTRS)

    Chen, W. C.

    1978-01-01

    Life-time prediction methods, which are based on a number of empirical and phenomenological relationships, are presented. Three aspects are reviewed: effects of testing parameters on high temperature fatigue, life-time prediction, and high temperature fatigue crack growth.

  3. Reflections on discrimination and health in India.

    PubMed

    Srivatsan, R

    2015-01-01

    This is a speculative paper on the structure of caste-based discrimination in India. It sketches the field by a) proposing four empirical and historical examples of discrimination in different medical situations; b) suggesting an analytical framework composed of domain, register, temporality and intensity of discrimination; c) proposing that in the Indian historical context, discrimination masks itself, hiding its character behind the veneer of secular ideas; d) arguing that discrimination is not some unfortunate residue of backwardness in modern society that will go away, but is the force of social hierarchy transforming itself into a fully modern capitalist culture. The paper then arrives at the understanding that discrimination is pandemic across India. The conclusion suggests that in India today, we need proposals, hypotheses and arguments that help us establish the ethical framework for meaningful empirical research that sociological studies of medical ethics and the epidemiology of discrimination can pursue. Its method is that of logical and speculative argument based on experience, with examples of different forms of discrimination to clarify the point being made. No specific research was undertaken for this purpose since the paper is not empirically based.

  4. A Note on Procrustean Rotation in Exploratory Factor Analysis: A Computer Intensive Approach to Goodness-of-Fit Evaluation.

    ERIC Educational Resources Information Center

    Raykov, Tenko; Little, Todd D.

    1999-01-01

    Describes a method for evaluating results of Procrustean rotation to a target factor pattern matrix in exploratory factor analysis. The approach, based on the bootstrap method, yields empirical approximations of the sampling distributions of: (1) differences between target elements and rotated factor pattern matrices; and (2) the overall…

  5. Assessing Risk for Sexual Offenders in New Zealand: Development and Validation of a Computer-Scored Risk Measure

    ERIC Educational Resources Information Center

    Skelton, Alexander; Riley, David; Wales, David; Vess, James

    2006-01-01

    A growing research base supports the predictive validity of actuarial methods of risk assessment with sexual offenders. These methods use clearly defined variables with demonstrated empirical association with re-offending. The advantages of actuarial measures for screening large numbers of offenders quickly and economically are further enhanced…

  6. A Unifying Framework for Causal Analysis in Set-Theoretic Multimethod Research

    ERIC Educational Resources Information Center

    Rohlfing, Ingo; Schneider, Carsten Q.

    2018-01-01

    The combination of Qualitative Comparative Analysis (QCA) with process tracing, which we call set-theoretic multimethod research (MMR), is steadily becoming more popular in empirical research. Despite the fact that both methods have an elected affinity based on set theory, it is not obvious how a within-case method operating in a single case and a…

  7. Selecting and Using Information Sources: Source Preferences and Information Pathways of Israeli Library and Information Science Students

    ERIC Educational Resources Information Center

    Bronstein, Jenny

    2010-01-01

    Introduction: The study investigated the source preference criteria of library and information science students for their academic and personal information needs. Method: The empirical study was based on two methods of data collection. Eighteen participants wrote a personal diary for four months in which they recorded search episodes and answered…

  8. The Effect of Achievement Test Selection on Identification of Learning Disabilities within a Patterns of Strengths and Weaknesses Framework

    ERIC Educational Resources Information Center

    Miciak, Jeremy; Taylor, W. Pat; Denton, Carolyn A.; Fletcher, Jack M.

    2015-01-01

    Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability of LD classification decisions of the concordance/discordance method (C/DM) across different psychoeducational assessment batteries. C/DM criteria were…

  9. Agreement and Coverage of Indicators of Response to Intervention: A Multimethod Comparison and Simulation

    ERIC Educational Resources Information Center

    Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Miciak, Jeremy; Francis, David J.; Denton, Carolyn

    2014-01-01

    Purpose: Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (postintervention benchmarks) and dual-discrepancy growth methods based on growth during the intervention and final status for assessing response to intervention and (2) a…

  10. Alignment of Standards and Assessment: A Theoretical and Empirical Study of Methods for Alignment

    ERIC Educational Resources Information Center

    Nasstrom, Gunilla; Henriksson, Widar

    2008-01-01

    Introduction: In a standards-based school-system alignment of policy documents with standards and assessment is important. To be able to evaluate whether schools and students have reached the standards, the assessment should focus on the standards. Different models and methods can be used for measuring alignment, i.e. the correspondence between…

  11. Empirical and Clinical Methods in the Assessment of Personality and Psychopathology: An Integrative Approach for Training

    ERIC Educational Resources Information Center

    Flanagan, Rosemary; Esquivel, Giselle B.

    2006-01-01

    School psychologists have a critical role in identifying social-emotional problems and psychopathology in youth based on a set of personality-assessment competencies. The development of competencies in assessing personality and psychopathology is complex, requiring a variety of integrated methods and approaches. Given the limited extent and scope…

  12. A methodology for reduced order modeling and calibration of the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Linares, Richard

    2017-10-01

    Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized as either empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. In addition, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition for developing a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
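
    The core reduction step is an SVD of mean-centered model snapshots; retaining a few modes gives the low-dimensional state that the calibration then adjusts. The snapshot matrix below is synthetic with a built-in low rank, standing in for gridded model density output, and the mode count of 10 is an arbitrary choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # rows: spatial grid points, columns: daily snapshots (synthetic, low rank)
    snapshots = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 365))

    mean_field = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

    r = 10                                         # retained POD modes
    modes = U[:, :r]                               # spatial basis
    coeffs = modes.T @ (snapshots - mean_field)    # reduced-order state
    approx = mean_field + modes @ coeffs
    print(np.linalg.norm(snapshots - approx) / np.linalg.norm(snapshots))
    ```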

  13. Efficient association study design via power-optimized tag SNP selection

    PubMed Central

    HAN, BUHM; KANG, HYUN MIN; SEO, MYEONG SEONG; ZAITLEN, NOAH; ESKIN, ELEAZAR

    2008-01-01

    Discovering statistical correlation between causal genetic variation and clinical traits through association studies is an important method for identifying the genetic basis of human diseases. Since fully resequencing a cohort is prohibitively costly, genetic association studies take advantage of local correlation structure (or linkage disequilibrium) between single nucleotide polymorphisms (SNPs) by selecting a subset of SNPs to be genotyped (tag SNPs). While many current association studies are performed using commercially available high-throughput genotyping products that define a set of tag SNPs, choosing tag SNPs remains an important problem for both custom follow-up studies as well as designing the high-throughput genotyping products themselves. The most widely used tag SNP selection method optimizes over the correlation between SNPs (r2). However, tag SNPs chosen based on an r2 criterion do not necessarily maximize the statistical power of an association study. We propose a study design framework that chooses SNPs to maximize power and efficiently measures the power through empirical simulation. Empirical results based on the HapMap data show that our method gains considerable power over a widely used r2-based method, or equivalently reduces the number of tag SNPs required to attain the desired power of a study. Our power-optimized 100k whole genome tag set provides equivalent power to the Affymetrix 500k chip for the CEU population. For the design of custom follow-up studies, our method provides up to twice the power increase using the same number of tag SNPs as r2-based methods. Our method is publicly available via web server at http://design.cs.ucla.edu. PMID:18702637

  14. Concept Analysis of Spirituality: An Evolutionary Approach.

    PubMed

    Weathers, Elizabeth; McCarthy, Geraldine; Coffey, Alice

    2016-04-01

    The aim of this article is to clarify the concept of spirituality for future nursing research. Previous concept analyses of spirituality have mostly reviewed the conceptual literature with little consideration of the empirical literature; the literature reviewed in prior concept analyses extends from 1972 to 2005, with no analysis conducted in the past 9 years. Rodgers' evolutionary framework was used to review both the theoretical and empirical literature pertaining to spirituality. Evolutionary concept analysis is a formal method of philosophical inquiry in which papers are analyzed to identify attributes, antecedents, and consequences of the concept. Data sources comprised both empirical and conceptual literature. Three defining attributes of spirituality were identified: connectedness, transcendence, and meaning in life. A conceptual definition of spirituality was proposed based on the findings. Also, four antecedents and five primary consequences of spirituality were identified. Spirituality is a complex concept. This concept analysis adds some clarification by proposing a definition of spirituality that is underpinned by both conceptual and empirical research. Furthermore, exemplars of spirituality, based on prior qualitative research, are presented to support the findings. Hence, the findings of this analysis could guide future nursing research on spirituality. © 2015 Wiley Periodicals, Inc.

  15. [An anti-Taylor approach: the invention of a method for the cogovernance of health care institutions in order to produce freedom and compromise].

    PubMed

    Campos, G W

    1998-01-01

    This paper describes a new health care management method. A triangular confrontation system was constructed, based on a theoretical review, empirical facts observed from health services, and the researcher's knowledge, jointly analyzed. This new management model was termed 'health-team-focused collegiate management', entailing several original organizational concepts: production unity, matrix-based reference team, collegiate management system, cogovernance, and product/production interface.

  16. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    PubMed Central

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua

    2015-01-01

    In order to guarantee the stable operation of shearers and promote the construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer, and the cutting sound is collected as the recognition criterion to overcome the disadvantages of large size, contact measurement, and low identification rate of traditional detectors. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. End-point continuation based on stored historical data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985

  17. Comparison of the lifting-line free vortex wake method and the blade-element-momentum theory regarding the simulated loads of multi-MW wind turbines

    NASA Astrophysics Data System (ADS)

    Hauptmann, S.; Bülk, M.; Schön, L.; Erbslöh, S.; Boorsma, K.; Grasso, F.; Kühn, M.; Cheng, P. W.

    2014-12-01

    Design load simulations for wind turbines are traditionally based on the blade-element-momentum theory (BEM). The BEM approach is derived from a simplified representation of the rotor aerodynamics and several semi-empirical correction models. A more sophisticated approach to account for the complex flow phenomena on wind turbine rotors can be found in the lifting-line free vortex wake method. This approach is based on a more physics-based representation, especially for global flow effects, and relies on empirical correction models only for the local flow effects associated with the boundary layer of the rotor blades. In this paper the lifting-line free vortex wake method is compared to a state-of-the-art BEM formulation with regard to aerodynamic and aeroelastic load simulations of the 5MW UpWind reference wind turbine. Different aerodynamic load situations as well as standardised design load cases that are sensitive to the aeroelastic modelling are evaluated in detail. This benchmark makes use of the AeroModule developed by ECN, which has been coupled to the multibody simulation code SIMPACK.

  18. Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.

    PubMed

    Zhao, Yuchao; Frey, H Christopher

    2004-11-01

    Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
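
    The parametric-bootstrap idea for an uncensored emission factor data set can be sketched in a few lines: fit a lognormal by maximum likelihood, then repeatedly resample from the fit to obtain an uncertainty interval for the mean emission factor. The data are random placeholders, and the censored-data variant and the activity-factor product are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    observed = rng.lognormal(mean=-1.0, sigma=0.6, size=25)   # placeholder data

    log_obs = np.log(observed)
    mu_hat, sigma_hat = log_obs.mean(), log_obs.std(ddof=1)   # lognormal fit

    boot_means = [rng.lognormal(mu_hat, sigma_hat, observed.size).mean()
                  for _ in range(2000)]
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean emission factor 95% interval: ({lo:.3f}, {hi:.3f})")
    ```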

  19. Rainfall Prediction of Indian Peninsula: Comparison of Time Series Based Approach and Predictor Based Approach using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Dash, Y.; Mishra, S. K.; Panigrahi, B. K.

    2017-12-01

    Prediction of northeast/post-monsoon rainfall, which occurs during October, November and December (OND) over the Indian peninsula, is a challenging task due to the dynamic nature of the uncertain, chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP) and b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques were applied to SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. Later, this study investigated the applicability of ML methods using the OND rainfall time series for 1948-2014 and forecasted up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms with minimal error scores. Thus, both statistical and empirical methods are found useful for long-range climatic projections.
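
    A hedged sketch of the two compared setups follows: (a) regressing OND rainfall on ocean-atmosphere predictor features, and (b) regressing it on its own lagged values. The synthetic arrays, the RandomForest choice, and the holdout split are illustrative assumptions, not the study's configuration:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error

    def lagged_matrix(series, lags=3):
        """Build X from the previous `lags` values, y from the current value."""
        X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
        return X, series[lags:]

    rng = np.random.default_rng(0)
    rain = rng.gamma(2.0, 50.0, size=67)       # stand-in OND rainfall, 1948-2014
    sst_slp = rng.normal(size=(67, 10))        # stand-in SST/SLP predictor features

    # (a) statistical prediction from ocean-atmosphere predictors
    model_a = RandomForestRegressor(random_state=0).fit(sst_slp[:-10], rain[:-10])
    err_a = mean_absolute_error(rain[-10:], model_a.predict(sst_slp[-10:]))

    # (b) empirical prediction from the rainfall time series alone
    X, y = lagged_matrix(rain)
    model_b = RandomForestRegressor(random_state=0).fit(X[:-10], y[:-10])
    err_b = mean_absolute_error(y[-10:], model_b.predict(X[-10:]))
    print(err_a, err_b)
    ```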

  20. Time Domain Strain/Stress Reconstruction Based on Empirical Mode Decomposition: Numerical Study and Experimental Validation.

    PubMed

    He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming

    2016-08-16

    Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for the prompt reconstruction of strain/stress responses at the hot spots of structures based on strain measurements at remote locations. The structural responses measured by a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Then, two numerical examples (a two-span beam and a 19,956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam of the kind commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions, and a detailed error analysis is also provided.
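
    The extrapolation step can be sketched as a linear-algebra operation: modal coordinates are estimated from the measured locations and projected to the hot spot through FE-derived mode-shape matrices. The least-squares estimate below is a simplified stand-in (the paper works with EMD-separated modal responses), and all names are assumptions:

    ```python
    import numpy as np

    def reconstruct_hotspot(strain_meas, Phi_m, Phi_h):
        """strain_meas: (n_sensors, n_samples) measured strain histories;
        Phi_m: (n_sensors, n_modes) mode shapes at measured locations;
        Phi_h: (n_hot, n_modes) mode shapes at the hot spots.
        Returns hot-spot strain time histories, (n_hot, n_samples)."""
        # least-squares modal coordinates from the measured locations
        q, *_ = np.linalg.lstsq(Phi_m, strain_meas, rcond=None)
        # transformation: project modal coordinates to the hot-spot locations
        return Phi_h @ q
    ```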

  1. Adaptive Filtration of Physiological Artifacts in EEG Signals in Humans Using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, V. V.; Runnova, A. E.; Hramov, A. E.

    2018-05-01

    A new method for adaptive filtration of experimental EEG signals in humans and for removal of different physiological artifacts has been proposed. The algorithm of the method includes empirical mode decomposition of the EEG, determination of the number of empirical modes to be considered, analysis of the empirical modes and search for modes that contain artifacts, removal of these modes, and reconstruction of the EEG signal. The method was tested on experimental human EEG signals and demonstrated high efficiency in the removal of different types of physiological EEG artifacts.

  2. A Comparison of Full and Empirical Bayes Techniques for Inferring Sea Level Changes from Tide Gauge Records

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Tingley, M.

    2016-12-01

    Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, and they reflect land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
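
    The rank-histogram check used above is easy to sketch: for each true value, count how many ensemble members fall below it; exchangeable truths and members give a flat histogram. The arrays here are synthetic stand-ins, and the function name is an assumption:

    ```python
    import numpy as np

    def rank_histogram(truth, ensemble):
        """truth: (n,); ensemble: (n, m) posterior samples per target."""
        ranks = (ensemble < truth[:, None]).sum(axis=1)   # rank of truth in ensemble
        return np.bincount(ranks, minlength=ensemble.shape[1] + 1)

    rng = np.random.default_rng(0)
    truth = rng.normal(size=500)
    ensemble = rng.normal(size=(500, 19))   # exchangeable with truth -> ~flat
    print(rank_histogram(truth, ensemble))  # too-narrow error bars pile up at the ends
    ```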

  3. Beyond Blood Culture and Gram Stain Analysis: A Review of Molecular Techniques for the Early Detection of Bacteremia in Surgical Patients

    PubMed Central

    Kaplan, Heidi B.; Dua, Anahita; Litwin, Douglas B.; Ambrose, Catherine G.; Moore, Laura J.; Murray, COL Clinton K.; Wade, Charles E.; Holcomb, John B.

    2016-01-01

    Background: Sepsis from bacteremia occurs in 250,000 cases annually in the United States, has a mortality rate as high as 60%, and is associated with a poorer prognosis than localized infection. Because of these high figures, empiric antibiotic administration for patients with systemic inflammatory response syndrome (SIRS) and suspected infection is the second most common indication for antibiotic administration in intensive care units (ICUs). However, overuse of empiric antibiotics contributes to the development of opportunistic infections, antibiotic resistance, and the increase in multi-drug-resistant bacterial strains. The current method of diagnosing and ruling out bacteremia is via blood culture (BC) and Gram stain (GS) analysis. Methods: Conventional and molecular methods for diagnosing bacteremia were reviewed and compared. The clinical implications, use, and current clinical trials of polymerase chain reaction (PCR)-based methods to detect bacterial pathogens in the blood stream were detailed. Results: BC/GS has several disadvantages: some bacteria do not grow in culture media; others do not stain appropriately; and cultures can require up to 5 days to guide or discontinue antibiotic treatment. PCR-based methods can potentially be applied to detect microbes in human blood samples rapidly, accurately, and directly. Conclusions: Compared with conventional BC/GS, particular advantages of molecular methods (specifically, PCR-based methods) include faster results, leading to possible improved antibiotic stewardship when bacteremia is not present. PMID:26918696

  4. Advances in visual representation of molecular potentials.

    PubMed

    Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen

    2010-06-01

    The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling studies and rational drug design are introduced. The visual representation methods provide detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, covering the electrostatic potential, the lipophilicity potential and the excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results of higher-level quantum chemical methods. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those from the traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.

  5. Enhancement of lung sounds based on empirical mode decomposition and Fourier transform algorithm.

    PubMed

    Mondal, Ashok; Banerjee, Poulami; Somkuwar, Ajay

    2017-02-01

    There is always heart sound (HS) signal interference during the recording of lung sound (LS) signals. This obscures the features of LS signals and creates confusion about pathological states, if any, of the lungs. In this work, a new method is proposed for reduction of heart sound interference based on the empirical mode decomposition (EMD) technique and a prediction algorithm. In this approach, the mixed signal is first split into several components in terms of intrinsic mode functions (IMFs). Thereafter, HS-included segments are localized and removed from them. The missing values in the gap thus produced are predicted by a new Fast Fourier Transform (FFT) based prediction algorithm, and the time domain LS signal is reconstructed by taking an inverse FFT of the estimated missing values. The experiments have been conducted on simulated and recorded HS-corrupted LS signals at three different flow rates and various SNR levels. The performance of the proposed method is evaluated by qualitative and quantitative analysis of the results, and the method is found superior to the baseline method for different SNR levels. Our method gives a cross correlation index (CCI) of 0.9488, signal to deviation ratio (SDR) of 9.8262, and normalized maximum amplitude error (NMAE) of 26.94 for a 0 dB SNR value.

  6. Fatigue crack propagation behavior of stainless steel welds

    NASA Astrophysics Data System (ADS)

    Kusko, Chad S.

    The fatigue crack propagation behavior of austenitic and duplex stainless steel base and weld metals has been investigated using various fatigue crack growth test procedures, ferrite measurement techniques, light optical microscopy, stereomicroscopy, scanning electron microscopy, and optical profilometry. The compliance offset method has been incorporated to measure crack closure during testing in order to determine a stress ratio at which such closure is overcome. Based on this method, an empirically determined stress ratio of 0.60 has been shown to be very successful in overcoming crack closure at all da/dN for gas metal arc and laser welds. This empirically determined stress ratio of 0.60 has been applied to testing of stainless steel base metal and weld metal to understand the influence of microstructure. For the 316L and AL6XN base metals, grain size and grain-plus-twin size have been shown to influence the resulting crack growth behavior. The cyclic plastic zone size model accurately describes crack growth behavior for austenitic stainless steels when the average grain-plus-twin size is considered. Additionally, the effect of the tortuous crack paths observed for the larger grain size base metals can be explained by a literature model for crack deflection. Constant Delta K testing has been used to characterize the crack growth behavior across various regions of the gas metal arc and laser welds at the empirically determined stress ratio of 0.60. Despite an extensive range of stainless steel weld metal FN and delta-ferrite morphologies, neither FN nor delta-ferrite morphology significantly influenced the room temperature crack growth behavior. However, variations in weld metal da/dN can be explained by local surface roughness resulting from large columnar grains and tortuous crack paths in the weld metal.

  7. Rollover risk prediction of heavy vehicles by reliability index and empirical modelling

    NASA Astrophysics Data System (ADS)

    Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles

    2018-03-01

    This paper focuses on combining a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure to assess the vehicle's safe functioning. In the reliability method, computing the maximum LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. To reduce the reliability computation time, an empirical model is developed using the SVM (Support Vector Machines) algorithm to substitute for the vehicle dynamics and rollover models. Preliminary results demonstrate the effectiveness of the proposed approach.
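
    As a rough illustration of the reliability computation, the Monte Carlo sketch below estimates P(max LTR > 1) under uncertain speed and centre-of-gravity height; the closed-form steady-state LTR is a hypothetical stand-in for the paper's vehicle dynamics model (there later replaced by an SVM surrogate), and all parameter values are assumptions:

    ```python
    import numpy as np

    def max_ltr(speed, cg_height, curve_radius=60.0, track_width=2.0, g=9.81):
        """Toy steady-state load transfer ratio, an assumed surrogate:
        LTR ~ 2*h*a_y / (t*g) with lateral acceleration a_y = v^2 / R."""
        a_y = speed ** 2 / curve_radius
        return 2.0 * cg_height * a_y / (track_width * g)

    rng = np.random.default_rng(0)
    speed = rng.normal(18.0, 2.0, size=100_000)    # m/s, uncertain entry speed
    cg_h = rng.normal(1.6, 0.1, size=100_000)      # m, load-dependent CG height
    p_fail = np.mean(max_ltr(speed, cg_h) > 1.0)   # rollover probability estimate
    print(f"P(rollover) ~ {p_fail:.4f}")
    ```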

  8. Measuring the self-similarity exponent in Lévy stable processes of financial time series

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.

    2013-11-01

    Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551] to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially for short time series, and checked that GM algorithms perform well with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, it was proved theoretically that GM algorithms are also valid to explore long memory in (fractional) Lévy stable motions. In this paper, we show empirically by Monte Carlo simulation that GM algorithms are able to accurately calculate the self-similarity index in Lévy stable motions, and find empirical evidence that they are more precise than the absolute value exponent (denoted by AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially for short time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both the GM2 and GHE algorithms are the most accurate for studying financial series. In addition, based on the accuracy of GM algorithms in estimating the self-similarity index in Lévy motions, we provide empirical evidence that the evolution of the stocks of some international market indices, such as the U.S. Small Cap and Nasdaq100, cannot be modeled by means of a Brownian motion.
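
    The generalized Hurst exponent mentioned in the comparison admits a very short implementation: the q-th moment of the increments scales as K_q(tau) ~ tau^(qH), so H falls out of a log-log regression. The sketch below, with its tau range and q=1 default, is an illustrative assumption-laden version, not the authors' code:

    ```python
    import numpy as np

    def ghe(series, q=1.0, taus=np.arange(1, 20)):
        """Generalized Hurst exponent from the scaling of q-th moments."""
        moments = [np.mean(np.abs(series[tau:] - series[:-tau]) ** q)
                   for tau in taus]
        slope = np.polyfit(np.log(taus), np.log(moments), 1)[0]
        return slope / q                     # self-similarity exponent H

    bm = np.cumsum(np.random.default_rng(0).normal(size=10_000))
    print(ghe(bm))                           # ~0.5 for Brownian motion
    ```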

  9. Empirical projection-based basis-component decomposition method

    NASA Astrophysics Data System (ADS)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor-based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium-based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and find that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  10. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial to preventing unexpected accidents and reducing economic loss. In the past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called the empirical wavelet transform has attracted much attention from researchers and engineers, and its applications to bearing fault diagnosis have been reported. The main problem of the empirical wavelet transform is that the Fourier segments it requires are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which means that the Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noise and other strong vibration components. In this paper, sparsity guided empirical wavelet transform is proposed to automatically establish the Fourier segments required in the empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover the required Fourier segments and reveal single and multiple railway axle bearing defects. Besides, comparisons with three popular signal processing methods, including ensemble empirical mode decomposition, the fast kurtogram and the fast spectral correlation, are conducted to highlight the superiority of the proposed method.

  11. Long-Term Stability of Membership in a Wechsler Intelligence Scale for Children--Third Edition (WISC-III) Subtest Core Profile Taxonomy

    ERIC Educational Resources Information Center

    Borsuk, Ellen R.; Watkins, Marley W.; Canivez, Gary L.

    2006-01-01

    Although often applied in practice, clinically based cognitive subtest profile analysis has failed to achieve empirical support. Nonlinear multivariate subtest profile analysis may have benefits over clinically based techniques, but the psychometric properties of these methods must be studied prior to their implementation and interpretation. The…

  12. A Rational Examination of Integrating "Classics" into University General Education Curriculum: An Empirical Survey Based on N University

    ERIC Educational Resources Information Center

    Zhong, Zhenshan; Sun, Mengyao

    2018-01-01

    The power of general education curriculum comes from the enduring classics. The authors apply research methods such as questionnaire survey, interview, and observation to investigate the state of general education curriculum implementation at N University and analyze problems faced by incorporating classics. Based on this, the authors propose that…

  13. Declarative and Dynamic Pedagogical Content Knowledge as Elicited through Two Video-Based Interview Methods

    ERIC Educational Resources Information Center

    Alonzo, Alicia C.; Kim, Jiwon

    2016-01-01

    Although pedagogical content knowledge (PCK) has become widely recognized as an essential part of the knowledge base for teaching, empirical evidence demonstrating a connection between PCK and teaching practice or student learning outcomes is mixed. In response, we argue for further attention to the measurement of dynamic (spontaneous or flexible,…

  14. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  15. Computer-Based Assessment of Collaborative Problem Solving: Exploring the Feasibility of Human-to-Agent Approach

    ERIC Educational Resources Information Center

    Rosen, Yigal

    2015-01-01

    How can activities in which collaborative skills of an individual are measured be standardized? In order to understand how students perform on collaborative problem solving (CPS) computer-based assessment, it is necessary to examine empirically the multi-faceted performance that may be distributed across collaboration methods. The aim of this…

  16. Estimating topological properties of weighted networks from limited information.

    PubMed

    Cimini, Giulio; Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego

    2015-10-01

    A problem typically encountered when studying complex systems is the limitedness of the information available on their topology, which hinders our understanding of their structure and of the dynamical processes taking place on them. A paramount example is provided by financial networks, whose data are privacy protected: Banks publicly disclose only their aggregate exposure towards other banks, keeping individual exposures towards each single bank secret. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here, we develop a reconstruction method, based on statistical mechanics concepts, that makes use of the empirical link density in a highly nontrivial way. Technically, our approach consists in the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems.
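
    The two-step logic of the reconstruction (degrees from strengths plus density, then maximum-entropy inference) can be illustrated compactly. The sketch below shows only the first step under a fitness ansatz p_ij = z*s_i*s_j / (1 + z*s_i*s_j): a parameter z is calibrated so the expected link density matches the empirical one. The function names, bisection bounds, and Pareto-distributed strengths are illustrative assumptions; the weight-assignment step is omitted:

    ```python
    import numpy as np

    def expected_density(z, s):
        x = z * np.outer(s, s)
        p = x / (1.0 + x)                    # fitness-ansatz link probabilities
        np.fill_diagonal(p, 0.0)             # no self-loops
        n = len(s)
        return p.sum() / (n * (n - 1))

    def calibrate_z(s, target_density, lo=1e-12, hi=1e6):
        for _ in range(200):                 # bisection; density is monotone in z
            mid = np.sqrt(lo * hi)           # geometric midpoint: z spans decades
            if expected_density(mid, s) < target_density:
                lo = mid
            else:
                hi = mid
        return mid

    s = np.random.default_rng(0).pareto(2.0, size=100) + 1.0   # node strengths
    z = calibrate_z(s, target_density=0.1)
    print(z, expected_density(z, s))         # matches the empirical link density
    ```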

  18. In-service teachers' perceptions of project-based learning.

    PubMed

    Habók, Anita; Nagy, Judit

    2016-01-01

    The study analyses teachers' perceptions of methods, teacher roles, success and evaluation in project-based learning (PBL) and traditional classroom instruction. The analysis is based on empirical data collected in primary schools and vocational secondary schools. An analysis of 109 questionnaires revealed numerous differences based on degree of experience and type of school. In general, project-based methods were preferred among teachers, who mostly perceived themselves as facilitators and considered motivation and the transmission of values central to their work. Teachers appeared not to capitalize on the use of ICT tools or emotions. Students actively participated in the evaluation process via oral evaluation.

  19. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    PubMed

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is the development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on a signal-derived Empirical Mode Decomposition (EMD) dictionary is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that the coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus no hand-engineered features have to be extracted from the data as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using a support vector machine classifier and five-fold cross-validation. These results are compared with other approaches using the same database, and the suitability of the approach for seizure detection in long-term multi-channel EEG recordings is discussed. Significance: The proposed approach describes a computationally efficient method for automatic seizure detection in long-term multi-channel EEG recordings that does not rely on hand-engineered features. Furthermore, once formed and trained, the dictionary can be used for automatic seizure detection of newly recorded data, making the approach suitable for long-term multi-channel EEG recordings.
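
    The feature step lends itself to a two-line sketch: test signals are projected onto the (unit-normalised) trained dictionary atoms, and the resulting coefficients feed the SVM. The atoms are assumed to be precomputed IMFs; the dictionary learning and pruning stages are omitted, and the function name is an assumption:

    ```python
    import numpy as np

    def projection_features(signal, dictionary):
        """dictionary: (n_atoms, n_samples) rows of dictionary atoms.
        Returns one projection coefficient per atom as the feature vector."""
        atoms = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
        return atoms @ signal                # orthogonal-projection coefficients
    ```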

  20. Parametric study on single shot peening by dimensional analysis method incorporated with finite element method

    NASA Astrophysics Data System (ADS)

    Wu, Xian-Qian; Wang, Xi; Wei, Yan-Peng; Song, Hong-Wei; Huang, Chen-Guang

    2012-06-01

    Shot peening is a widely used surface treatment method that generates compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and the dent profile are important factors for evaluating the effectiveness of the shot peening process. In this paper, the influence of dimensionless parameters on the maximum compressive residual stress and the maximum depth of the dent was investigated. Firstly, dimensionless relations of the processing parameters that affect the maximum compressive residual stress and the maximum depth of the dent were deduced by the dimensional analysis method. Secondly, the influence of each dimensionless parameter on the dimensionless variables was investigated by the finite element method. Furthermore, related empirical formulas were given for each dimensionless parameter based on the simulation results. Finally, comparison between the simulation results and the empirical formulas shows good agreement, indicating that the approach provides a useful way to analyze the influence of each individual parameter.

  1. Fluorescence background removal method for biological Raman spectroscopy based on empirical mode decomposition.

    PubMed

    Leon-Bejarano, Maritza; Dorantes-Mendez, Guadalupe; Ramirez-Elias, Miguel; Mendez, Martin O; Alba, Alfonso; Rodriguez-Leyva, Ildefonso; Jimenez, M

    2016-08-01

    Raman spectroscopy of biological tissue presents a fluorescence background, an undesirable effect that generates false Raman intensities. This paper proposes the application of the Empirical Mode Decomposition (EMD) method to baseline correction. EMD is a suitable approach since it is an adaptive signal processing method for nonlinear and non-stationary signal analysis that, unlike polynomial methods, does not require parameter selection. EMD performance was assessed through synthetic Raman spectra with different signal-to-noise ratios (SNRs). The correlation coefficient between the synthetic Raman spectra and those recovered after EMD denoising was higher than 0.92. Additionally, twenty Raman spectra from skin were used to evaluate EMD performance, and the results were compared with the Vancouver Raman algorithm (VRA). The comparison resulted in a mean square error (MSE) of 0.001554. The high correlation coefficients on synthetic spectra and the low MSE in the comparison between EMD and VRA suggest that EMD could be an effective method to remove the fluorescence background in biological Raman spectra.
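
    A minimal sketch of the baseline idea, assuming the IMFs come from a prior EMD step ordered fast to slow: the slowly varying fluorescence concentrates in the last IMFs and the residue, so subtracting them corrects the spectrum. The split point `n_baseline` is an assumption; the paper's selection criterion may differ:

    ```python
    import numpy as np

    def emd_baseline_correct(spectrum, imfs, residue, n_baseline=2):
        """imfs: list of IMFs ordered fast -> slow from a prior EMD step."""
        baseline = np.sum(imfs[-n_baseline:], axis=0) + residue
        return spectrum - baseline           # fluorescence-corrected spectrum
    ```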

  2. What gets measured gets managed: A new method of measuring household food waste.

    PubMed

    Elimelech, Efrat; Ayalon, Ofira; Ert, Eyal

    2018-03-22

    The quantification of household food waste is an essential part of setting policies and waste reduction goals, but such waste is very difficult to estimate. Current methods include either direct measurements (physical waste surveys) or measurements based on self-reports (diaries, interviews, and questionnaires). The main limitation of the first method is that it cannot always trace the waste source, i.e., an individual household, whereas the second method lacks objectivity. This article presents a new measurement method that offers a solution to these challenges by measuring daily produced food waste at the household level. The method is based on four main principles: (1) capturing waste as it enters the stream, (2) collecting waste samples at the doorstep, (3) using the individual household as the sampling unit, and (4) collecting and sorting waste daily. We tested the feasibility of the new method in an empirical study of 192 households, measuring the actual amounts of household food waste as well as its composition. Household food waste accounted for 45% of total waste (573 g/day per capita), of which 54% was identified as avoidable. Approximately two thirds of the avoidable waste consisted of vegetables and fruit. These results are similar to previous findings from waste surveys, yet the new method showed a higher level of accuracy. The feasibility test suggests that the proposed method provides a practical tool for policy makers for setting policy based on reliable empirical data and for monitoring the effectiveness of different policies over time.

  3. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data, covering January 2005 to December 2013, are first taken as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is taken as the ultimate forecast value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194
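
    The hybrid scheme reduces to: forecast each IMF from its own lagged values, then superpose. In the sketch below an MLPRegressor stands in for the BP network, the PSO weight optimization is omitted, and the IMFs and lag count are assumed inputs:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def fit_and_forecast(imf, lags=6):
        """Fit a small network on lagged values, return a one-step forecast."""
        X = np.column_stack([imf[i:len(imf) - lags + i] for i in range(lags)])
        y = imf[lags:]
        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                             random_state=0).fit(X, y)
        return model.predict(imf[-lags:].reshape(1, -1))[0]

    def hybrid_forecast(imfs):
        # superpose the per-IMF forecasts to get the ultimate forecast value
        return sum(fit_and_forecast(imf) for imf in imfs)
    ```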

  5. Automated Diagnosis of Glaucoma Using Empirical Wavelet Transform and Correntropy Features Extracted From Fundus Images.

    PubMed

    Maheshwari, Shishir; Pachori, Ram Bilas; Acharya, U Rajendra

    2017-05-01

    Glaucoma is an ocular disorder caused by increased fluid pressure in the optic nerve. It damages the optic nerve and subsequently causes loss of vision. The available scanning methods are Heidelberg retinal tomography, scanning laser polarimetry, and optical coherence tomography. These methods are expensive and require experienced clinicians to use them. So, there is a need to diagnose glaucoma accurately at low cost. Hence, in this paper, we present a new methodology for the automated diagnosis of glaucoma using digital fundus images based on the empirical wavelet transform (EWT). The EWT is used to decompose the image, and correntropy features are obtained from the decomposed EWT components. These extracted features are ranked using a t-value-based feature selection algorithm. Then, these features are used for the classification of normal and glaucoma images using a least-squares support vector machine (LS-SVM) classifier. The LS-SVM is employed for classification with radial basis function, Morlet wavelet, and Mexican-hat wavelet kernels. The classification accuracy of the proposed method is 98.33% and 96.67% using threefold and tenfold cross validation, respectively.
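
    Correntropy itself is compact to state: a kernel-weighted similarity between two signals. The Gaussian-kernel version below is the standard definition; the bandwidth sigma and the inputs are assumptions:

    ```python
    import numpy as np

    def correntropy(x, y, sigma=1.0):
        """Mean Gaussian-kernel similarity between paired samples of x and y."""
        return float(np.mean(np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))))
    ```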

  6. Building an evidence-base for the training of evidence-based treatments in community settings: Use of an expert-informed approach.

    PubMed

    Scudder, Ashley; Herschell, Amy D

    2015-08-01

    In order to make evidence-based treatments (EBTs) available to a large number of children and families, developers and expert therapists have used their experience and expertise to train community-based therapists in EBTs. Understanding the current training practices of treatment experts may be one method for establishing best practices for training community-based therapists prior to comprehensive empirical examinations of training practices. A qualitative study was conducted using surveys and phone interviews to identify the specific procedures used by treatment experts to train and implement an evidence-based treatment in community settings. Twenty-three doctoral-level clinical psychologists were identified to participate because of their expertise in conducting and training Parent-Child Interaction Therapy. Semi-structured qualitative interviews were completed by phone, later transcribed verbatim, and analyzed using thematic coding. The de-identified data were coded by two independent qualitative data researchers and then compared for consistency of interpretation. The themes that emerged from the final coding were used to construct a training protocol to be empirically tested. The goal of this paper is not only to understand the current state of practices for training therapists in a particular EBT, Parent-Child Interaction Therapy, but also to illustrate the use of expert opinion as the best available evidence in preparation for empirical evaluation.

  7. Prognostics of lithium-ion batteries based on Dempster-Shafer theory and the Bayesian Monte Carlo method

    NASA Astrophysics Data System (ADS)

    He, Wei; Williard, Nicholas; Osterman, Michael; Pecht, Michael

    A new method for state of health (SOH) and remaining useful life (RUL) estimations for lithium-ion batteries using Dempster-Shafer theory (DST) and the Bayesian Monte Carlo (BMC) method is proposed. In this work, an empirical model based on the physical degradation behavior of lithium-ion batteries is developed. Model parameters are initialized by combining sets of training data based on DST. BMC is then used to update the model parameters and predict the RUL based on available data through battery capacity monitoring. As more data become available, the accuracy of the model in predicting RUL improves. Two case studies demonstrating this approach are presented.

  8. An evidential link prediction method and link predictability based on Shannon entropy

    NASA Astrophysics Data System (ADS)

    Yin, Likang; Zheng, Haoyang; Bian, Tian; Deng, Yong

    2017-09-01

    Predicting missing links is of both theoretical value and practical interest in network science. In this paper, we empirically investigate a new similarity-based link prediction method and compare nine well-known local similarity measures on nine real networks. Most previous studies focus on accuracy; however, it is also crucial to consider link predictability as an intrinsic property of the network itself. Hence, this paper proposes a new link prediction approach called the evidential measure (EM) based on Dempster-Shafer theory, together with a new method to measure link predictability via local information and Shannon entropy.
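
    As an illustrative baseline only (the paper's evidential EM measure is more elaborate), the sketch below pairs a common-neighbours similarity score with a Shannon-entropy summary of the score distribution on non-edges as a crude predictability proxy; the random graph and all names are assumptions:

    ```python
    import numpy as np

    def common_neighbours(A):
        """A: binary adjacency matrix; (A @ A)[i, j] counts common neighbours."""
        return A @ A

    def score_entropy(A):
        S = common_neighbours(A)
        iu = np.triu_indices_from(A, k=1)
        scores = S[iu][A[iu] == 0]            # similarity scores on missing links
        p = np.bincount(scores.astype(int)).astype(float)
        p = p[p > 0] / p.sum()
        return -(p * np.log2(p)).sum()        # Shannon entropy in bits

    rng = np.random.default_rng(0)
    A = (rng.random((50, 50)) < 0.1).astype(int)
    A = np.triu(A, 1); A = A + A.T            # symmetric, no self-loops
    print(score_entropy(A))
    ```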

  9. A systematic review of sensory-based treatments for children with disabilities.

    PubMed

    Barton, Erin E; Reichow, Brian; Schnitz, Alana; Smith, Isaac C; Sherlock, Daniel

    2015-02-01

    Sensory-based therapies are designed to address sensory processing difficulties by helping to organize and control the regulation of environmental sensory inputs. These treatments are increasingly popular, particularly with children with behavioral and developmental disabilities. However, empirical support for sensory-based treatments is limited. The purpose of this review was to conduct a comprehensive and methodologically sound evaluation of the efficacy of sensory-based treatments for children with disabilities. Methods for this review were registered with PROSPERO (CRD42012003243). Thirty studies involving 856 participants met our inclusion criteria and were included in this review. Considerable heterogeneity was noted across studies in implementation, measurement, and study rigor. The research on sensory-based treatments is limited due to insubstantial treatment outcomes, weak experimental designs, or high risk of bias. Although many people use and advocate for the use of sensory-based treatments and there is a substantial empirical literature on sensory-based treatments for children with disabilities, insufficient evidence exists to support their use.

  10. Clinical characteristics of ceftriaxone plus metronidazole in complicated intra-abdominal infection

    PubMed Central

    2015-01-01

    Purpose: Empirical antibiotics are the first step in the treatment of complicated intra-abdominal infection (c-IAI), such as secondary peritonitis, and empirical antibiotic regimens are very diverse. The ceftriaxone plus metronidazole regimen (CMR) is one of the empirical antibiotic regimens used in the treatment of c-IAI. However, although CMR is widely used, its success, failure, and efficacy remain poorly studied. This retrospective study was conducted to compare the clinical efficacy of this regimen in c-IAI according to clinical characteristics. Methods: The subjects were patients in this hospital who were diagnosed with secondary peritonitis between 2009 and 2013. Retrospective analysis was performed based on the records made after surgery regarding clinical characteristics including albumin level, blood pressure, pulse rate, respiration rate, smoking, age, sex, body mass index, hemoglobin, coexisting disease, leukocytosis, and APACHE (acute physiology and chronic health evaluation) II score. Results: A total of 114 patients were enrolled. In univariate analysis, the success and failure of CMR showed significant association with preoperative low albumin, old age, and preoperative tachycardia. In multivariate analysis, low albumin and preoperative tachycardia were significant. Conclusion: An additional antibiotic treatment plan appears necessary in c-IAI patients with low albumin and tachycardia when the empirical antibiotic regimen is CMR. Well-designed prospective randomized clinical studies are also needed to evaluate the appropriateness of CMR and to select a proper empirical antibiotic regimen among the many used for c-IAI in our country. PMID:26131444

  11. Basics of Bayesian methods.

    PubMed

    Ghosh, Sujit K

    2010-01-01

    Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of the Bayesian inferential method is its logical foundation, which provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution, which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
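
    The prior-to-posterior flow described here can be shown in one screen with the conjugate beta-binomial pair; the prior parameters and data below are a toy assumption, not tied to any study in this record:

    ```python
    from scipy import stats

    prior_a, prior_b = 2, 2        # prior knowledge: success rate near 0.5
    successes, trials = 14, 20     # current data entering via the likelihood

    # conjugacy: Beta prior + Binomial likelihood -> Beta posterior
    post = stats.beta(prior_a + successes, prior_b + trials - successes)
    print(post.mean())             # posterior mean
    print(post.interval(0.95))     # 95% credible interval
    ```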

  12. New Simulation Methods to Facilitate Achieving a Mechanistic Understanding of Basic Pharmacology Principles in the Classroom

    ERIC Educational Resources Information Center

    Grover, Anita; Lam, Tai Ning; Hunt, C. Anthony

    2008-01-01

    We present a simulation tool to aid the study of basic pharmacology principles. By taking advantage of the properties of agent-based modeling, the tool facilitates taking a mechanistic approach to learning basic concepts, in contrast to the traditional empirical methods. Pharmacodynamics is a particular aspect of pharmacology that can benefit from…

  13. Estimating Individual Influences of Behavioral Intentions: An Application of Random-Effects Modeling to the Theory of Reasoned Action.

    ERIC Educational Resources Information Center

    Hedeker, Donald; And Others

    1996-01-01

    Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example, M. Fishbein and I. Ajzen's theory of reasoned action is examined. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate individual influences…

  14. An Empirical Comparison of Five Linear Equating Methods for the NEAT Design

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Mroch, Andrew A.; Kane, Michael T.; Ripkey, Douglas R.

    2009-01-01

    In this study, a database containing the responses of 40,000 candidates to 90 multiple-choice questions was used to mimic data sets for 50-item tests under the "nonequivalent groups with anchor test" (NEAT) design. Using these smaller data sets, we evaluated the performance of five linear equating methods for the NEAT design with five levels of…

  15. Mathematics Curriculum Based Measurement to Predict State Test Performance: A Comparison of Measures and Methods

    ERIC Educational Resources Information Center

    Stevens, Olinger; Leigh, Erika

    2012-01-01

    Scope and Method of Study: The purpose of the study is to use an empirical approach to identify a simple, economical, efficient, and technically adequate performance measure that teachers can use to assess student growth in mathematics. The current study has been designed to expand the body of research for math CBM to further examine technical…

  16. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    PubMed

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may depend on an unknown function of that covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, one often employs the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under that framework the PAVA does not work for the isotonic regression model, because the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study.
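
    Since PAVA is central to the argument, here is a minimal equal-weight version for a nondecreasing fit; the block handling and naming are illustrative, and real implementations track weights for general use:

    ```python
    import numpy as np

    def pava(y):
        """Nondecreasing least-squares fit via pool-adjacent-violators."""
        y = np.asarray(y, dtype=float)
        fit = y.copy()
        blocks = [[i] for i in range(len(y))]
        i = 0
        while i < len(blocks) - 1:
            a, b = blocks[i], blocks[i + 1]
            if fit[a[0]] > fit[b[0]]:           # violation: pool the two blocks
                merged = a + b
                fit[merged] = y[merged].mean()  # pooled (equal-weight) mean
                blocks[i:i + 2] = [merged]
                i = max(i - 1, 0)               # re-check against previous block
            else:
                i += 1
        return fit

    print(pava([1, 3, 2, 4, 3, 5]))   # -> [1.  2.5 2.5 3.5 3.5 5. ]
    ```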

  17. Quantum mechanics implementation in drug-design workflows: does it really help?

    PubMed

    Arodola, Olayide A; Soliman, Mahmoud Es

    2017-01-01

    The pharmaceutical industry is progressively operating in an era where development costs are constantly under pressure, higher percentages of drugs are demanded, and the drug-discovery process is a trial-and-error run. The profit that flows in with the discovery of new drugs has always been the motivation for the industry to keep up the pace and keep abreast with the endless demand for medicines. The process of finding a molecule that binds to the target protein using in silico tools has made computational chemistry a valuable tool in drug discovery in both academic research and pharmaceutical industry. However, the complexity of many protein-ligand interactions challenges the accuracy and efficiency of the commonly used empirical methods. The usefulness of quantum mechanics (QM) in drug-protein interaction cannot be overemphasized; however, this approach has little significance in some empirical methods. In this review, we discuss recent developments in, and application of, QM to medically relevant biomolecules. We critically discuss the different types of QM-based methods and their proposed application to incorporating them into drug-design and -discovery workflows while trying to answer a critical question: are QM-based methods of real help in drug-design and -discovery research and industry?

  18. Computer implemented empirical mode decomposition method, apparatus and article of manufacture

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    1999-01-01

    A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
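
    The two steps the patent describes can be sketched compactly: sift one IMF using cubic-spline envelopes of the extrema, then take a Hilbert transform for instantaneous frequency. The fixed iteration count, boundary handling, and stopping rule below are simplified assumptions; production EMD needs more care:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema, hilbert

    def sift_once(x, n_iter=10):
        """Extract the first IMF by repeatedly removing the envelope mean."""
        h = x.copy()
        t = np.arange(len(x))
        for _ in range(n_iter):
            maxima = argrelextrema(h, np.greater)[0]
            minima = argrelextrema(h, np.less)[0]
            if len(maxima) < 4 or len(minima) < 4:
                break                             # too few extrema to sift
            upper = CubicSpline(maxima, h[maxima])(t)
            lower = CubicSpline(minima, h[minima])(t)
            h = h - (upper + lower) / 2.0         # subtract the local mean
        return h

    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
    imf1 = sift_once(x)
    phase = np.unwrap(np.angle(hilbert(imf1)))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # ~50 Hz away from the ends
    ```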

  19. Robustness of fit indices to outliers and leverage observations in structural equation modeling.

    PubMed

    Yuan, Ke-Hai; Zhong, Xiaoling

    2013-06-01

    Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices.

  20. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Templeton, D C; Harris, D B

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.
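
    Empirical MFP calibrates multi-station templates on master events; a single-channel correlation detector conveys the flavor of the detection step. A hedged sketch in which the template, data, and threshold are all hypothetical:

    ```python
    import numpy as np

    def correlation_detector(data, template, threshold=0.7):
        """Slide a master-event template along continuous data and flag
        windows whose normalized cross-correlation exceeds the threshold
        (a single-channel stand-in for multi-station matched field
        processing)."""
        n = len(template)
        tmpl = (template - template.mean()) / template.std()
        detections = []
        for i in range(len(data) - n):
            win = data[i:i + n]
            w = (win - win.mean()) / (win.std() + 1e-12)
            cc = float(np.dot(tmpl, w)) / n
            if cc > threshold:
                detections.append((i, cc))
        return detections
    ```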

  1. Cryptic diversity and discordance in single-locus species delimitation methods within horned lizards (Phrynosomatidae: Phrynosoma).

    PubMed

    Blair, Christopher; Bryson, Robert W

    2017-11-01

    Biodiversity reduction and loss continues to progress at an alarming rate, and thus, there is widespread interest in utilizing rapid and efficient methods for quantifying and delimiting taxonomic diversity. Single-locus species delimitation methods have become popular, in part due to the adoption of the DNA barcoding paradigm. These techniques can be broadly classified into tree-based and distance-based methods depending on whether species are delimited based on a constructed genealogy. Although the relative performance of these methods has been tested repeatedly with simulations, additional studies are needed to assess congruence with empirical data. We compiled a large data set of mitochondrial ND4 sequences from horned lizards (Phrynosoma) to elucidate congruence using four tree-based (single-threshold GMYC, multiple-threshold GMYC, bPTP, mPTP) and one distance-based (ABGD) species delimitation models. We were particularly interested in cases with highly uneven sampling and/or large differences in intraspecific diversity. Results showed a high degree of discordance among methods, with multiple-threshold GMYC and bPTP suggesting an unrealistically high number of species (29 and 26 species within the P. douglasii complex alone). The single-threshold GMYC model was the most conservative, likely a result of difficulty in locating the inflection point in the genealogies. mPTP and ABGD appeared to be the most stable across sampling regimes and suggested the presence of additional cryptic species that warrant further investigation. These results suggest that the mPTP model may be preferable in empirical data sets with highly uneven sampling or large differences in effective population sizes of species. © 2017 John Wiley & Sons Ltd.

  2. Bacterial Etiology and Antibiotic Resistance Profile of Community-Acquired Urinary Tract Infections in a Cameroonian City.

    PubMed

    Nzalie, Rolf Nyah-Tuku; Gonsu, Hortense Kamga; Koulla-Shiro, Sinata

    2016-01-01

    Introduction. Community-acquired urinary tract infections (CAUTIs) are usually treated empirically. Geographical variations in etiologic agents and their antibiotic sensitivity patterns are common. Knowledge of antibiotic resistance trends is important for improving evidence-based recommendations for empirical treatment of UTIs. Our aim was to determine the major bacterial etiologies of CAUTIs and their antibiotic resistance patterns in a cosmopolitan area of Cameroon for comparison with prescription practices of local physicians. Methods. We performed a cross-sectional descriptive study at two main hospitals in Yaoundé, collecting a clean-catch mid-stream urine sample from 92 patients having a clinical diagnosis of UTI. The empirical antibiotherapy was noted, and identification of bacterial species was done on CLED agar; antibiotic susceptibility testing was performed using the Kirby-Bauer disc diffusion method. Results. A total of 55 patients had samples positive for a UTI. Ciprofloxacin and amoxicillin/clavulanic acid were the most empirically prescribed antibiotics (30.9% and 23.6%, resp.); bacterial isolates showed high prevalence of resistance to both compounds. Escherichia coli (50.9%) was the most common pathogen, followed by Klebsiella pneumoniae (16.4%). Prevalence of resistance for ciprofloxacin was higher compared to newer quinolones. Conclusions. E. coli and K. pneumoniae were the predominant bacterial etiologies; the prevalence of resistance to commonly prescribed antibiotics was high.

  3. Data envelopment analysis in service quality evaluation: an empirical study

    NASA Astrophysics Data System (ADS)

    Najafi, Seyedvahid; Saati, Saber; Tavana, Madjid

    2015-09-01

    Service quality is often conceptualized as the comparison between service expectations and actual performance perceptions. It enhances customer satisfaction, decreases customer defection, and promotes customer loyalty. Substantial literature has examined the concept of service quality, its dimensions, and measurement methods. We introduce the perceived service quality index (PSQI) as a single measure for evaluating the multiple-item service quality construct based on the SERVQUAL model. A slack-based measure (SBM) of efficiency with constant inputs is used to calculate the PSQI. In addition, a non-linear programming model based on the SBM is proposed to delineate an improvement guideline and improve service quality. An empirical study is conducted to assess the applicability of the method proposed in this study. A large number of studies have used data envelopment analysis (DEA) as a benchmarking tool to measure service quality, but these models do not propose a coherent performance evaluation construct and consequently fail to deliver guidelines for improving service quality. The DEA models proposed in this study are designed to evaluate and improve service quality within a comprehensive framework and without any dependency on external data.

  4. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations

    PubMed Central

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J. Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types. PMID:27472383
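
    The paper's modified EOF scheme is not reproduced here, but the core operation (extracting the mode shared by the stacked products) can be sketched with a plain SVD-based EOF analysis on synthetic stand-ins for the two LE series:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 4 * np.pi, 365))      # shared seasonal LE signal
    mod16 = truth + 0.3 * rng.standard_normal(365)      # stand-in for MOD16
    ptjpl = truth + 0.3 * rng.standard_normal(365)      # stand-in for PT-JPL

    X = np.vstack([mod16, ptjpl])                       # products x time
    mean = X.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)   # EOF decomposition
    leading = (U[:, :1] * S[:1]) @ Vt[:1]               # leading (shared) mode
    fused = (mean + leading).mean(axis=0)               # integrated LE estimate
    ```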

  5. Optical remote sensing and correlation of office equipment functional state and stress levels via power quality disturbances inefficiencies

    NASA Astrophysics Data System (ADS)

    Sternberg, Oren; Bednarski, Valerie R.; Perez, Israel; Wheeland, Sara; Rockway, John D.

    2016-09-01

    Non-invasive optical techniques pertaining to the remote sensing of power quality disturbances (PQD) are part of an emerging technology field typically dominated by radio frequency (RF) and invasive-based techniques. Algorithms and methods to analyze and address PQD such as probabilistic neural networks and fully informed particle swarms have been explored in industry and academia. Such methods are tuned to work with RF equipment and electronics in existing power grids. As both commercial and defense assets are heavily power-dependent, understanding electrical transients and failure events using non-invasive detection techniques is crucial. In this paper we correlate power quality empirical models to the observed optical response. We also empirically demonstrate a first-order approach to map household, office and commercial equipment PQD to user functions and stress levels. We employ a physics-based image and signal processing approach, which demonstrates measured non-invasive (remote sensing) techniques to detect and map the base frequency associated with the power source to the various PQD on a calibrated source.

  6. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations.

    PubMed

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types.

  7. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    NASA Astrophysics Data System (ADS)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) is a nonlinear, non-stationary, weak signal that reflects whether the heart is functioning normally or abnormally. It is susceptible to various kinds of noise, such as high/low-frequency noise, powerline interference, and baseline wander. Hence, noise removal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart disease. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the ensemble EMD (EEMD) technique. EMD is a promising, though not yet perfect, method for processing nonlinear and non-stationary signals such as the ECG, and combining it with other algorithms is a good way to improve noise cancellation. The pros and cons of EMD in ECG signal denoising are discussed in detail. Finally, future work and challenges in EMD-based ECG denoising are outlined.
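
    A common EMD denoising recipe simply drops the first IMFs, where high-frequency noise concentrates, before reconstruction. A minimal sketch assuming the third-party PyEMD package (installed as EMD-signal) and a crude sinusoid standing in for an ECG trace:

    ```python
    import numpy as np
    from PyEMD import EMD   # pip install EMD-signal (assumed available)

    def emd_denoise(signal, drop=2):
        """Discard the first `drop` IMFs (the fastest oscillations, mostly
        high-frequency noise) and sum the remaining components."""
        imfs = EMD()(signal)
        return imfs[drop:].sum(axis=0)

    fs = 360.0
    t = np.arange(0, 5, 1 / fs)
    clean = np.sin(2 * np.pi * 1.2 * t)              # crude ECG stand-in
    noisy = clean + 0.2 * np.random.randn(len(t))    # broadband noise
    denoised = emd_denoise(noisy)
    ```

    Baseline wander removal works the same way from the other end: drop the last IMFs and the residual instead of the first components.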

  8. Localization of diffusion sources in complex networks with sparse observations

    NASA Astrophysics Data System (ADS)

    Hu, Zhao-Long; Shen, Zhesi; Tang, Chang-Bing; Xie, Bin-Bin; Lu, Jian-Feng

    2018-04-01

    Locating sources in a large network is of paramount importance for reducing the spread of disruptive behavior. Combining a backward diffusion-based method with integer programming, we propose an efficient approach to locating sources in complex networks with a limited number of observers. Results on model and empirical networks demonstrate that, for a given fraction of observers, the accuracy of our method for source localization improves as network size increases. Moreover, compared with the previous maximum-minimum method, our method performs much better with a small fraction of observers, especially in heterogeneous networks. Furthermore, our method is more robust to noisy environments and to the strategy used for choosing observers.

  9. Estimating the Octanol/Water Partition Coefficient for Aliphatic Organic Compounds Using Semi-Empirical Electrotopological Index

    PubMed Central

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (ISET). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the ISET in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P. PMID:22072945
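
    The single-descriptor model amounts to an ordinary least-squares fit of log P against ISET. A sketch with hypothetical (ISET, log P) pairs standing in for the 131-compound training set:

    ```python
    import numpy as np

    iset = np.array([2.1, 3.4, 4.0, 5.2, 6.3, 7.1])   # hypothetical descriptors
    logp = np.array([0.8, 1.5, 1.9, 2.6, 3.2, 3.7])   # hypothetical measured log P

    slope, intercept = np.polyfit(iset, logp, 1)       # log P = a*ISET + b
    pred = slope * iset + intercept
    r = np.corrcoef(logp, pred)[0, 1]                  # correlation coefficient
    s = np.sqrt(((logp - pred) ** 2).sum() / (len(logp) - 2))  # standard error
    print(f"log P = {slope:.3f}*ISET + {intercept:.3f}  (r = {r:.3f}, s = {s:.3f})")
    ```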

  10. Estimating the octanol/water partition coefficient for aliphatic organic compounds using semi-empirical electrotopological index.

    PubMed

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I(SET)). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the I(SET) in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P.

  11. Artifact interactions retard technological improvement: An empirical study

    PubMed Central

    Magee, Christopher L.

    2017-01-01

    Empirical research has shown performance improvement of many different technological domains occurs exponentially but with widely varying improvement rates. What causes some technologies to improve faster than others do? Previous quantitative modeling research has identified artifact interactions, where a design change in one component influences others, as an important determinant of improvement rates. The models predict that improvement rate for a domain is proportional to the inverse of the domain’s interaction parameter. However, no empirical research has previously studied and tested the dependence of improvement rates on artifact interactions. A challenge to testing the dependence is that any method for measuring interactions has to be applicable to a wide variety of technologies. Here we propose a novel patent-based method that is both technology domain-agnostic and less costly than alternative methods. We use textual content from patent sets in 27 domains to find the influence of interactions on improvement rates. Qualitative analysis identified six specific keywords that signal artifact interactions. Patent sets from each domain were then examined to determine the total count of these 6 keywords in each domain, giving an estimate of artifact interactions in each domain. It is found that improvement rates are positively correlated with the inverse of the total count of keywords with Pearson correlation coefficient of +0.56 with a p-value of 0.002. The results agree with model predictions, and provide, for the first time, empirical evidence that artifact interactions have a retarding effect on improvement rates of technological domains. PMID:28777798
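
    The headline statistic is a Pearson correlation between per-domain improvement rates and the inverse of the keyword counts. A sketch with hypothetical values in place of the paper's 27-domain data:

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    rate = np.array([0.05, 0.12, 0.30, 0.08, 0.22, 0.15])   # improvement rates
    keywords = np.array([900, 450, 150, 700, 260, 380])     # interaction-keyword counts

    r, p = pearsonr(rate, 1.0 / keywords)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # the paper reports r = +0.56, p = 0.002
    ```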

  12. Global model of zenith tropospheric delay proposed based on EOF analysis

    NASA Astrophysics Data System (ADS)

    Sun, Langlang; Chen, Peng; Wei, Erhu; Li, Qinzheng

    2017-07-01

    Tropospheric delay is one of the main error budgets in Global Navigation Satellite System (GNSS) measurements. Many empirical correction models have been developed to compensate for this delay, and models that do not require meteorological parameters have received the most attention. This study established a global troposphere zenith total delay (ZTD) model, called Global Empirical Orthogonal Function Troposphere (GEOFT), based on empirical orthogonal function (EOF) analysis, a close relative of principal component analysis, and the Global Geodetic Observing System (GGOS) Atmosphere data from 2012 to 2015. The results showed that ZTD variation could be well represented by the characteristics of the EOF base functions Ek and associated coefficients Pk. Here, E1 mainly signifies the equatorial anomaly; E2 represents north-south asymmetry; and E3 and E4 reflect regional variation. Moreover, P1 mainly reflects annual and semiannual variation components; P2 and P3 mainly contain annual variation components; and P4 displays semiannual variation components. We validated the proposed GEOFT model using GGOS ZTD grid data and the tropospheric product of the International GNSS Service (IGS) over the year 2016. The results showed that the GEOFT model has high accuracy, with bias and RMS of -0.3 and 3.9 cm, respectively, with respect to the GGOS ZTD data, and of -0.8 and 4.1 cm, respectively, with respect to the global IGS tropospheric product. The accuracy of GEOFT demonstrates that using the EOF analysis method to characterize ZTD variation is reasonable.

  13. An Empirically Derived Taxonomy for Personality Diagnosis: Bridging Science and Practice in Conceptualizing Personality

    PubMed Central

    Westen, Drew; Shedler, Jonathan; Bradley, Bekh; DeFife, Jared A.

    2013-01-01

    Objective The authors describe a system for diagnosing personality pathology that is empirically derived, clinically relevant, and practical for day-to-day use. Method A random national sample of psychiatrists and clinical psychologists (N=1,201) described a randomly selected current patient with any degree of personality dysfunction (from minimal to severe) using the descriptors in the Shedler-Westen Assessment Procedure–II and completed additional research forms. Results The authors applied factor analysis to identify naturally occurring diagnostic groupings within the patient sample. The analysis yielded 10 clinically coherent personality diagnoses organized into three higher-order clusters: internalizing, externalizing, and borderline-dysregulated. The authors selected the most highly rated descriptors to construct a diagnostic prototype for each personality syndrome. In a second, independent sample, research interviewers and patients’ treating clinicians were able to diagnose the personality syndromes with high agreement and minimal comorbidity among diagnoses. Conclusions The empirically derived personality prototypes described here provide a framework for personality diagnosis that is both empirically based and clinically relevant. PMID:22193534

  14. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    PubMed Central

    Arefin, Md Shamsul

    2012-01-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower- and higher-diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of the radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plots.

  15. Near-Infrared Spectrum Detection of Wheat Gluten Protein Content Based on a Combined Filtering Method.

    PubMed

    Cai, Jian-Hua

    2017-09-01

    To eliminate the random error of the derivative near-IR (NIR) spectrum and to improve model stability and the prediction accuracy of the gluten protein content, a combined method is proposed for pretreatment of the NIR spectrum based on both empirical mode decomposition and the wavelet soft-threshold method. The principle and the steps of the method are introduced and the denoising effect is evaluated. The wheat gluten protein content is calculated based on the denoised spectrum, and the results are compared with those of the nine-point smoothing method and the wavelet soft-threshold method. Experimental results show that the proposed combined method is effective in completing pretreatment of the NIR spectrum, and the proposed method improves the accuracy of detection of wheat gluten protein content from the NIR spectrum.
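
    One plausible reading of the combined pretreatment is: decompose the spectrum with EMD, soft-threshold the noisiest IMFs in the wavelet domain, and reconstruct. A sketch assuming the PyEMD (EMD-signal) and PyWavelets packages; the wavelet, level, and number of IMFs treated are illustrative choices:

    ```python
    import numpy as np
    import pywt
    from PyEMD import EMD   # pip install EMD-signal (assumed)

    def wavelet_soft(x, wavelet="db4", level=4):
        """Soft-threshold the detail coefficients (universal threshold)."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale
        thr = sigma * np.sqrt(2 * np.log(len(x)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(x)]

    def combined_denoise(spectrum, n_noisy=2):
        """EMD the spectrum, wavelet-denoise the first (noisiest) IMFs,
        then rebuild the signal from all components."""
        imfs = EMD()(spectrum)
        cleaned = [wavelet_soft(imf) if k < n_noisy else imf
                   for k, imf in enumerate(imfs)]
        return np.sum(cleaned, axis=0)

    wavenum = np.linspace(0, 1, 1024)
    spec = np.exp(-((wavenum - 0.5) / 0.1) ** 2) + 0.05 * np.random.randn(1024)
    pretreated = combined_denoise(spec)
    ```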

  16. Stem mortality in surface fires: Part II, experimental methods for characterizing the thermal response of tree stems to heating by fires

    Treesearch

    D. M. Jimenez; B. W. Butler; J. Reardon

    2003-01-01

    Current methods for predicting fire-induced plant mortality in shrubs and trees are largely empirical. These methods are not readily linked to duff burning, soil heating, and surface fire behavior models. In response to the need for a physics-based model of this process, a detailed model for predicting the temperature distribution through a tree stem as a function of...

  17. Heuristic approach to capillary pressures averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coca, B.P.

    1980-10-01

    Several methods are available for averaging capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation in porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method appears theoretically sound, since its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis of each of these methods is given.
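
    For reference, the J-curve method rests on the Leverett J-function, which collapses capillary pressure curves from different samples by scaling with permeability (k), porosity (φ), interfacial tension (σ), and contact angle (θ):

    ```latex
    J(S_w) = \frac{P_c(S_w)}{\sigma\cos\theta}\sqrt{\frac{k}{\phi}}
    ```

    Averaging J(S_w) across samples and then inverting for P_c with a particular rock's k and φ yields the averaged capillary pressure curve.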

  18. Recent Progress in Treating Protein-Ligand Interactions with Quantum-Mechanical Methods.

    PubMed

    Yilmazer, Nusret Duygu; Korth, Martin

    2016-05-16

    We review the first successes and failures of a "new wave" of quantum chemistry-based approaches to the treatment of protein/ligand interactions. These approaches share the use of "enhanced", dispersion (D), and/or hydrogen-bond (H) corrected density functional theory (DFT) or semi-empirical quantum mechanical (SQM) methods, in combination with ensemble weighting techniques of some form to capture entropic effects. Benchmark and model system calculations in comparison to high-level theoretical as well as experimental references have shown that both DFT-D (dispersion-corrected density functional theory) and SQM-DH (dispersion- and hydrogen-bond-corrected semi-empirical quantum mechanical) methods perform much more accurately than older DFT and SQM approaches and also standard docking methods. In addition, DFT-D might soon become, and SQM-DH already is, fast enough to compute a large number of binding modes of comparably large protein/ligand complexes, thus allowing for a more accurate assessment of entropic effects.

  19. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
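
    The abstract does not spell the estimator out; one simplified reading is that the random error variance is estimated from differences between each interior vertical's observation and the value interpolated from its two neighbors. A sketch under that assumption (for independent errors and a locally linear profile, the neighbor-interpolation residual has variance 1.5 sigma^2):

    ```python
    import numpy as np

    def interpolated_variance(values):
        """Estimate the random measurement variance from residuals between
        interior observations and the linear interpolation of their two
        neighbors; Var(residual) = 1.5 * sigma^2 for i.i.d. errors."""
        v = np.asarray(values, dtype=float)
        resid = v[1:-1] - 0.5 * (v[:-2] + v[2:])
        return resid.var(ddof=1) / 1.5

    point_velocities = [0.52, 0.60, 0.58, 0.65, 0.71, 0.69, 0.75]  # hypothetical
    print(interpolated_variance(point_velocities))
    ```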

  20. Towards estimation of respiratory muscle effort with respiratory inductance plethysmography signals and complementary ensemble empirical mode decomposition.

    PubMed

    Chen, Ya-Chen; Hsiao, Tzu-Chien

    2018-07-01

    The respiratory inductance plethysmography (RIP) sensor is an inexpensive, non-invasive, easy-to-use transducer for collecting respiratory movement data. Studies have reported that the RIP signal's amplitude and frequency can be used to discriminate respiratory diseases. However, with the conventional approach to RIP data analysis, respiratory muscle effort cannot be estimated. In this paper, estimation of respiratory muscle effort from the RIP signal is proposed. A complementary ensemble empirical mode decomposition method was used to extract hidden signals from the RIP signals based on the frequency bands of the activities of different respiratory muscles. To validate the proposed method, an experiment to collect subjects' RIP signals under thoracic breathing (TB) and abdominal breathing (AB) was conducted. The experimental results for both TB and AB indicate that the proposed method can be used to loosely estimate the activities of the thoracic muscles, abdominal muscles, and diaphragm.

  1. Training of Lay Health Educators to Implement an Evidence-Based Behavioral Weight Loss Intervention in Rural Senior Centers

    ERIC Educational Resources Information Center

    Krukowski, Rebecca A.; Lensing, Shelly; Love, ShaRhonda; Prewitt, T. Elaine; Adams, Becky; Cornell, Carol E.; Felix, Holly C.; West, Delia

    2013-01-01

    Purpose of the Study: Lay health educators (LHEs) offer great promise for facilitating the translation of evidence-based health promotion programs to underserved areas; yet, there is little guidance on how to train LHEs to implement these programs, particularly in the crucial area of empirically validated obesity interventions. Design and Methods:…

  2. Using Empirical Data to Clarify the Meaning of Various Prescriptions for Designing a Web-Based Course

    ERIC Educational Resources Information Center

    Boulet, Marie-Michele

    2004-01-01

    Design prescriptions to create web-based courses and sites that are dynamic, easy-to-use, interactive and data-driven, emerge from a "how to do it" approach. Unfortunately, the theory behind these methods, prescriptions, procedures or tools, is rarely provided and the important terms, such as "easy-to-use", to which these…

  3. Unraveling the Motivational Effects and Challenges of Web-Based Collaborative Inquiry Learning across Different Groups of Learners

    ERIC Educational Resources Information Center

    Raes, Annelies; Schellens, Tammy

    2015-01-01

    This study deals with the implementation of a web-based collaborative inquiry (WISE) project in secondary science education and unravels the contribution and challenges of this learning approach to foster students' motivation to learn science, and its relation with student and class-level characteristics. An empirical mixed methods study in 13…

  4. Comparison of Expert-Based and Empirical Evaluation Methodologies in the Case of a CBL Environment: The ''Orestis'' Experience

    ERIC Educational Resources Information Center

    Karoulis, Athanasis; Demetriadis, Stavros; Pombortsis, Andreas

    2006-01-01

    This paper compares several interface evaluation methods applied in the case of a computer based learning (CBL) environment, during a longitudinal study performed in three European countries, Greece, Germany, and Holland, and within the framework of an EC funded Leonardo da Vinci program. The paper firstly considers the particularities of the CBL…

  5. An empirical approach to improving tidal predictions using recent real-time tide gauge data

    NASA Astrophysics Data System (ADS)

    Hibbert, Angela; Royston, Samantha; Horsburgh, Kevin J.; Leach, Harry

    2014-05-01

    Classical harmonic methods of tidal prediction are often problematic in estuarine environments due to the distortion of tidal fluctuations in shallow water, which results in a disparity between predicted and observed sea levels. This is of particular concern in the Bristol Channel, where the error associated with tidal predictions is potentially greater due to an unusually large tidal range of around 12m. As such predictions are fundamental to the short-term forecasting of High Water (HW) extremes, it is vital that alternative solutions are found. In a pilot study, using a year-long observational sea level record from the Port of Avonmouth in the Bristol Channel, the UK National Tidal and Sea Level Facility (NTSLF) tested the potential for reducing tidal prediction errors, using three alternatives to the Harmonic Method of tidal prediction. The three methods evaluated were (1) the use of Artificial Neural Network (ANN) models, (2) the Species Concordance technique and (3) a simple empirical procedure for correcting Harmonic Method High Water predictions based upon a few recent observations (referred to as the Empirical Correction Method). This latter method was then successfully applied to sea level records from an additional 42 of the 45 tide gauges that comprise the UK Tide Gauge Network. Consequently, it is to be incorporated into the operational systems of the UK Coastal Monitoring and Forecasting Partnership in order to improve short-term sea level predictions for the UK and in particular, the accurate estimation of HW extremes.
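
    The Empirical Correction Method is described only at a high level; a minimal sketch consistent with it adjusts each new harmonic High Water prediction by the mean error of the most recent predicted-versus-observed HW pairs (all numbers hypothetical):

    ```python
    import numpy as np

    def corrected_hw(next_pred, recent_obs, recent_pred, n=5):
        """Shift the next harmonic HW prediction by the mean of the n most
        recent observed-minus-predicted HW errors."""
        errors = np.asarray(recent_obs[-n:]) - np.asarray(recent_pred[-n:])
        return next_pred + errors.mean()

    obs = [6.21, 6.35, 6.18, 6.40, 6.29]    # observed HW levels (m)
    pred = [6.05, 6.20, 6.02, 6.26, 6.13]   # harmonic predictions for the same events
    print(corrected_hw(6.10, obs, pred))    # corrected forecast for the next HW
    ```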

  6. [An EMD based time-frequency distribution and its application in EEG analysis].

    PubMed

    Li, Xiaobing; Chu, Meng; Qiu, Tianshuang; Bao, Haiping

    2007-10-01

    Hilbert-Huang transform (HHT) is a new time-frequency analytic method to analyze the nonlinear and the non-stationary signals. The key step of this method is the empirical mode decomposition (EMD), with which any complicated signal can be decomposed into a finite and small number of intrinsic mode functions (IMF). In this paper, a new EMD based method for suppressing the cross-term of Wigner-Ville distribution (WVD) is developed and is applied to analyze the epileptic EEG signals. The simulation data and analysis results show that the new method suppresses the cross-term of the WVD effectively with an excellent resolution.

  7. Innovation Analysis | Energy Analysis | NREL

    Science.gov Websites

    New empirical methods for estimating the technical and commercial impact of research (based on patent citations and commercial breakthroughs) are described. NREL employed regression models and multivariate simulations to compare impacts in the marketplace and found that web presence may provide a better representation of commercial impact.

  8. Empirical Estimation of Local Dielectric Constants: Toward Atomistic Design of Collagen Mimetic Peptides

    PubMed Central

    Pike, Douglas H.; Nanda, Vikas

    2017-01-01

    One of the key challenges in modeling protein energetics is the treatment of solvent interactions. This is particularly important in the case of peptides, where much of the molecule is highly exposed to solvent due to its small size. In this study, we develop an empirical method for estimating the local dielectric constant based on an additive model of atomic polarizabilities. Calculated values match reported apparent dielectric constants for a series of Staphylococcus aureus nuclease mutants. Calculated constants are used to determine screening effects on Coulombic interactions and to determine solvation contributions based on a modified Generalized Born model. These terms are incorporated into the protein modeling platform protCAD, and benchmarked on a data set of collagen mimetic peptides for which experimentally determined stabilities are available. Computing local dielectric constants using atomistic protein models and the assumption of additive atomic polarizabilities is a rapid and potentially useful method for improving electrostatics and solvation calculations that can be applied in the computational design of peptides. PMID:25784456
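
    The paper's modified Generalized Born term is not specified in the abstract; for orientation, the standard Still et al. GB polarization energy that such modifications typically build on is

    ```latex
    \Delta G_{\mathrm{pol}}
      = -\frac{1}{2}\left(\frac{1}{\varepsilon_{\mathrm{in}}}
        - \frac{1}{\varepsilon_{\mathrm{out}}}\right)
        \sum_{i,j} \frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})},
    \qquad
    f_{\mathrm{GB}} = \sqrt{r_{ij}^2 + R_i R_j
        \exp\!\left(-\frac{r_{ij}^2}{4 R_i R_j}\right)},
    ```

    where the q_i are atomic charges, the R_i effective Born radii, and the r_ij interatomic distances; the locally computed dielectric constant would enter through the ε terms.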

  9. A method of predicting the energy-absorption capability of composite subfloor beams

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.

    1987-01-01

    A simple method of predicting the energy-absorption capability of composite subfloor beam structure was developed. The method is based upon the weighted sum of the energy-absorption capabilities of the constituent elements of a subfloor beam. An empirical database of energy-absorption results from circular and square cross-section tube specimens was used in the prediction. The procedure is applicable to a wide range of subfloor beam structures. The procedure was demonstrated on three subfloor beam concepts. Agreement between test and prediction was within seven percent for all three cases.

  10. Accurate low-cost methods for performance evaluation of cache memory systems

    NASA Technical Reports Server (NTRS)

    Laha, Subhasis; Patel, Janak H.; Iyer, Ravishankar K.

    1988-01-01

    Methods of simulation based on statistical techniques are proposed to decrease the need for large trace measurements and for predicting true program behavior. Sampling techniques are applied while the address trace is collected from a workload. This drastically reduces the space and time needed to collect the trace. Simulation techniques are developed to use the sampled data not only to predict the mean miss rate of the cache, but also to provide an empirical estimate of its actual distribution. Finally, a concept of primed cache is introduced to simulate large caches by the sampling-based method.
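
    The core idea, simulating short sampled fragments of the address trace rather than the whole trace, is easy to sketch. The toy below uses a direct-mapped cache and ignores the cold-start bias that the paper's primed-cache concept addresses:

    ```python
    import random

    def miss_rate(trace, cache_lines=1024, block=64):
        """Direct-mapped cache simulation returning the miss rate."""
        tags = [None] * cache_lines
        misses = 0
        for addr in trace:
            blk = addr // block
            idx = blk % cache_lines
            if tags[idx] != blk:
                tags[idx] = blk
                misses += 1
        return misses / len(trace)

    random.seed(1)
    full_trace = [random.randrange(1 << 20) for _ in range(200_000)]  # synthetic
    samples = [full_trace[i:i + 5_000] for i in range(0, 200_000, 40_000)]
    rates = [miss_rate(s) for s in samples]
    print(sum(rates) / len(rates), rates)   # mean miss rate and its empirical spread
    ```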

  11. Informing web-based communication curricula in veterinary education: a systematic review of web-based methods used for teaching and assessing clinical communication in medical education.

    PubMed

    Artemiou, Elpida; Adams, Cindy L; Toews, Lorraine; Violato, Claudio; Coe, Jason B

    2014-01-01

    We determined the Web-based configurations that are applied to teach medical and veterinary communication skills, evaluated their effectiveness, and suggested future educational directions for Web-based communication teaching in veterinary education. We performed a systematic search of CAB Abstracts, MEDLINE, Scopus, and ERIC limited to articles published in English between 2000 and 2012. The review focused on medical or veterinary undergraduate to clinical- or residency-level students. We selected studies for which the study population was randomized to the Web-based learning (WBL) intervention with a post-test comparison with another WBL or non-WBL method and that reported at least one empirical outcome. Two independent reviewers completed relevancy screening, data extraction, and synthesis of results using Kirkpatrick and Kirkpatrick's framework. The search retrieved 1,583 articles, and 10 met the final inclusion criteria. We identified no published articles on Web-based communication platforms in veterinary medicine; however, publications summarized from human medicine demonstrated that WBL provides a potentially reliable and valid approach for teaching and assessing communication skills. Student feedback on the use of virtual patients for teaching clinical communication skills has been positive, though evidence has suggested that practice with virtual patients prompted lower relation-building responses. Empirical outcomes indicate that WBL is a viable method for expanding the approach to teaching history taking and possibly to additional tasks of the veterinary medical interview.

  12. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    PubMed

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes this approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD perform poorly and are very time consuming. So in this paper, an extension of the PDE-based approach to 2-D space is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data, and some results are provided for image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  13. Multivariate Qst–Fst Comparisons: A Neutrality Test for the Evolution of the G Matrix in Structured Populations

    PubMed Central

    Martin, Guillaume; Chapuis, Elodie; Goudet, Jérôme

    2008-01-01

    Neutrality tests in quantitative genetics provide a statistical framework for the detection of selection on polygenic traits in wild populations. However, the existing method based on comparisons of divergence at neutral markers and quantitative traits (Qst–Fst) suffers from several limitations that hinder a clear interpretation of the results with typical empirical designs. In this article, we propose a multivariate extension of this neutrality test based on empirical estimates of the among-populations (D) and within-populations (G) covariance matrices by MANOVA. A simple pattern is expected under neutrality: D = 2Fst/(1 − Fst)G, so that neutrality implies both proportionality of the two matrices and a specific value of the proportionality coefficient. This pattern is tested using Flury's framework for matrix comparison [common principal-component (CPC) analysis], a well-known tool in G matrix evolution studies. We show the importance of using a Bartlett adjustment of the test for the small sample sizes typically found in empirical studies. We propose a dual test: (i) that the proportionality coefficient is not different from its neutral expectation [2Fst/(1 − Fst)] and (ii) that the MANOVA estimates of mean square matrices between and among populations are proportional. These two tests combined provide a more stringent test for neutrality than the classic Qst–Fst comparison and avoid several statistical problems. Extensive simulations of realistic empirical designs suggest that these tests correctly detect the expected pattern under neutrality and have enough power to efficiently detect mild to strong selection (homogeneous, heterogeneous, or mixed) when it is occurring on a set of traits. This method also provides a rigorous and quantitative framework for disentangling the effects of different selection regimes and of drift on the evolution of the G matrix. We discuss practical requirements for the proper application of our test in empirical studies and potential extensions. PMID:18245845
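
    The neutral expectation D = 2Fst/(1 − Fst) G invites a quick numerical check; the paper's actual test is the CPC analysis with a Bartlett adjustment, and the matrices below are hypothetical:

    ```python
    import numpy as np

    fst = 0.1
    c_neutral = 2 * fst / (1 - fst)                 # expected proportionality

    G = np.array([[1.00, 0.30], [0.30, 0.80]])      # within-population matrix
    D = np.array([[0.24, 0.07], [0.07, 0.19]])      # among-population matrix

    ratio = D @ np.linalg.inv(G)                    # approx. c * I if proportional
    c_hat = np.trace(ratio) / ratio.shape[0]
    print(c_neutral, c_hat)                         # compare fitted vs neutral c
    ```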

  14. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard

    PubMed Central

    Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton

    2017-01-01

    The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input "data." It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is transformed empirical ROC curves at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values, so the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze 2 real cancer diagnostic examples as an illustration.
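
    The abstract does not give the design matrix; under the standard binormal ROC model, the probit of sensitivity is linear in the probit of the false-positive rate, so a least-squares fit on empirically estimated operating points recovers the accuracy parameters. A sketch with hypothetical values:

    ```python
    import numpy as np
    from scipy.stats import norm

    fpr = np.array([0.05, 0.10, 0.20, 0.40, 0.60])    # 1 - specificity
    sens = np.array([0.35, 0.50, 0.66, 0.82, 0.91])   # sensitivity at same cuts

    # Binormal model: Phi^-1(sens) = a + b * Phi^-1(fpr); fit a, b by OLS.
    b, a = np.polyfit(norm.ppf(fpr), norm.ppf(sens), 1)
    auc = norm.cdf(a / np.sqrt(1 + b ** 2))           # binormal area under curve
    print(a, b, auc)
    ```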

  15. Semi-empirical spectrophotometric (SESp) method for the indirect determination of the ratio of cationic micellar binding constants of counterions X⁻ and Br⁻(K(X)/K(Br)).

    PubMed

    Khan, Mohammad Niyaz; Yusof, Nor Saadah Mohd; Razak, Norazizah Abdul

    2013-01-01

    The semi-empirical spectrophotometric (SESp) method, for the indirect determination of ion exchange constants (K(X)(Br)) of ion exchange processes occurring between counterions (X⁻ and Br⁻) at the cationic micellar surface, is described in this article. The method uses an anionic spectrophotometric probe molecule, N-(2-methoxyphenyl)phthalamate ion (1⁻), which measures the effects of varying concentrations of inert inorganic or organic salt (Na(v)X, v = 1, 2) on absorbance, (A(ob)) at 310 nm, of samples containing constant concentrations of 1⁻, NaOH and cationic micelles. The observed data fit satisfactorily to an empirical equation which gives the values of two empirical constants. These empirical constants lead to the determination of K(X)(Br) (= K(X)/K(Br), with K(X) and K(Br) representing cationic micellar binding constants of counterions X⁻ and Br⁻). This method gives values of K(X)(Br) for both moderately hydrophobic and hydrophilic X⁻. The values of K(X)(Br), obtained by using this method, are comparable with the corresponding values of K(X)(Br), obtained by the use of the semi-empirical kinetic (SEK) method, for different moderately hydrophobic X⁻. The values of K(X)(Br) for X = Cl⁻ and 2,6-Cl₂C6H₃CO₂⁻, obtained by the use of SESp and SEK methods, are similar to those obtained by the use of other different conventional methods.

  16. Educating psychotherapy supervisors.

    PubMed

    Watkins, C Edward

    2012-01-01

    What do we know clinically and empirically about the education of psychotherapy supervisors? In this paper, I attempt to address that question by: (1) reviewing briefly current thinking about psychotherapy supervisor training; and (2) examining the available research where supervisor training and supervision have been studied. The importance of such matters as training format and methods, supervision topics for study, supervisor development, and supervisor competencies are considered, and some prototypical, competency-based supervisor training programs that hold educational promise are identified and described. Twenty supervisor training studies are critiqued, and their implications for practice and research are examined. Based on this review of training programs and research, the following conclusions are drawn: (1) the clinical validity of supervisor education appears to be strong, solid, and sound, (2) although research suggests that supervisor training can have value in stimulating the development of supervisor trainees and better preparing them for the supervisory role, any such base of empirical support or validity should be regarded as tentative at best; and (3) the most formidable challenge for psychotherapy supervisor education may well be correcting the imbalance that currently exists between clinical and empirical validity and "raising the bar" on the rigor, relevance, and replicability of future supervisor training research.

  17. Determination of a Limited Scope Network's Lightning Detection Efficiency

    NASA Technical Reports Server (NTRS)

    Rompala, John T.; Blakeslee, R.

    2008-01-01

    This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information regarding: site signal detection thresholds, type of solution algorithm used, and range attenuation; to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
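
    One way to realize the described mapping is a Monte Carlo draw from the peak current distribution at each grid point: attenuate each simulated flash's signal to every site and count the sites exceeding their thresholds. Everything below (lognormal PCD, 1/r decay, trigger levels, four-site solution rule) is an illustrative assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def detection_efficiency(flash_xy, sites, thresholds, n_required=4,
                             n_draws=10_000):
        """Probability that a flash at flash_xy yields a network solution."""
        d = np.linalg.norm(sites - flash_xy, axis=1)                      # ranges (km)
        peak = rng.lognormal(mean=np.log(15.0), sigma=0.9, size=n_draws)  # kA
        signal = peak[:, None] / d[None, :]                               # 1/r decay
        n_detecting = (signal > thresholds[None, :]).sum(axis=1)
        return (n_detecting >= n_required).mean()

    sites = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
    thresholds = np.full(5, 0.25)     # per-site trigger levels (arbitrary units)
    print(detection_efficiency(np.array([30.0, 40.0]), sites, thresholds))
    ```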

  18. Trend extraction using empirical mode decomposition and statistical empirical mode decomposition: Case study: Kuala Lumpur stock market

    NASA Astrophysics Data System (ADS)

    Jaber, Abobaker M.

    2014-12-01

    Two nonparametric methods for the prediction and modeling of financial time series are proposed. The proposed techniques are designed to handle non-stationary, non-linear behavior and to extract meaningful components for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed components to be employed for prediction. The techniques were developed by coupling the Holt-Winters method with empirical mode decomposition (EMD) and with statistical empirical mode decomposition (SEMD), which extends the scope of EMD by smoothing. To show the performance of the proposed techniques, we analyze the daily closing prices of the Kuala Lumpur stock market index.

  19. Semi-empirical studies of atomic structure. Progress report, 1 July 1982-1 February 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, L.J.

    1983-01-01

    A program of studies of the properties of the heavy and highly ionized atomic systems which often occur as contaminants in controlled fusion devices is continuing. The project combines experimental measurements by fast-ion-beam excitation with semi-empirical data parametrizations to identify and exploit regularities in the properties of these very heavy and very highly ionized systems. The increasing use of spectroscopic line intensities as diagnostics for determining thermonuclear plasma temperatures and densities requires laboratory observation and analysis of such spectra, often to accuracies that exceed the capabilities of ab initio theoretical methods for these highly relativistic many electron systems. Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences are providing predictions for large classes of quantities, with a precision that is sharpened by subsequent measurements.

  20. Project- versus Lecture-Based Courses: Assessing the Role of Course Structure on Perceived Utility, Anxiety, Academic Performance, and Satisfaction in the Undergraduate Research Methods Course

    ERIC Educational Resources Information Center

    Rubenking, Bridget; Dodd, Melissa

    2018-01-01

    Previous research suggests that undergraduate research methods students doubt the utility of course content and experience math and research anxiety. Research also suggests involving students in hands-on, applied research activities, although empirical data on the scope and nature of these activities are lacking. This study compared academic…

  1. Optimal Placement of Dynamic Var Sources by Using Empirical Controllability Covariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Huang, Weihong; Sun, Kai

    In this paper, the empirical controllability covariance (ECC), which is calculated around the considered operating condition of a power system, is applied to quantify the degree of controllability of system voltages under specific dynamic var source locations. An optimal dynamic var source placement method addressing fault-induced delayed voltage recovery (FIDVR) issues is further formulated as an optimization problem that maximizes the determinant of ECC. The optimization problem is effectively solved by the NOMAD solver, which implements the mesh adaptive direct search algorithm. The proposed method is tested on an NPCC 140-bus system and the results show that the proposed method with fault specified ECC can solve the FIDVR issue caused by the most severe contingency with fewer dynamic var sources than the voltage sensitivity index (VSI)-based method. The proposed method with fault unspecified ECC does not depend on the settings of the contingency and can address more FIDVR issues than the VSI method when placing the same number of SVCs under different fault durations. It is also shown that the proposed method can help mitigate voltage collapse.
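
    Empirical controllability covariance generalizes the linear controllability Gramian by averaging state responses to perturbed inputs around the operating condition. A toy linear-system sketch in the spirit of empirical Gramians; the power-system model, NOMAD solver, and FIDVR criteria of the paper are not reproduced:

    ```python
    import numpy as np

    def empirical_controllability_cov(step, n_states, n_inputs, T=200, eps=0.01):
        """Sum outer products of state trajectories excited by +/- impulses
        of size eps on each input channel (empirical-Gramian style)."""
        W = np.zeros((n_states, n_states))
        for j in range(n_inputs):
            for sign in (+1.0, -1.0):
                u0 = np.zeros(n_inputs)
                u0[j] = sign * eps
                x = step(np.zeros(n_states), u0)      # impulse at t = 0
                for _ in range(T):
                    W += np.outer(x, x) / (2 * n_inputs * eps ** 2)
                    x = step(x, np.zeros(n_inputs))
        return W

    A = np.array([[0.9, 0.1], [0.0, 0.8]])            # toy discrete-time system
    B = np.array([[1.0], [0.5]])
    step = lambda x, u: A @ x + B @ u
    W = empirical_controllability_cov(step, n_states=2, n_inputs=1)
    print(np.linalg.det(W))    # the placement criterion maximizes det(W)
    ```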

  2. Particle Swarm-Based Translation Control for Immersed Tunnel Element in the Hong Kong-Zhuhai-Macao Bridge Project

    NASA Astrophysics Data System (ADS)

    Li, Jun-jun; Yang, Xiao-jun; Xiao, Ying-jie; Xu, Bo-wei; Wu, Hua-feng

    2018-03-01

    The immersed tunnel is an important part of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project. In immersed tunnel floating, translation, which includes straight and transverse movements, is the main working mode. To decide the magnitude and direction of the towing force for each tug, a particle swarm-based translation control method is presented for the non-powered immersed tunnel element. A linear weighted logarithmic function is exploited to avoid weak subgoals. In simulation, the particle swarm-based control method is evaluated and compared with the traditional empirical method in the case of the HZMB project. Simulation results show that the presented method delivers a performance improvement in terms of enhanced surplus towing force.

  3. Application of empirical Bayes methods to predict the rate of decline in ERG at the individual level among patients with retinitis pigmentosa.

    PubMed

    Qiu, Weiliang; Sandberg, Michael A; Rosner, Bernard

    2018-05-31

    Retinitis pigmentosa is one of the most common forms of inherited retinal degeneration. The electroretinogram (ERG) can be used to determine the severity of retinitis pigmentosa: the lower the ERG amplitude, the more severe the disease. In practice, for career, lifestyle, and treatment counseling, it is of interest to predict the ERG amplitude of a patient at a future time. One approach is prediction based on the average rate of decline for individual patients. However, there is considerable variation both in initial amplitude and in rate of decline. In this article, we propose an empirical Bayes (EB) approach to incorporate the variations in initial amplitude and rate of decline for the prediction of ERG amplitude at the individual level. We applied the EB method to a collection of ERGs from 898 patients with 3 or more visits over 5 or more years of follow-up tested in the Berman-Gund Laboratory. The predicted values at the last (kth) visit, obtained with the proposed method from data for the first k-1 visits, are highly correlated with the observed values at the kth visit (Spearman correlation = 0.93) and correlate more strongly with the observed values than predictions based on either the population-average decline rate or the individual decline rate. The mean square errors for predicted values obtained by the EB method are also smaller than those predicted by the other methods. Copyright © 2018 John Wiley & Sons, Ltd.
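
    At its core, the EB prediction shrinks each patient's fitted decline rate toward the population mean in proportion to the relative precisions. A minimal random-slopes sketch (the full model also pools initial amplitudes; every number below is hypothetical):

    ```python
    def eb_slope(ind_slope, ind_se, pop_mean, pop_var):
        """Empirical Bayes slope: precision-weighted average of the
        individual OLS slope and the population mean slope."""
        w = pop_var / (pop_var + ind_se ** 2)   # weight on the patient's own data
        return w * ind_slope + (1 - w) * pop_mean

    # A patient whose own fit says -12%/yr with a large standard error is
    # pulled toward the population average of -8.5%/yr.
    print(eb_slope(ind_slope=-0.12, ind_se=0.05, pop_mean=-0.085, pop_var=0.001))
    ```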

  4. Airborne electromagnetic bathymetry investigations in Port Lincoln, South Australia - comparison with an equivalent floating transient electromagnetic system

    NASA Astrophysics Data System (ADS)

    Vrbancich, Julian

    2011-09-01

    Helicopter time-domain airborne electromagnetic (AEM) methodology is being investigated as a reconnaissance technique for bathymetric mapping in shallow coastal waters, especially in areas affected by water turbidity where light detection and ranging (LIDAR) and hyperspectral techniques may be limited. Previous studies in Port Lincoln, South Australia, used a floating AEM time-domain system to provide an upper limit to the expected bathymetric accuracy based on current technology for AEM systems. The survey lines traced by the towed floating system were also flown with an airborne system using the same transmitter and receiver electronic instrumentation, on two separate occasions. On the second occasion, significant improvements had been made to the instrumentation to reduce the system self-response at early times. A comparison of the interpreted water depths obtained from the airborne and floating systems is presented, showing the degradation in bathymetric accuracy obtained from the airborne data. An empirical data correction method based on modelled and observed EM responses over deep seawater (i.e. a quasi half-space response) at varying survey altitudes, combined with known seawater conductivity measured during the survey, can lead to significant improvements in interpreted water depths and serves as a useful method for checking system calibration. Another empirical data correction method based on observed and modelled EM responses in shallow water was shown to lead to similar improvements in interpreted water depths; however, this procedure is notably inferior to the quasi half-space response because more parameters need to be assumed in order to compute the modelled EM response. A comparison between the results of the two airborne surveys in Port Lincoln shows that uncorrected data obtained from the second airborne survey gives good agreement with known water depths without the need to apply any empirical corrections to the data. This result significantly decreases the data-processing time thereby enabling the AEM method to serve as a rapid reconnaissance technique for bathymetric mapping.

  5. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from strictly empirical to fully computational, with semi-empirical, analytical, and analytical/computational approaches in between. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  6. Evaluation of directional normalization methods for Landsat TM/ETM+ over primary Amazonian lowland forests

    NASA Astrophysics Data System (ADS)

    Van doninck, Jasper; Tuomisto, Hanna

    2017-06-01

    Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflectance distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consist of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observations. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.
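
    A minimal sketch of the simplest approach mentioned here, empirical view-angle normalization, is given below; the per-image gradient is estimated by linear regression and removed, under the assumption that forest reflectance varies only with view angle across the scene. Function and variable names are ours.

    ```python
    import numpy as np

    def normalize_view_angle(reflectance, view_angle):
        """Remove the across-track reflectance gradient observed in one image.

        reflectance: 1-D array of surface reflectance for one band (forest pixels)
        view_angle:  signed view zenith angle per pixel (e.g., negative backscatter)
        Returns reflectance adjusted to nadir viewing (view angle = 0).
        """
        slope, _ = np.polyfit(view_angle, reflectance, 1)  # empirical gradient
        return reflectance - slope * view_angle
    ```

    As the abstract notes, a correction fitted per image normalizes to nadir well but ignores the solar geometry, which is why it underperforms in multitemporal stacks with strongly varying solar angles.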

  7. Fractal Theory for Permeability Prediction, Venezuelan and USA Wells

    NASA Astrophysics Data System (ADS)

    Aldana, Milagrosa; Altamiranda, Dignorah; Cabrera, Ana

    2014-05-01

    Inferring petrophysical parameters such as permeability, porosity, water saturation, capillary pressure, etc., from the analysis of well logs or other available core data has always been of critical importance in the oil industry. Permeability in particular, which is considered to be a complex parameter, has been inferred using both empirical and theoretical techniques. The main goal of this work is to predict permeability values in different wells using Fractal Theory, based on a method proposed by Pape et al. (1999). This approach uses the relationship between permeability and the geometric form of the pore space of the rock. The method is based on the modified Kozeny-Carman equation and a fractal pattern, which allows permeability to be determined as a function of the cementation exponent, porosity, and the fractal dimension. Data from wells located in Venezuela and the United States of America are analyzed. Using porosity and permeability data obtained from core samples and applying the Fractal Theory method, we calculated the prediction equations for each well. Initially, the equations were trained with 50% of the data available for each well; they were then tested against 100% of the data to analyze possible trends in their distribution. This procedure gave excellent results in all the wells in spite of their geographic distance, generating permeability models with the potential to accurately predict permeability logs in the remaining parts of the well for which there are no core samples, even using only porosity logs. Additionally, empirical models were used to determine permeability, and the results were compared with those obtained by applying the fractal method. The results indicated that, although there are empirical equations that give a proper adjustment, the predictions obtained using fractal theory give a better fit to the core reference data.
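
    A sketch of the described train/evaluate procedure, assuming a Pape-style three-term power series in porosity (coefficient names and the synthetic core data are illustrative, not the paper's values):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fractal_perm(phi, a, b, c, m):
        # Three-term porosity-permeability series; the exponent m encodes
        # the fractal dimension of the pore space.
        return a * phi + b * phi**2 + c * phi**m

    rng = np.random.default_rng(0)
    phi_core = rng.uniform(0.05, 0.30, 120)          # core-sample porosities
    k_core = fractal_perm(phi_core, 30.0, 7e3, 2e8, 10.0) * rng.lognormal(0.0, 0.3, 120)

    train = rng.permutation(120)[:60]                # train on 50% of the cores
    params, _ = curve_fit(fractal_perm, phi_core[train], k_core[train],
                          p0=[10.0, 1e3, 1e8, 10.0], maxfev=20000)
    k_pred = fractal_perm(phi_core, *params)         # then infer over 100% of the data
    ```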

  8. Computer implemented empirical mode decomposition method apparatus, and article of manufacture utilizing curvature extrema

    NASA Technical Reports Server (NTRS)

    Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)

    2003-01-01

    A computer implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMFs) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in terms of the IMFs, the signal components have well-behaved Hilbert transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMFs. Then, these IMFs, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMFs through the Hilbert transform give a full energy-frequency-time distribution of the data, which is designated as the Hilbert Spectrum.
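
    A minimal sketch of the sifting loop at the heart of EMD is shown below; it uses ordinary local extrema and cubic-spline envelopes only, and omits the curvature-extrema refinement and the stopping criteria of the patented method.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def sift_once(x, t):
        """One sifting pass: subtract the mean of the extrema envelopes."""
        maxima = argrelextrema(x, np.greater)[0]
        minima = argrelextrema(x, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            return None  # too few extrema: x is a residue, not an IMF candidate
        upper = CubicSpline(t[maxima], x[maxima])(t)
        lower = CubicSpline(t[minima], x[minima])(t)
        return x - 0.5 * (upper + lower)

    def emd(x, t, n_imfs=5, n_sift=10):
        """Extract up to n_imfs intrinsic mode functions by repeated sifting."""
        imfs, residue = [], x.astype(float).copy()
        for _ in range(n_imfs):
            h = residue.copy()
            for _ in range(n_sift):      # fixed sift count as a simple stop rule
                h_new = sift_once(h, t)
                if h_new is None:
                    return imfs, residue
                h = h_new
            imfs.append(h)
            residue = residue - h
        return imfs, residue
    ```

    Each extracted IMF can then be passed through the Hilbert transform (e.g., `scipy.signal.hilbert`) to obtain the instantaneous amplitude and frequency that make up the Hilbert spectrum.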

  9. An empirical identification and categorisation of training best practices for ERP implementation projects

    NASA Astrophysics Data System (ADS)

    Esteves, Jose Manuel

    2014-11-01

    Although training is one of the most cited critical success factors in Enterprise Resource Planning (ERP) systems implementations, few empirical studies have attempted to examine the characteristics of management of the training process within ERP implementation projects. Based on the data gathered from a sample of 158 respondents across four stakeholder groups involved in ERP implementation projects, and using a mixed method design, we have assembled a derived set of training best practices. Results suggest that the categorised list of ERP training best practices can be used to better understand training activities in ERP implementation projects. Furthermore, the results reveal that the company size and location have an impact on the relevance of training best practices. This empirical study also highlights the need to investigate the role of informal workplace trainers in ERP training activities.

  10. Bearing performance degradation assessment based on a combination of empirical mode decomposition and k-medoids clustering

    NASA Astrophysics Data System (ADS)

    Rai, Akhand; Upadhyay, S. H.

    2017-09-01

    The bearing is the most critical component in rotating machinery since it is the most susceptible to failure. The monitoring of degradation in bearings is therefore of great concern for averting sudden machinery breakdown. In this study, a novel method for bearing performance degradation assessment (PDA) based on a combination of empirical mode decomposition (EMD) and k-medoids clustering is proposed. The fault features are extracted from the bearing signals using the EMD process. The extracted features are then subjected to k-medoids-based clustering to obtain the normal-state and failure-state cluster centres. A confidence value (CV) curve based on the dissimilarity of the test data object to the normal state is obtained and employed as the degradation indicator for assessing the health of bearings. The proposed approach is applied to the vibration signals collected in run-to-failure tests of bearings to assess its effectiveness in bearing PDA. To validate the superiority of the suggested approach, it is compared with the commonly used time-domain features RMS and kurtosis, the well-known fault diagnosis method envelope analysis (EA), and existing PDA classifiers, i.e., self-organizing maps (SOM) and fuzzy c-means (FCM). The results demonstrate that the recommended method outperforms the time-domain features and the SOM- and FCM-based PDA in detecting early-stage degradation more precisely. Moreover, EA can be used as an accompanying method to confirm the early-stage defect detected by the proposed bearing PDA approach. The study shows the potential application of k-medoids clustering as an effective tool for PDA of bearings.
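
    A sketch of the confidence-value idea follows, assuming EMD-derived features have already been computed per vibration record; the `KMedoids` class is from the scikit-learn-extra package, and the exponential CV mapping is one plausible choice, not necessarily the paper's exact formula.

    ```python
    import numpy as np
    from sklearn_extra.cluster import KMedoids  # scikit-learn-extra package

    def confidence_value_curve(features, n_normal=100):
        """Degradation indicator from dissimilarity to the normal state.

        features: (n_samples, n_features) EMD-based features, ordered in time;
        the first n_normal samples are treated as the healthy baseline.
        """
        km = KMedoids(n_clusters=1, random_state=0).fit(features[:n_normal])
        centre = km.cluster_centers_[0]           # normal-state medoid
        d = np.linalg.norm(features - centre, axis=1)
        d0 = d[:n_normal].mean()                  # typical healthy-state scatter
        return np.exp(-d / d0)                    # ~1 when healthy, -> 0 as degradation grows
    ```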

  11. EXPLORING FUNCTIONAL CONNECTIVITY IN FMRI VIA CLUSTERING.

    PubMed

    Venkataraman, Archana; Van Dijk, Koene R A; Buckner, Randy L; Golland, Polina

    2009-04-01

    In this paper we investigate the use of data-driven clustering methods for functional connectivity analysis in fMRI. In particular, we consider the K-Means and Spectral Clustering algorithms as alternatives to the commonly used Seed-Based Analysis. To enable clustering of the entire brain volume, we use the Nyström Method to approximate the necessary spectral decompositions. We apply K-Means, Spectral Clustering and Seed-Based Analysis to resting-state fMRI data collected from 45 healthy young adults. Without placing any a priori constraints, both clustering methods yield partitions that are associated with brain systems previously identified via Seed-Based Analysis. Our empirical results suggest that clustering provides a valuable tool for functional connectivity analysis.
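
    The Nyström step is what makes whole-brain spectral clustering tractable; a minimal numpy sketch is below (variable names ours), approximating the leading eigenvectors of the full voxel affinity matrix from affinities involving only m sampled landmark voxels.

    ```python
    import numpy as np

    def nystrom_embedding(K_nm, K_mm, n_components=10):
        """Approximate leading eigenvectors of a large affinity matrix.

        K_nm: (n, m) affinities between all n voxels and m landmark voxels
        K_mm: (m, m) affinities among the landmarks themselves
        """
        vals, vecs = np.linalg.eigh(K_mm)
        vals, vecs = vals[::-1][:n_components], vecs[:, ::-1][:, :n_components]
        # Nystrom extension: eigenvectors of the full matrix, up to scaling
        return K_nm @ vecs / vals
    ```

    Rows of the returned embedding (optionally row-normalized) can then be clustered with k-means, exactly as in standard spectral clustering but without ever forming the full n-by-n affinity matrix.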

  12. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(V_n, S_n) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(V_n, S_n)], for arbitrary functions g(V_n, S_n) of the numbers of false positives V_n and true positives S_n. Of particular interest are error rates based on the proportion g(V_n, S_n) = V_n/(V_n + S_n) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[V_n/(V_n + S_n)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
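
    For orientation, the classical Benjamini-Hochberg step-up procedure used as the comparison baseline is easy to state in code; the resampling-based empirical Bayes procedures themselves additionally require sampling null test statistics from an estimated joint distribution and are not reproduced here.

    ```python
    import numpy as np

    def benjamini_hochberg(pvals, alpha=0.05):
        """Classical BH linear step-up procedure: reject the k smallest
        p-values, where k is the largest i with p_(i) <= (i/m) * alpha."""
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        passed = p[order] <= alpha * np.arange(1, m + 1) / m
        k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True
        return reject
    ```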

  13. The difficulty of measuring the absorption of scattered sunlight by H2O and CO2 in volcanic plumes: A comment on Pering et al. “A novel and inexpensive method for measuring volcanic plume water fluxes at high temporal resolution,” Remote Sens. 2017, 9, 146

    USGS Publications Warehouse

    Kern, Christoph

    2017-01-01

    In their recent study, Pering et al. (2017) presented a novel method for measuring volcanic water vapor fluxes. Their method is based on imaging volcanic gas and aerosol plumes using a camera sensitive to the near-infrared (NIR) absorption of water vapor. The imaging data are empirically calibrated by comparison with in situ water measurements made within the plumes. Though the presented method may give reasonable results over short time scales, the authors fail to recognize the sensitivity of the technique to light scattering on aerosols within the plume. In fact, the signals measured by Pering et al. are not related to the absorption of NIR radiation by water vapor within the plume. Instead, the measured signals are most likely caused by a change in the effective light path of the detected radiation through the atmospheric background water vapor column. Therefore, their method is actually based on establishing an empirical relationship between in-plume scattering efficiency and plume water content. Since this relationship is sensitive to plume aerosol abundance and numerous environmental factors, the method will only yield accurate results if it is calibrated very frequently using other measurement techniques.

  14. A hierarchy of effective teaching and learning to acquire competence in evidenced-based medicine

    PubMed Central

    Khan, Khalid S; Coomarasamy, Arri

    2006-01-01

    Background A variety of methods exists for teaching and learning evidence-based medicine (EBM). However, there is much debate about the effectiveness of various EBM teaching and learning activities, resulting in a lack of consensus as to what methods constitute the best educational practice. There is a need for a clear hierarchy of educational activities to effectively impart and acquire competence in EBM skills. This paper develops such a hierarchy based on current empirical and theoretical evidence. Discussion EBM requires that health care decisions be based on the best available valid and relevant evidence. To achieve this, teachers delivering EBM curricula need to inculcate amongst learners the skills to gain, assess, apply, integrate and communicate new knowledge in clinical decision-making. Empirical and theoretical evidence suggests that there is a hierarchy of teaching and learning activities in terms of their educational effectiveness: Level 1, interactive and clinically integrated activities; Level 2(a), interactive but classroom based activities; Level 2(b), didactic but clinically integrated activities; and Level 3, didactic, classroom or standalone teaching. Summary All health care professionals need to understand and implement the principles of EBM to improve care of their patients. Interactive and clinically integrated teaching and learning activities provide the basis for the best educational practice in this field. PMID:17173690

  15. Wavelet-bounded empirical mode decomposition for measured time series analysis

    NASA Astrophysics Data System (ADS)

    Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2018-01-01

    Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated as wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function. It follows that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result with the objective of minimizing the bounding-function area and with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD, we apply the proposed method first to a stationary, two-component signal and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.
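
    The bounding-area objective can be sketched as follows; the exact bounding function used by WBEMD is not reproduced here, so a Gaussian envelope centered on the dominant spectral peak stands in for it, and all names are ours.

    ```python
    import numpy as np

    def bounding_area(imf, fs):
        """Spectral-isolation objective: small area = well-separated IMF."""
        spec = np.abs(np.fft.rfft(imf))
        freqs = np.fft.rfftfreq(len(imf), 1.0 / fs)
        f0 = freqs[np.argmax(spec)]                # characteristic frequency
        # spectral width: second moment of the spectrum about f0
        w = np.sqrt(np.sum(spec * (freqs - f0) ** 2) / np.sum(spec))
        bound = spec.max() * np.exp(-0.5 * ((freqs - f0) / max(w, 1e-12)) ** 2)
        # keep the bound above the spectrum so it truly bounds the IMF
        return np.trapz(np.maximum(bound, spec), freqs)
    ```

    An outer optimizer (e.g., `scipy.optimize.minimize` over masking-signal amplitude and frequency) would then minimize this area, re-running the masked EMD extraction at each candidate point.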

  16. The Use of Empirical Methods for Testing Granular Materials in Analogue Modelling

    PubMed Central

    Montanari, Domenico; Agostini, Andrea; Bonini, Marco; Corti, Giacomo; Del Ventisette, Chiara

    2017-01-01

    The behaviour of a granular material is mainly dependent on its frictional properties, angle of internal friction, and cohesion, which, together with material density, are the key factors to be considered during the scaling procedure of analogue models. The frictional properties of a granular material are usually investigated by means of technical instruments such as a Hubbert-type apparatus and ring shear testers, which allow for investigating the response of the tested material to a wide range of applied stresses. Here we explore the possibility of determining material properties by means of different empirical methods applied to mixtures of quartz and K-feldspar sand. Empirical methods have the great advantage of measuring the properties of a given analogue material under the experimental conditions, which are strongly sensitive to the handling techniques. Finally, the results obtained from the empirical methods have been compared with ring shear tests carried out on the same materials; the two sets of measurements show satisfactory agreement. PMID:28772993

  17. The effects of time-varying observation errors on semi-empirical sea-level projections

    DOE PAGES

    Ruckert, Kelsey L.; Guan, Yawen; Bakker, Alexander M. R.; ...

    2016-11-30

    Sea-level rise is a key driver of projected flooding risks. The design of strategies to manage these risks often hinges on projections that inform decision-makers about the surrounding uncertainties. Producing semi-empirical sea-level projections is difficult, for example, due to the complexity of the error structure of the observations, such as time-varying (heteroskedastic) observation errors and autocorrelation of the data-model residuals. This raises the question of how neglecting the error structure impacts hindcasts and projections. Here, we quantify this effect on sea-level projections and parameter distributions by using a simple semi-empirical sea-level model. Specifically, we compare three model-fitting methods: a frequentist bootstrap as well as a Bayesian inversion with and without considering heteroskedastic residuals. All methods produce comparable hindcasts, but the parametric distributions and projections differ considerably based on methodological choices. In conclusion, our results show that the differences based on the methodological choices are enhanced in the upper tail projections. For example, the Bayesian inversion accounting for heteroskedasticity increases the sea-level anomaly with a 1% probability of being equaled or exceeded in the year 2050 by about 34% and about 40% in the year 2100 compared to a frequentist bootstrap. These results indicate that neglecting known properties of the observation errors and the data-model residuals can lead to low-biased sea-level projections.

  18. The effects of time-varying observation errors on semi-empirical sea-level projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruckert, Kelsey L.; Guan, Yawen; Bakker, Alexander M. R.

    Sea-level rise is a key driver of projected flooding risks. The design of strategies to manage these risks often hinges on projections that inform decision-makers about the surrounding uncertainties. Producing semi-empirical sea-level projections is difficult, for example, due to the complexity of the error structure of the observations, such as time-varying (heteroskedastic) observation errors and autocorrelation of the data-model residuals. This raises the question of how neglecting the error structure impacts hindcasts and projections. Here, we quantify this effect on sea-level projections and parameter distributions by using a simple semi-empirical sea-level model. Specifically, we compare three model-fitting methods: a frequentist bootstrap as well as a Bayesian inversion with and without considering heteroskedastic residuals. All methods produce comparable hindcasts, but the parametric distributions and projections differ considerably based on methodological choices. In conclusion, our results show that the differences based on the methodological choices are enhanced in the upper tail projections. For example, the Bayesian inversion accounting for heteroskedasticity increases the sea-level anomaly with a 1% probability of being equaled or exceeded in the year 2050 by about 34% and about 40% in the year 2100 compared to a frequentist bootstrap. These results indicate that neglecting known properties of the observation errors and the data-model residuals can lead to low-biased sea-level projections.

  19. Improving risk assessment of violence among military veterans: an evidence-based approach for clinical decision-making.

    PubMed

    Elbogen, Eric B; Fuller, Sara; Johnson, Sally C; Brooks, Stephanie; Kinneer, Patricia; Calhoun, Patrick S; Beckham, Jean C

    2010-08-01

    Increased media attention to post-deployment violence highlights the need to develop effective models to guide risk assessment among military Veterans. Ideally, a method would help identify which Veterans are most at risk for violence so that it can be determined what could be done to prevent violent behavior. This article suggests how empirical approaches to risk assessment used successfully in civilian populations can be applied to Veterans. A review was conducted of the scientific literature on Veteran populations regarding factors related to interpersonal violence generally and to domestic violence specifically. A checklist was then generated of empirically-supported risk factors for clinicians to consider in practice. To conceptualize how these known risk factors relate to a Veteran's violence potential, risk assessment scholarship was utilized to develop an evidence-based method to guide mental health professionals. The goals of this approach are to integrate science into practice, overcome logistical barriers, and permit more effective assessment, monitoring, and management of violence risk for clinicians working with Veterans, both in Department of Veteran Affairs settings and in the broader community. Research is needed to test the predictive validity of risk assessment models. Ultimately, the use of a systematic, empirical framework could lead to improved clinical decision-making in the area of risk assessment and potentially help prevent violence among Veterans. Published by Elsevier Ltd.

  20. Similarity indices based on link weight assignment for link prediction of unweighted complex networks

    NASA Astrophysics Data System (ADS)

    Liu, Shuxin; Ji, Xinsheng; Liu, Caixia; Bai, Yi

    2017-01-01

    Many link prediction methods have been proposed for predicting the likelihood that a link exists between two nodes in complex networks. Among these methods, similarity indices are receiving close attention. Most similarity-based methods assume that the contribution of links with different topological structures is the same in the similarity calculations. This paper proposes a local weighted method, which weights the strength of connection between each pair of nodes. Based on the local weighted method, six local weighted similarity indices extended from unweighted similarity indices (including Common Neighbor (CN), Adamic-Adar (AA), Resource Allocation (RA), Salton, Jaccard and Local Path (LP) index) are proposed. Empirical study has shown that the local weighted method can significantly improve the prediction accuracy of these unweighted similarity indices and that in sparse and weakly clustered networks, the indices perform even better.
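
    A sketch of the idea for the common-neighbor case is below; the paper's specific weight assignment is not reproduced, so a degree-based weight function stands in as an illustrative assumption.

    ```python
    import networkx as nx

    def inverse_degree_weight(G, u, v):
        # Illustrative weight assignment: links between low-degree nodes are
        # treated as stronger evidence (an assumption, not the paper's rule).
        return 1.0 / (G.degree(u) * G.degree(v)) ** 0.5

    def weighted_common_neighbors(G, x, y, weight_fn=inverse_degree_weight):
        """Local weighted CN score: sum the assigned strengths of the two
        links joining each common neighbor z to x and y."""
        return sum(weight_fn(G, x, z) + weight_fn(G, z, y)
                   for z in nx.common_neighbors(G, x, y))
    ```

    The same weighting can be dropped into the AA, RA, Salton, Jaccard, or LP scores by replacing each unit link contribution with the assigned weight.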

  1. Neuroimaging mechanisms of change in psychotherapy for addictive behaviors: emerging translational approaches that bridge biology and behavior.

    PubMed

    Feldstein Ewing, Sarah W; Chung, Tammy

    2013-06-01

    Research on mechanisms of behavior change provides an innovative method to improve treatment for addictive behaviors. An important extension of mechanisms of change research involves the use of translational approaches, which examine how basic biological (i.e., brain-based mechanisms) and behavioral factors interact in initiating and sustaining positive behavior change as a result of psychotherapy. Articles in this special issue include integrative conceptual reviews and innovative empirical research on brain-based mechanisms that may underlie risk for addictive behaviors and response to psychotherapy from adolescence through adulthood. Review articles discuss hypothesized mechanisms of change for cognitive and behavioral therapies, mindfulness-based interventions, and neuroeconomic approaches. Empirical articles cover a range of addictive behaviors, including use of alcohol, cigarettes, marijuana, cocaine, and pathological gambling and represent a variety of imaging approaches including fMRI, magneto-encephalography, real-time fMRI, and diffusion tensor imaging. Additionally, a few empirical studies directly examine brain-based mechanisms of change, whereas others examine brain-based indicators as predictors of treatment outcome. Finally, two commentaries discuss craving as a core feature of addiction, and the importance of a developmental approach to examining mechanisms of change. Ultimately, translational research on mechanisms of behavior change holds promise for increasing understanding of how psychotherapy may modify brain structure and functioning and facilitate the initiation and maintenance of positive treatment outcomes for addictive behaviors. 2013 APA, all rights reserved

  2. Neuroimaging mechanisms of change in psychotherapy for addictive behaviors: Emerging translational approaches that bridge biology and behavior

    PubMed Central

    Feldstein Ewing, Sarah W.; Chung, Tammy

    2013-01-01

    Research on mechanisms of behavior change provides an innovative method to improve treatment for addictive behaviors. An important extension of mechanisms of change research involves the use of translational approaches, which examine how basic biological (i.e., brain-based mechanisms) and behavioral factors interact in initiating and sustaining positive behavior change as a result of psychotherapy. Articles in this special issue include integrative conceptual reviews and innovative empirical research on brain-based mechanisms that may underlie risk for addictive behaviors and response to psychotherapy from adolescence through adulthood. Review articles discuss hypothesized mechanisms of change for cognitive and behavioral therapies, mindfulness-based interventions, and neuroeconomic approaches. Empirical articles cover a range of addictive behaviors, including use of alcohol, cigarettes, marijuana, cocaine, and pathological gambling and represent a variety of imaging approaches including fMRI, magneto-encephalography, real-time fMRI, and diffusion tensor imaging. Additionally, a few empirical studies directly examined brain-based mechanisms of change, whereas others examined brain-based indicators as predictors of treatment outcome. Finally, two commentaries discuss craving as a core feature of addiction, and the importance of a developmental approach to examining mechanisms of change. Ultimately, translational research on mechanisms of behavior change holds promise for increasing understanding of how psychotherapy may modify brain structure and functioning and facilitate the initiation and maintenance of positive treatment outcomes for addictive behaviors. PMID:23815447

  3. Meta-analysis of haplotype-association studies: comparison of methods and empirical evaluation of the literature

    PubMed Central

    2011-01-01

    Background Meta-analysis is a popular methodology in several fields of medical research, including genetic association studies. However, the methods used for meta-analysis of association studies that report haplotypes have not been studied in detail. In this work, methods for performing meta-analysis of haplotype association studies are summarized, compared and presented in a unified framework, along with an empirical evaluation of the literature. Results We present multivariate methods that use summary-based data as well as methods that use binary and count data in a generalized linear mixed model framework (logistic regression, multinomial regression and Poisson regression). The methods presented here avoid the inflation of the type I error rate that could result from the traditional approach of comparing a haplotype against the remaining ones, and they can be fitted using standard software. Moreover, formal global tests are presented for assessing the statistical significance of the overall association. Although the methods presented here assume that the haplotypes are directly observed, they can be easily extended to allow for such uncertainty by weighting the haplotypes by their probability. Conclusions An empirical evaluation of the published literature and a comparison against meta-analyses that use single nucleotide polymorphisms suggest that studies reporting meta-analysis of haplotypes include approximately half as many studies and produce significant results twice as often. We show that this excess of statistically significant results stems from the sub-optimal method of analysis used and that, in approximately half of the cases, the statistical significance is refuted if the data are properly re-analyzed. Illustrative examples of code are given in Stata, and it is anticipated that the methods developed in this work will be widely applied in the meta-analysis of haplotype association studies. PMID:21247440

  4. Merger of three modeling approaches to assess potential effects of climate change on trees in the eastern United States

    Treesearch

    Louis R. Iverson; Anantha M. Prasad; Stephen N. Matthews; Matthew P. Peters

    2010-01-01

    Climate change will likely cause impacts that are species specific and significant; modeling is critical to better understand potential changes in suitable habitat. We use empirical, abundance-based habitat models utilizing decision tree-based ensemble methods to explore potential changes of 134 tree species habitats in the eastern United States (http://www.nrs.fs.fed....

  5. A Perceptual Repetition Blindness Effect

    NASA Technical Reports Server (NTRS)

    Hochhaus, Larry; Johnston, James C.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    Before concluding Repetition Blindness is a perceptual phenomenon, alternative explanations based on memory retrieval problems and report bias must be rejected. Memory problems were minimized by requiring a judgment about only a single briefly displayed field. Bias and sensitivity effects were empirically measured with an ROC-curve analysis method based on confidence ratings. Results from five experiments support the hypothesis that Repetition Blindness can be a perceptual phenomenon.

  6. Student Background, School Climate, School Disorder, and Student Achievement: An Empirical Study of New York City's Middle Schools

    ERIC Educational Resources Information Center

    Chen, Greg; Weikart, Lynne A.

    2008-01-01

    This study develops and tests a school disorder and student achievement model based upon the school climate framework. The model was fitted to 212 New York City middle schools using structural equation modeling. The analysis shows that the model fits the data well based upon test statistics and goodness-of-fit indices. The…

  7. A Metasynthesis of the Complementarity of Culturally Responsive and Inquiry-Based Science Education in K-12 Settings: Implications for Advancing Equitable Science Teaching and Learning

    ERIC Educational Resources Information Center

    Brown, Julie C.

    2017-01-01

    Employing metasynthesis as a method, this study examined 52 empirical articles on culturally relevant and responsive science education in K-12 settings to determine the nature and scope of complementarity between culturally responsive and inquiry-based science practices (i.e., science and engineering practices identified in the National Research…

  8. Empirical deck for phased construction and widening [summary].

    DOT National Transportation Integrated Search

    2017-06-01

    The most common method used to design and analyze bridge decks, termed the traditional method, treats a deck slab as if it were made of strips supported by inflexible girders. An alternative, the empirical method, treats the deck slab as a ...

  9. Toward a Model-Based Approach to the Clinical Assessment of Personality Psychopathology

    PubMed Central

    Eaton, Nicholas R.; Krueger, Robert F.; Docherty, Anna R.; Sponheim, Scott R.

    2015-01-01

    Recent years have witnessed tremendous growth in the scope and sophistication of statistical methods available to explore the latent structure of psychopathology, involving continuous, discrete, and hybrid latent variables. The availability of such methods has fostered optimism that they can facilitate movement from classification primarily crafted through expert consensus to classification derived from empirically based models of psychopathological variation. The explication of diagnostic constructs with empirically supported structures can then facilitate the development of assessment tools that appropriately characterize these constructs. Our goal in this paper is to illustrate how new statistical methods can inform conceptualization of personality psychopathology and therefore its assessment. We use magical thinking as an example, because both theory and earlier empirical work suggested the possibility of discrete aspects to the latent structure of personality psychopathology, particularly forms of psychopathology involving distortions of reality testing, yet other data suggest that personality psychopathology is generally continuous in nature. We directly compared the fit of a variety of latent variable models to magical thinking data from a sample enriched with clinically significant variation in psychotic symptomatology for explanatory purposes. Findings generally suggested a continuous latent variable model best represented magical thinking, but results varied somewhat depending on different indices of model fit. We discuss the implications of the findings for classification and applied personality assessment. We also highlight some limitations of this type of approach that are illustrated by these data, including the importance of substantive interpretation, in addition to use of model fit indices, when evaluating competing structural models. PMID:24007309

  10. Empirical Mining of Large Data Sets Already Helps to Solve Practical Ecological Problems; A Panoply of Working Examples (Invited)

    NASA Astrophysics Data System (ADS)

    Hargrove, W. W.; Hoffman, F. M.; Kumar, J.; Spruce, J.; Norman, S. P.

    2013-12-01

    Here we present diverse examples where empirical mining and statistical analysis of large data sets have already been shown to be useful for a wide variety of practical decision-making problems within the realm of large-scale ecology. Because a full understanding and appreciation of particular ecological phenomena are possible only after hypothesis-directed research regarding the existence and nature of that process, some ecologists may feel that purely empirical data harvesting may represent a less-than-satisfactory approach. Restricting ourselves exclusively to process-driven approaches, however, may actually slow progress, particularly for more complex or subtle ecological processes. We may not be able to afford the delays caused by such directed approaches. Rather than attempting to formulate and ask every relevant question correctly, empirical methods allow trends, relationships and associations to emerge freely from the data themselves, unencumbered by a priori theories, ideas and prejudices that have been imposed upon them. Although they cannot directly demonstrate causality, empirical methods can be extremely efficient at uncovering strong correlations with intermediate "linking" variables. In practice, these correlative structures and linking variables, once identified, may provide sufficient predictive power to be useful themselves. Such correlation "shadows" of causation can be harnessed by, e.g., Bayesian Belief Nets, which bias ecological management decisions, made with incomplete information, toward favorable outcomes. Empirical data-harvesting also generates a myriad of testable hypotheses regarding processes, some of which may even be correct. Quantitative statistical regionalizations based on multivariate similarity have lent insights into carbon eddy-flux direction and magnitude, wildfire biophysical conditions, phenological ecoregions useful for vegetation type mapping and monitoring, forest disease risk maps (e.g., sudden oak death), global aquatic ecoregion risk maps for aquatic invasives, and forest vertical structure ecoregions (e.g., using extensive LiDAR data sets). Multivariate Spatio-Temporal Clustering, which quantitatively places alternative future conditions on a common footing with present conditions, allows prediction of present and future shifts in tree species ranges, given alternative climatic change forecasts. ForWarn, a forest disturbance detection and monitoring system mining 12 years of national 8-day MODIS phenology data, has been operating since 2010, producing national maps every 8 days showing many kinds of potential forest disturbances. Forest resource managers can view disturbance maps via a web-based viewer, and alerts are issued when particular forest disturbances are seen. Regression-based decadal trend analysis showing long-term forest thrive and decline areas, and individual-based, brute-force supercomputing to map potential movement corridors and migration routes across landscapes will also be discussed. As significant ecological changes occur with increasing rapidity, such empirical data-mining approaches may be the most efficient means to help land managers find the best, most-actionable policies and decision strategies.

  11. Does Gene Tree Discordance Explain the Mismatch between Macroevolutionary Models and Empirical Patterns of Tree Shape and Branching Times?

    PubMed Central

    Stadler, Tanja; Degnan, James H.; Rosenberg, Noah A.

    2016-01-01

    Classic null models for speciation and extinction give rise to phylogenies that differ in distribution from empirical phylogenies. In particular, empirical phylogenies are less balanced and have branching times closer to the root compared to phylogenies predicted by common null models. This difference might be due to null models of the speciation and extinction process being too simplistic, or due to the empirical datasets not being representative of random phylogenies. A third possibility arises because phylogenetic reconstruction methods often infer gene trees rather than species trees, producing an incongruity between models that predict species tree patterns and empirical analyses that consider gene trees. We investigate the extent to which the difference between gene trees and species trees under a combined birth–death and multispecies coalescent model can explain the difference in empirical trees and birth–death species trees. We simulate gene trees embedded in simulated species trees and investigate their difference with respect to tree balance and branching times. We observe that the gene trees are less balanced and typically have branching times closer to the root than the species trees. Empirical trees from TreeBase are also less balanced than our simulated species trees, and model gene trees can explain an imbalance increase of up to 8% compared to species trees. However, we see a much larger imbalance increase in empirical trees, about 100%, meaning that additional features must also be causing imbalance in empirical trees. This simulation study highlights the necessity of revisiting the assumptions made in phylogenetic analyses, as these assumptions, such as equating the gene tree with the species tree, might lead to a biased conclusion. PMID:26968785

  12. The structure of carbon nanotubes formed of graphene layers L4-8, L5-7, L3-12, L4-6-12

    NASA Astrophysics Data System (ADS)

    Shapovalova, K. E.; Belenkov, E. A.

    2017-11-01

    Using the molecular mechanics method MM+, we calculate the geometrically optimized structure of nanotubes based on graphene layers. It was found that only the nanotubes based on the graphene layers L4-8, L5-7, L3-12, and L4-6-12 have a cylindrical form. Calculations of the sublimation energy, carried out using the semi-empirical quantum-mechanical method PM3, show that the energy increases with increasing nanotube diameter.

  13. Site classification for National Strong Motion Observation Network System (NSMONS) stations in China using an empirical H/V spectral ratio method

    NASA Astrophysics Data System (ADS)

    Ji, Kun; Ren, Yefei; Wen, Ruizhi

    2017-10-01

    Reliable site classification of the stations of the China National Strong Motion Observation Network System (NSMONS) has not yet been assigned because of a lack of borehole data. This study used an empirical horizontal-to-vertical (H/V) spectral ratio (hereafter, HVSR) site classification method to overcome this problem. First, according to their borehole data, stations selected from KiK-net in Japan were individually assigned a site class (CL-I, CL-II, or CL-III), as defined in the Chinese seismic code. Then, the mean HVSR curve for each site class was computed using strong motion recordings captured during the period 1996-2012. These curves were compared with those proposed by Zhao et al. (2006a) for four types of site classes (SC-I, SC-II, SC-III, and SC-IV) defined in the Japanese seismic code (JRA, 1980). It was found that an approximate range of the predominant period Tg could be identified by the predominant peak of the HVSR curve for the CL-I and SC-I sites, CL-II and SC-II sites, and CL-III and SC-III + SC-IV sites. Second, an empirical site classification method was proposed based on comprehensive consideration of the peak period, amplitude, and shape of the HVSR curve. The selected stations from KiK-net were classified using the proposed method. The results showed that the success rates of the proposed method in identifying CL-I, CL-II, and CL-III sites were 63%, 64%, and 58%, respectively. Finally, the HVSRs of 178 NSMONS stations were computed based on recordings from 2007 to 2015 and the sites classified using the proposed method. The mean HVSR curves were re-calculated for the three site classes and compared with those from KiK-net data. It was found that both the peak period and the amplitude were similar for the mean HVSR curves derived from the NSMONS classification results and the KiK-net borehole data, implying the effectiveness of the proposed method in identifying different site classes. The classification results show good agreement with site classes based on borehole data for 81 stations in China, which indicates that our site classification results are acceptable and that the proposed method is practicable.
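
    Computing an H/V spectral ratio for a single three-component record is straightforward; a minimal sketch is below (the smoothing choice and the geometric-mean combination of horizontals are common conventions, not necessarily the study's exact processing).

    ```python
    import numpy as np

    def hvsr(ns, ew, ud, fs, smooth=11):
        """Horizontal-to-vertical spectral ratio of one 3-component record."""
        def amp(x):
            a = np.abs(np.fft.rfft(x * np.hanning(len(x))))   # tapered spectrum
            return np.convolve(a, np.ones(smooth) / smooth, mode="same")
        h = np.sqrt(amp(ns) * amp(ew))          # geometric mean of horizontals
        freqs = np.fft.rfftfreq(len(ns), 1.0 / fs)
        return freqs, h / np.maximum(amp(ud), 1e-20)
    ```

    Mean HVSR curves per station are then summarized by peak period, amplitude, and overall shape, the three attributes the proposed classification rules operate on.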

  14. pLARmEB: integration of least angle regression with empirical Bayes for multilocus genome-wide association studies.

    PubMed

    Zhang, J; Feng, J-Y; Ni, Y-L; Wen, Y-J; Niu, Y; Tamba, C L; Yue, C; Song, Q; Zhang, Y-M

    2017-06-01

    Multilocus genome-wide association studies (GWAS) have become the state-of-the-art procedure to identify quantitative trait nucleotides (QTNs) associated with complex traits. However, implementation of multilocus models in GWAS is still difficult. In this study, we integrated least angle regression with empirical Bayes to perform multilocus GWAS under polygenic background control. We used an algorithm of model transformation that whitened the covariance matrix of the polygenic matrix K and the environmental noise. Markers on one chromosome were included simultaneously in a multilocus model, and least angle regression was used to select the most potentially associated single-nucleotide polymorphisms (SNPs), whereas the markers on the other chromosomes were used to calculate the kinship matrix as polygenic background control. The selected SNPs in the multilocus model were further tested for their association with the trait by empirical Bayes and a likelihood ratio test. We herein refer to this method as pLARmEB (polygenic-background-control-based least angle regression plus empirical Bayes). Results from simulation studies showed that pLARmEB was more powerful in QTN detection and more accurate in QTN effect estimation, had a lower false positive rate, and required less computing time than the Bayesian hierarchical generalized linear model, efficient mixed model association (EMMA), and least angle regression plus empirical Bayes. pLARmEB, multilocus random-SNP-effect mixed linear model, and fast multilocus random-SNP-effect EMMA methods had almost equal power of QTN detection in simulation experiments. However, only pLARmEB identified 48 previously reported genes for 7 flowering time-related traits in Arabidopsis thaliana.
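
    The least-angle-regression selection step can be sketched with scikit-learn's `lars_path`; the whitening transformation, the kinship computation, and the empirical Bayes plus likelihood ratio testing stages are not reproduced here, and the toy genotype data are purely illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import lars_path

    rng = np.random.default_rng(1)
    X = rng.integers(0, 3, size=(200, 500)).astype(float)  # toy genotypes (0/1/2)
    y = 0.8 * X[:, 42] + rng.normal(size=200)              # one causal SNP

    # lars_path ranks SNPs by the order in which they enter the LAR model.
    alphas, active, coefs = lars_path(X, y, method="lar", max_iter=20)
    candidate_snps = active    # most potentially associated SNPs, in entry order
    ```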

  15. Development and Validation of Cognitive Screening Instruments.

    ERIC Educational Resources Information Center

    Jarman, Ronald F.

    The author suggests that most research on the early detection of learning disabilities is characterized by an ineffective and atheoretical method of selecting and validating tasks. An alternative technique is proposed, based on a neurological theory of cognitive processes, whereby task analysis is a first step, with empirical analyses as…

  16. Misrepresenting Chinese Folk Happiness: A Critique of a Study

    ERIC Educational Resources Information Center

    Ip, Po-Keung

    2013-01-01

    Discourses on Chinese folk happiness are often based on anecdotal narratives or qualitative analysis. A recent study on Chinese folk happiness using qualitative method seems to provide some empirical findings beyond anecdotal evidence on Chinese folk happiness. This paper critically examines the study's constructed image of Chinese folk happiness,…

  17. Education Research as Analytic Claims: The Case of Mathematics

    ERIC Educational Resources Information Center

    Hyslop-Margison, Emery; Rogers, Matthew; Oladi, Soudeh

    2017-01-01

    Despite widespread calls for evidence-based research in education, this strategy has heretofore generated a surprisingly small return on the related financial investment. Some scholars have suggested that the situation follows from a mismatch between education as an assumed field of study and applied empirical research methods. This article's…

  18. Nondestructive test determines overload destruction characteristics of current limiter fuses

    NASA Technical Reports Server (NTRS)

    Swartz, G. A.

    1968-01-01

    Nondestructive test predicts the time required for current limiters to blow (open the circuit) when subjected to a given overload. The test method is based on an empirical relationship between the voltage rise across a current limiter over a fixed time interval and the time to blow.
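
    The record does not state the functional form of the relationship, so as a purely illustrative assumption the sketch below fits a power law between voltage rise and time-to-blow on calibration data (all values invented for illustration).

    ```python
    import numpy as np

    # Hypothetical calibration data from sacrificial limiters.
    dv = np.array([0.8, 1.1, 1.6, 2.3, 3.1])        # voltage rise (V), illustrative
    t_blow = np.array([40.0, 21.0, 9.5, 4.2, 2.1])  # time to blow (s), illustrative

    # Log-log linear fit of an assumed power law t = a * dV**b.
    b, log_a = np.polyfit(np.log(dv), np.log(t_blow), 1)
    predict_t_blow = lambda v: np.exp(log_a) * v**b  # nondestructive prediction
    ```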

  19. Training in Structured Diagnostic Assessment Using DSM-IV Criteria

    ERIC Educational Resources Information Center

    Ponniah, Kathryn; Weissman, Myrna M.; Bledsoe, Sarah E.; Verdeli, Helen; Gameroff, Marc J.; Mufson, Laura; Fitterling, Heidi; Wickramaratne, Priya

    2011-01-01

    Objectives: Determining a patient's psychiatric diagnosis is an important first step for the selection of empirically supported treatments and a critical component of evidence-based practice. Structured diagnostic assessment covers the range of psychiatric diagnoses and is usually more complete and accurate than unstructured assessment. Method: We…

  20. Theory, Method and Practice of Neuroscientific Findings in Science Education

    ERIC Educational Resources Information Center

    Liu, Chia-Ju; Chiang, Wen-Wei

    2014-01-01

    This report provides an overview of neuroscience research that is applicable for science educators. It first offers a brief analysis of empirical studies in educational neuroscience literature, followed by six science concept learning constructs based on the whole brain theory: gaining an understanding of brain function; pattern recognition and…

  1. Cluster Analysis of Minnesota School Districts. A Research Report.

    ERIC Educational Resources Information Center

    Cleary, James

    The term "cluster analysis" refers to a set of statistical methods that classify entities with similar profiles of scores on a number of measured dimensions, in order to create empirically based typologies. A 1980 Minnesota House Research Report employed cluster analysis to categorize school districts according to their relative mixtures…

  2. A review of propeller noise prediction methodology: 1919-1994

    NASA Technical Reports Server (NTRS)

    Metzger, F. Bruce

    1995-01-01

    This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available which vary markedly in complexity. Deficiencies in accuracy of methods in many cases may be related not to the methods themselves but to the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy to use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods against the data base; and (7) make the methods widely available and provide training in their use.

  3. An empirical method for approximating stream baseflow time series using groundwater table fluctuations

    NASA Astrophysics Data System (ADS)

    Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May

    2014-11-01

    Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP, so that minimal data are required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment, and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in ungauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
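
    The derivation itself used genetic programming; a sketch of the symbolic-regression step is below, assuming the gplearn package and entirely synthetic stand-in data (the paper's actual equation and inputs are not reproduced).

    ```python
    import numpy as np
    from gplearn.genetic import SymbolicRegressor  # assumes the gplearn package

    n = 365
    rng = np.random.default_rng(0)
    gw = 2.0 + np.cumsum(rng.normal(0, 0.01, n))      # synthetic water-table series
    X = np.column_stack([np.full(n, 0.5),             # min daily baseflow (stand-in)
                         np.full(n, 0.043),           # catchment area, km2 (stand-in)
                         gw])
    y = 0.5 + 3.0 * (gw - gw.min())                   # synthetic baseflow target

    gp = SymbolicRegressor(population_size=1000, generations=10,
                           function_set=("add", "sub", "mul", "div"),
                           parsimony_coefficient=0.01, random_state=0)
    gp.fit(X, y)
    print(gp._program)    # the evolved empirical equation
    ```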

  4. Empirical source noise prediction method with application to subsonic coaxial jet mixing noise

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.; Weir, D. S.

    1982-01-01

    A general empirical method, developed for source noise predictions, uses tensor splines to represent the dependence of the acoustic field on frequency and direction and Taylor's series to represent the dependence on source state parameters. The method is applied to prediction of mixing noise from subsonic circular and coaxial jets. A noise data base of 1/3-octave-band sound pressure levels (SPL's) from 540 tests was gathered from three countries: United States, United Kingdom, and France. The SPL's depend on seven variables: frequency, polar direction angle, and five source state parameters: inner and outer nozzle pressure ratios, inner and outer stream total temperatures, and nozzle area ratio. A least-squares seven-dimensional curve fit defines a table of constants which is used for the prediction method. The resulting prediction has a mean error of 0 dB and a standard deviation of 1.2 dB. The prediction method is used to search for a coaxial jet which has the greatest coaxial noise benefit as compared with an equivalent single jet. It is found that benefits of about 6 dB are possible.

  5. An alternative empirical likelihood method in missing response problems and causal inference.

    PubMed

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete-case data analysis may result in biases. A popular bias-reduction method is the inverse probability weighting approach proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
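
    For context, a minimal sketch of the augmented inverse probability weighting (AIPW) estimator that serves as the paper's benchmark (not the authors' empirical likelihood estimator) might look like the following, with synthetic missing-at-random data.

    # Sketch of the doubly robust AIPW estimator of a population mean under
    # missing-at-random responses. Synthetic data; not the authors' estimator.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=(n, 1))
    y_full = 2.0 + 1.5 * x[:, 0] + rng.normal(size=n)        # complete outcomes
    p_obs = 1 / (1 + np.exp(-(0.5 + x[:, 0])))               # response probability
    r = rng.uniform(size=n) < p_obs                          # missing-at-random indicator

    # Propensity score model (probability of being observed)
    ps = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]
    # Outcome regression fitted on complete cases only
    m_hat = LinearRegression().fit(x[r], y_full[r]).predict(x)

    y = np.where(r, y_full, 0.0)
    aipw = np.mean(r * y / ps - (r - ps) / ps * m_hat)       # doubly robust mean estimate
    print(aipw, y_full.mean())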

  6. 40 CFR Appendix C to Part 75 - Missing Data Estimation Procedures

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certification of a parametric, empirical, or process simulation method or model for calculating substitute data... available process simulation methods and models. 1.2 Petition Requirements Continuously monitor, determine... desulfurization, a corresponding empirical correlation or process simulation parametric method using appropriate...

  7. Dealing with contaminated datasets: An approach to classifier training

    NASA Astrophysics Data System (ADS)

    Homenda, Wladyslaw; Jastrzebska, Agnieszka; Rybnik, Mariusz

    2016-06-01

    The paper presents a novel approach to classification reinforced with a rejection mechanism. The method is based on a two-tier set of classifiers: the first layer classifies elements, while the second layer separates native elements from foreign ones in each distinguished class. The key novelty presented here is the rejection mechanism's training scheme, which follows the philosophy of "one-against-all-other-classes". The proposed method was tested in an empirical study of handwritten digit recognition.
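
    A minimal sketch of such a two-tier scheme follows, with the classifier choices (support vector machines) and the rejection threshold taken as assumptions rather than the authors' exact configuration.

    # Sketch of a two-tier classifier with rejection, in the spirit of
    # "one-against-all-other-classes": tier 1 assigns a class; tier 2 holds one
    # binary detector per class that accepts natives of that class and rejects
    # elements of all other classes.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    tier1 = SVC(gamma=0.001).fit(X, y)                       # layer 1: classification

    tier2 = {}
    for c in np.unique(y):
        native = (y == c).astype(int)                        # class c vs all other classes
        tier2[c] = SVC(gamma=0.001, probability=True).fit(X, native)

    def classify_with_rejection(sample, threshold=0.5):
        c = int(tier1.predict(sample.reshape(1, -1))[0])
        p_native = tier2[c].predict_proba(sample.reshape(1, -1))[0, 1]
        return c if p_native >= threshold else None          # None = rejected as foreign

    print(classify_with_rejection(X[0]))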

  8. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called the grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, the proposed model improves prediction accuracy and the hit rates of directional prediction. The proposed model involves three main steps: decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) to simplify the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; and integrating all predicted IMFs into the final ensemble prediction with another GWO-optimized SVR. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods, and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models in terms of prediction accuracy and hit rates of directional prediction.
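
    A stripped-down sketch of the decomposition-and-ensemble principle is given below, using PyEMD's CEEMDAN (a close relative of CEEMD) and fixed SVR settings in place of the GWO hyperparameter search; the series and all settings are illustrative assumptions.

    # Decomposition-and-ensemble sketch: split the series into IMFs, forecast
    # each component with an SVR, and sum the component forecasts.
    import numpy as np
    from PyEMD import CEEMDAN
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    t = np.arange(400)
    pm25 = 60 + 25*np.sin(2*np.pi*t/30) + rng.normal(0, 8, t.size)  # synthetic daily PM2.5

    imfs = CEEMDAN()(pm25)                       # each row is one extracted IMF
    residue = pm25 - imfs.sum(axis=0)            # slow trend not captured by the IMFs
    parts = np.vstack([imfs, residue])

    lag, forecast = 7, 0.0                       # forecast each component from its last 7 values
    for comp in parts:
        X = np.array([comp[i:i+lag] for i in range(len(comp) - lag)])
        y = comp[lag:]
        model = SVR(C=10.0, gamma='scale').fit(X, y)   # fixed settings stand in for GWO tuning
        forecast += model.predict(comp[-lag:].reshape(1, -1))[0]

    print('next-day PM2.5 forecast:', forecast)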

  9. The Contingency of Laws of Nature in Science and Theology

    NASA Astrophysics Data System (ADS)

    Jaeger, Lydia

    2010-10-01

    The belief that laws of nature are contingent played an important role in the emergence of the empirical method of modern physics. During the scientific revolution, this belief was based on the idea of voluntary creation. Taking up Peter Mittelstaedt’s work on laws of nature, this article explores several alternative answers which do not overtly make use of metaphysics: some laws are laws of mathematics; macroscopic laws can emerge from the interplay of numerous subsystems without any specific microscopic nomic structures (John Wheeler’s “law without law”); laws are the preconditions of scientific experience (Kant); laws are theoretical abstractions which only apply in very limited circumstances (Nancy Cartwright). Whereas Cartwright’s approach is in tension with modern scientific methodology, the first three strategies count as illuminating, though partial answers. It is important for the empirical method of modern physics that these three strategies, even when taken together, do not provide a complete explanation of the order of nature. Thus the question of why laws are valid is still relevant. In the concluding section, I argue that the traditional answer, based on voluntary creation, provides the right balance of contingency and coherence which is in harmony with modern scientific method.

  10. Robust multitask learning with three-dimensional empirical mode decomposition-based features for hyperspectral classification

    NASA Astrophysics Data System (ADS)

    He, Zhi; Liu, Lin

    2016-11-01

    Empirical mode decomposition (EMD) and its variants have recently been applied to hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information with traditional vector- or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose it into varying oscillations (i.e., 3D intrinsic mode functions, 3D-IMFs). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by the trace norm and the l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results on three benchmark data sets demonstrate the superiority of the proposed methods.

  11. Carbon Budget and its Dynamics over Northern Eurasia Forest Ecosystems

    NASA Astrophysics Data System (ADS)

    Shvidenko, Anatoly; Schepaschenko, Dmitry; Kraxner, Florian; Maksyutov, Shamil

    2016-04-01

    The presentation gives an overview of recent findings from assessments of the carbon cycle of Northern Eurasian forest ecosystems. From a methodological point of view, there is a growing recognition of the need for a Full and Verified Carbon Account (FCA), i.e., a reliable assessment of uncertainties for all modules and all stages of the FCA. The FCA is considered a fuzzy (underspecified) system that requires the systematic integration of the major methods of carbon cycle study (the landscape-ecosystem approach, LEA; process-based models; eddy covariance; and inverse modelling). The landscape-ecosystem approach 1) accumulates all relevant knowledge of landscapes and ecosystems; 2) provides a rigorous systems design for the account; 3) contains all relevant spatially distributed empirical and semi-empirical data and models; and 4) is implemented in the form of an Integrated Land Information System (ILIS). The ILIS includes a spatially and temporally explicit hybrid land cover and the corresponding attributive databases. The forest mask is produced using multi-sensor remote sensing data and geographically weighted regression, and is validated within the Geo-Wiki platform. Per-pixel parametrization of forest cover is based on special optimization algorithms that use all available knowledge and information sources (forest inventory data, various surveys, in situ observations, official forest management statistics, etc.). Major carbon fluxes within the LEA (NPP, heterotrophic respiration, disturbances, etc.) are estimated by fusing empirical data and aggregations with process-based elements through sets of regionally distributed models. Uncertainties within the LEA are assessed for each module and at each step of the account. Results of the LEA and the corresponding uncertainties are then harmonized and mutually constrained against independent outputs of the other methods using a Bayesian approach. This methodology has been applied to the carbon account of Russian forests for 2000-2012. The Net Ecosystem Carbon Budget (NECB) of Russian forests for this period was in the range of 0.5-0.7 Pg C yr-1, with a slight negative trend over the period due to the acceleration of disturbance regimes and the negative impacts of weather extremes (heat waves, etc.). Uncertainties of the FCA for individual years were estimated at about 25% (CI 0.9). Some models (e.g., the majority of DGVMs) do not describe certain permafrost processes satisfactorily, while ensembles of inverse models on average yield results close to the empirical assessments. The most important conclusion from this experience is that future improvement of our knowledge of carbon cycling in Northern Eurasian forests requires both the development of an integrated observing system as a unified information background and systematic methodological improvement of all the methods used to study the carbon cycle.

  12. Using Empirical Models for Communication Prediction of Spacecraft

    NASA Technical Reports Server (NTRS)

    Quasny, Todd

    2015-01-01

    A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and the vehicle itself. For example, the behavior of a radio-frequency link during high-energy solar events, or as the signal passes through a solar array of the spacecraft, can be difficult to model and thus to predict. This presentation covers the use of empirical methods for communication link prediction, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of sight to satellites, position of the sun, and other dynamic structures on the outside of the ISS. The image on the left below shows the current analytical model of one of the communication systems on the ISS; the image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.

  13. Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting

    NASA Astrophysics Data System (ADS)

    Zhang, Ningning; Lin, Aijing; Shang, Pengjian

    2017-07-01

    In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbors (KNN) algorithms have increasingly wide application in prediction across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, as a result of mode mixing, it cannot reveal the characteristic information of the signal with much accuracy. Ensemble empirical mode decomposition (EEMD), an improved variant of EMD, resolves this weakness by adding white noise to the original data. With EEMD, components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the proposed EEMD-MKNN model has high predictive precision for short-term forecasting. Moreover, we extend this methodology to the two-dimensional case to forecast the closing price and high price of four stock indices (NAS, S&P500, DJI and STI) at the same time. The results indicate that the proposed EEMD-MKNN model has higher forecast precision than the EMD-KNN and KNN methods and ARIMA.
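
    A one-dimensional sketch of the EEMD-KNN idea (the paper's multidimensional MKNN extension is not reproduced) might look as follows; the synthetic series, lag length and neighbor count are assumptions.

    # EEMD-KNN sketch: decompose the price series with EEMD, forecast each
    # component with a k-nearest-neighbor regressor on lagged values, and sum.
    import numpy as np
    from PyEMD import EEMD
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    price = 100 + np.cumsum(rng.normal(0, 1, 500))    # synthetic closing-price series

    imfs = EEMD(trials=50)(price)                     # EEMD with 50 noise realizations
    residue = price - imfs.sum(axis=0)
    parts = np.vstack([imfs, residue])

    lag, forecast = 5, 0.0
    for comp in parts:
        X = np.array([comp[i:i+lag] for i in range(len(comp) - lag)])
        y = comp[lag:]
        knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)
        forecast += knn.predict(comp[-lag:].reshape(1, -1))[0]

    print('one-step-ahead closing-price forecast:', forecast)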

  14. An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers

    USGS Publications Warehouse

    Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.

    2016-01-01

    Here we present a new empirical method to estimate the SCL for marine-terminating glaciers using high-resolution observations. We use the empirically-determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.

  15. Component isolation for multi-component signal analysis using a non-parametric gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover components of such signals. The former aims to remove high-order frequency modulation (FM) such that the latter is able to infer demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised demodulation method with singular-value decomposition, the parametric time-frequency analysis method with filtering, and the empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.

  16. Aircraft directional stability and vertical tail design: A review of semi-empirical methods

    NASA Astrophysics Data System (ADS)

    Ciliberti, Danilo; Della Vecchia, Pierluigi; Nicolosi, Fabrizio; De Marco, Agostino

    2017-11-01

    Aircraft directional stability and control are related to vertical tail design. The safety, performance, and flight qualities of an aircraft also depend on a correct empennage sizing. Specifically, the vertical tail is responsible for the aircraft yaw stability and control. If these characteristics are not well balanced, the entire aircraft design may fail. Stability and control are often evaluated, especially in the preliminary design phase, with semi-empirical methods, which are based on the results of experimental investigations performed in the past decades, and occasionally are merged with data provided by theoretical assumptions. This paper reviews the standard semi-empirical methods usually applied in the estimation of airplane directional stability derivatives in preliminary design, highlighting the advantages and drawbacks of these approaches that were developed from wind tunnel tests performed mainly on fighter airplane configurations of the first decades of the past century, and discussing their applicability on current transport aircraft configurations. Recent investigations made by the authors have shown the limit of these methods, proving the existence of aerodynamic interference effects in sideslip conditions which are not adequately considered in classical formulations. The article continues with a concise review of the numerical methods for aerodynamics and their applicability in aircraft design, highlighting how Reynolds-Averaged Navier-Stokes (RANS) solvers are well-suited to attain reliable results in attached flow conditions, with reasonable computational times. From the results of RANS simulations on a modular model of a representative regional turboprop airplane layout, the authors have developed a modern method to evaluate the vertical tail and fuselage contributions to aircraft directional stability. The investigation on the modular model has permitted an effective analysis of the aerodynamic interference effects by moving, changing, and expanding the available airplane components. Wind tunnel tests over a wide range of airplane configurations have been used to validate the numerical approach. The comparison between the proposed method and the standard semi-empirical methods available in literature proves the reliability of the innovative approach, according to the available experimental data collected in the wind tunnel test campaign.

  17. Predicting the effects of magnesium oxide nanoparticles and temperature on the thermal conductivity of water using artificial neural network and experimental data

    NASA Astrophysics Data System (ADS)

    Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid

    2017-03-01

    The current paper first presents an empirical correlation, based on experimental results, for estimating the thermal conductivity enhancement of MgO-water nanofluid using a curve-fitting method. Then, artificial neural networks (ANNs) with various numbers of neurons were assessed, with temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
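
    The following sketch reproduces the general setup (a network with one hidden layer of 7 neurons mapping temperature and volume fraction to conductivity enhancement) using scikit-learn; the training data and activation choice are synthetic assumptions, not the measured values.

    # ANN sketch: two inputs (temperature, MgO volume fraction), one hidden
    # layer of 7 neurons, one output (thermal conductivity enhancement).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    T = rng.uniform(20, 60, 200)            # temperature (deg C)
    phi = rng.uniform(0.0, 0.03, 200)       # MgO volume fraction
    enhancement = 1 + 8*phi + 0.1*T*phi + rng.normal(0, 0.005, 200)  # synthetic target

    X = StandardScaler().fit_transform(np.column_stack([T, phi]))
    ann = MLPRegressor(hidden_layer_sizes=(7,), activation='tanh',
                       max_iter=5000, random_state=0).fit(X, enhancement)
    print('R^2 on training data:', ann.score(X, enhancement))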

  18. Nonparametric spirometry reference values for Hispanic Americans.

    PubMed

    Glenn, Nancy L; Brown, Vanessa M

    2011-02-01

    Recent literature cites ethnic origin as a major factor in developing pulmonary function reference values. Extensive studies have established reference values for European and African Americans, but not for Hispanic Americans. The Third National Health and Nutrition Examination Survey defines Hispanic as individuals of Spanish-speaking cultures. While no group was excluded from the target population, sample size requirements only allowed inclusion of individuals who identified themselves as Mexican Americans. This research constructs nonparametric reference value confidence intervals for Hispanic American pulmonary function. The method is applicable to all ethnicities. We use empirical likelihood confidence intervals to establish normal ranges for reference values. Their major advantage: they are model-free, yet share the asymptotic properties of model-based methods. Statistical comparisons indicate that empirical likelihood interval lengths are comparable to normal-theory intervals. Power and efficiency studies agree with previously published theoretical results.
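
    A minimal sketch of the empirical likelihood machinery for a mean (Owen's profile likelihood ratio), the model-free construction underlying such intervals, is shown below with synthetic data in place of spirometry records.

    # Empirical likelihood 95% confidence interval for a population mean.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import chi2

    def el_statistic(x, mu):
        """-2 log empirical likelihood ratio for the mean at mu."""
        z = x - mu
        if z.min() >= 0 or z.max() <= 0:       # mu outside the convex hull of the data
            return np.inf
        lo, hi = -1/z.max() + 1e-10, -1/z.min() - 1e-10
        lam = brentq(lambda l: np.sum(z / (1 + l*z)), lo, hi)   # Lagrange multiplier
        return 2 * np.sum(np.log1p(lam * z))

    x = np.random.default_rng(0).lognormal(0.0, 0.5, 80)   # skewed synthetic sample
    crit = chi2.ppf(0.95, df=1)
    grid = np.linspace(x.min(), x.max(), 2000)
    inside = [mu for mu in grid if el_statistic(x, mu) <= crit]
    print('95%% EL interval for the mean: (%.3f, %.3f)' % (inside[0], inside[-1]))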

  19. Application of spectral methods for high-frequency financial data to quantifying states of market participants

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2008-06-01

    An empirical analysis of the foreign exchange market is conducted based on methods to quantify similarities among multi-dimensional time series with spectral distances introduced in [A.-H. Sato, Physica A 382 (2007) 258-270]. It is found that the similarities among currency pairs fluctuate with the rotation of the earth, and that the similarities among best quotation rates are associated with those among quotation frequencies. Furthermore, it is shown both empirically and numerically that the Jensen-Shannon spectral divergence is proportional to a mean of the Kullback-Leibler spectral distance. Numerical simulation confirms that these spectral distances are connected with the distributions of behavioural parameters of the market participants. It is concluded that spectral distances of representative quantities of financial markets are related to the diversity of behavioural parameters of the market participants.

  20. Shear velocity criterion for incipient motion of sediment

    USGS Publications Warehouse

    Simoes, Francisco J.

    2014-01-01

    The prediction of incipient motion has had great importance to the theory of sediment transport. The most commonly used methods are based on the concept of critical shear stress and employ an approach similar, or identical, to the Shields diagram. An alternative method that uses the movability number, defined as the ratio of the shear velocity to the particle’s settling velocity, was employed in this study. A large amount of experimental data were used to develop an empirical incipient motion criterion based on the movability number. It is shown that this approach can provide a simple and accurate method of computing the threshold condition for sediment motion.
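
    A minimal sketch of a movability-number check is given below; the settling-velocity formula (Ferguson and Church, 2004) and the critical value are illustrative assumptions, not the empirical curve fitted in the paper.

    # Movability-number criterion sketch: motion is predicted when the ratio of
    # shear velocity to settling velocity exceeds a critical value.
    import numpy as np

    g, rho, rho_s, nu = 9.81, 1000.0, 2650.0, 1.0e-6     # SI units; quartz grains in water

    def shear_velocity(depth_m, slope):
        return np.sqrt(g * depth_m * slope)              # u* = sqrt(g h S) for uniform flow

    def settling_velocity(d_m):
        # Ferguson & Church (2004) explicit formula for natural grains
        R = (rho_s - rho) / rho
        return R * g * d_m**2 / (18*nu + np.sqrt(0.75 * R * g * d_m**3))

    k_crit = 1.0        # placeholder threshold; the paper fits the critical movability
                        # number empirically rather than using a single constant
    d = 0.5e-3          # 0.5 mm sand
    k = shear_velocity(0.3, 0.001) / settling_velocity(d)
    print('movability number k = %.2f -> %s' % (k, 'motion' if k > k_crit else 'no motion'))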

  1. Short-Term fo F2 Forecast: Present Day State of Art

    NASA Astrophysics Data System (ADS)

    Mikhailov, A. V.; Depuev, V. H.; Depueva, A. H.

    An analysis of the F2-layer short-term forecast problem is presented. Both objective and methodological problems currently prevent the issuing of reliable F2-layer forecasts. An empirical approach based on statistical methods may be recommended for practical use. A forecast method based on a new aeronomic index (a proxy), AI, has been proposed and tested on 64 selected severe storm events. The method provides acceptable prediction accuracy for both strongly disturbed and quiet conditions. The problems with the prediction of F2-layer quiet-time disturbances, as well as some other unsolved problems, are discussed.

  2. An applicable method for efficiency estimation of operating tray distillation columns and its comparison with the methods utilized in HYSYS and Aspen Plus

    NASA Astrophysics Data System (ADS)

    Sadeghifar, Hamidreza

    2015-10-01

    Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data, and may therefore not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns equipped with trays. The method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which is perhaps its most important advantage over the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It is emphasized that estimating the efficiency of an operating column must be distinguished from that of a column being designed.

  3. Concentration Dependences of the Surface Tension and Density of Solutions of Acetone-Ethanol-Water Systems at 293 K

    NASA Astrophysics Data System (ADS)

    Dadashev, R. Kh.; Dzhambulatov, R. S.; Mezhidov, V. Kh.; Elimkhanov, D. Z.

    2018-05-01

    Concentration dependences of the surface tension and density of solutions of three-component acetone-ethanol-water systems and the bounding binary systems at 273 K are studied. The molar volume, adsorption, and composition of surface layers are calculated. Experimental data and calculations show that three-component solutions are close to ideal ones. The surface tensions of these solutions are calculated using semi-empirical and theoretical equations. Theoretical equations qualitatively convey the concentration dependence of surface tension. A semi-empirical method based on the Köhler equation allows us to predict the concentration dependence of surface tension within the experimental error.

  4. A simple method for the extraction and identification of light density microplastics from soil.

    PubMed

    Zhang, Shaoliang; Yang, Xiaomei; Gertsen, Hennie; Peters, Piet; Salánki, Tamás; Geissen, Violette

    2018-03-01

    This article introduces a simple and cost-saving method developed to extract, distinguish and quantify light density microplastics of polyethylene (PE) and polypropylene (PP) in soil. A floatation method using distilled water was used to extract the light density microplastics from soil samples. Microplastics and impurities were identified using a heating method (3-5 s at 130°C). The number and size of particles were determined using a camera (Leica DFC 425) connected to a microscope (Leica Wild M3C, Type S, simple light, 6.4×). Quantification of the microplastics was conducted using a developed model. Results showed that the floatation method was effective in extracting microplastics from soils, with recovery rates of approximately 90%. After being exposed to heat, the microplastics in the soil samples melted and were transformed into circular transparent particles, while other impurities, such as organic matter and silicates, were not changed by the heat. Regression analysis of microplastic weight against particle volume (calculated based on ImageJ software analysis) after heating showed the best fit (y = 1.14x + 0.46, R² = 99%, p < 0.001). Recovery rates based on the empirical model method were >80%. Results from field samples collected from north-western China prove that our method of repetitive floatation and heating can be used to extract, distinguish and quantify light density polyethylene microplastics in soils. Microplastic mass can be evaluated using the empirical model. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Effective crop evapotranspiration measurement using time-domain reflectometry technique in a sub-humid region

    NASA Astrophysics Data System (ADS)

    Srivastava, R. K.; Panda, R. K.; Halder, Debjani

    2017-08-01

    The primary objective of this study was to evaluate the performance of the time-domain reflectometry (TDR) technique for daily evapotranspiration estimation of peanut and maize crops in a sub-humid region. Four independent methods were used to estimate crop evapotranspiration (ETc), namely, the soil water balance budgeting approach, the energy balance (Bowen ratio) approach, the empirical methods approach, and the pan evaporation method. The soil water balance budgeting approach utilized soil moisture measurements obtained by the gravimetric and TDR methods. The empirical evapotranspiration methods, namely the combination approach (FAO-56 Penman-Monteith and Penman), the temperature-based approach (Hargreaves-Samani), and the radiation-based approach (Priestley-Taylor, Turc, Abtew), were used to estimate the reference evapotranspiration (ET0). The daily ETc determined by the FAO-56 Penman-Monteith, Priestley-Taylor, Turc, pan evaporation, and Bowen ratio methods was found to be on par with the ET values derived from the soil water balance budget, while the Abtew, Penman, and Hargreaves-Samani methods were not found to be ideal for the determination of ETc. The study illustrates the in situ applicability of the TDR method and helps a user choose the best approach for optimum water consumption for a given crop in a sub-humid region. The study suggests that the FAO-56 Penman-Monteith, Turc, and Priestley-Taylor methods can be used for the determination of crop ETc using TDR, in comparison to the soil water balance budget.
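
    As an example of one of the radiation-based estimates compared in the study, a sketch of the Priestley-Taylor equation with common FAO-56 constants follows; the input values are illustrative.

    # Priestley-Taylor reference evapotranspiration (mm/day).
    import math

    def priestley_taylor_et0(t_mean_c, rn_mj, g_mj=0.0, alpha=1.26):
        """t_mean_c: mean air temperature (deg C); rn_mj: net radiation
        (MJ m-2 day-1); g_mj: soil heat flux, ~0 at daily scale."""
        es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))   # kPa
        delta = 4098 * es / (t_mean_c + 237.3)**2    # slope of vapor curve (kPa/degC)
        gamma = 0.066                                # psychrometric constant (kPa/degC)
        lam = 2.45                                   # latent heat of vaporization (MJ/kg)
        return alpha * (delta / (delta + gamma)) * (rn_mj - g_mj) / lam

    print(priestley_taylor_et0(t_mean_c=28.0, rn_mj=14.0))   # roughly 5 mm/day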

  6. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    NASA Astrophysics Data System (ADS)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under the fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examining differences in event size and the frequency content of the seismograms, a rigorous justification is often lacking. In practice, a small event may have a finite duration, so that the retrieved RSTF is interpreted as the large-event STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of the smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply Tikhonov smoothing to obtain a single-pulse RSTF, but its duration depends on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix depends on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the rank-deficiency makes it improbable to solve for both STFs. To solve for the larger STF, we need to assume the shape of the small STF to be known a priori. Thus, the reliability of the estimated large STF depends on the difference between the assumed and true shapes of the small STF. We will show how the reliability varies with realistic scenarios.
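
    A sketch of the style of time-domain matrix deconvolution with Tikhonov regularization discussed above (without the positivity constraint, and with synthetic signals) might look like this:

    # Time-domain deconvolution: build a convolution matrix from the small-event
    # record and solve regularized normal equations for the RSTF. The weighting
    # alpha illustrates the subjective smoothing choice noted in the abstract.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    stf_large = np.maximum(0, 1 - np.abs(np.arange(n) - 20) / 12.0)   # triangular STF
    green = rng.normal(0, 1, n) * np.exp(-np.arange(n) / 30.0)        # stand-in Green's fn

    big = np.convolve(green, stf_large)[:n] + rng.normal(0, 0.01, n)  # large-event record
    small = green + rng.normal(0, 0.01, n)       # small event with delta-like STF

    # Convolution matrix G so that big ~= G @ rstf
    G = np.zeros((n, n))
    for j in range(n):
        G[j:, j] = small[:n - j]

    alpha = 1.0                                  # Tikhonov weight (subjective)
    rstf = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ big)
    print('recovered duration (samples above 10%% of peak): %d'
          % int(np.sum(rstf > 0.1 * rstf.max())))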

  7. A protocol for the creation of useful geometric shape metrics illustrated with a newly derived geometric measure of leaf circularity.

    PubMed

    Krieger, Jonathan D

    2014-08-01

    I present a protocol for creating geometric leaf shape metrics to facilitate widespread application of geometric morphometric methods to leaf shape measurement. • To quantify circularity, I created a novel shape metric in the form of the vector between a circle and a line, termed geometric circularity. Using leaves from 17 fern taxa, I performed a coordinate-point eigenshape analysis to empirically identify patterns of shape covariation. I then compared the geometric circularity metric to the empirically derived shape space and the standard metric, circularity shape factor. • The geometric circularity metric was consistent with empirical patterns of shape covariation and appeared more biologically meaningful than the standard approach, the circularity shape factor. The protocol described here has the potential to make geometric morphometrics more accessible to plant biologists by generalizing the approach to developing synthetic shape metrics based on classic, qualitative shape descriptors.

  8. Semi-Empirical Prediction of Aircraft Low-Speed Aerodynamic Characteristics

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.

    2015-01-01

    This paper lays out a comprehensive methodology for computing a low-speed, high-lift polar, without requiring additional details about the aircraft design beyond what is typically available at the conceptual design stage. Introducing low-order, physics-based aerodynamic analyses allows the methodology to be more applicable to unconventional aircraft concepts than traditional, fully-empirical methods. The methodology uses empirical relationships for flap lift effectiveness, chord extension, drag-coefficient increment and maximum lift coefficient of various types of flap systems as a function of flap deflection, and combines these increments with the characteristics of the unflapped airfoils. Once the aerodynamic characteristics of the flapped sections are known, a vortex-lattice analysis calculates the three-dimensional lift, drag and moment coefficients of the whole aircraft configuration. This paper details the results of two validation cases: a supercritical airfoil model with several types of flaps; and a 12-foot, full-span aircraft model with slats and double-slotted flaps.

  9. Forecasting stochastic neural network based on financial empirical mode decomposition.

    PubMed

    Wang, Jie; Wang, Jun

    2017-06-01

    In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to account for the weight of the occurrence time of the historical data. Linear regression analysis demonstrates the predictive ability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing the predicted results with those of traditional models. Moreover, a new evaluation method (the q-order multiscale complexity invariant distance) is applied to measure the predicted results for real stock index series, and the empirical results show that the proposed model indeed displays good performance in forecasting stock market fluctuations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Method to improve accuracy of positioning object by eLoran system with applying standard Kalman filter

    NASA Astrophysics Data System (ADS)

    Grunin, A. P.; Kalinov, G. A.; Bolokhovtsev, A. V.; Sai, S. V.

    2018-05-01

    This article reports on a novel method to improve the accuracy of positioning an object with a low-frequency hyperbolic radio navigation system such as eLoran. The method is based on the application of the standard Kalman filter. The effects of the filter parameters and of the type of movement on the accuracy of the vehicle position estimate are investigated. The accuracy of the method was evaluated by separating data from the semi-empirical movement model into different types of movement.
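
    A minimal sketch of a standard Kalman filter smoothing 1-D position fixes under a constant-velocity model is given below; the noise levels and the motion model are assumptions for illustration, not the paper's tuned configuration.

    # Standard Kalman filter over noisy 1-D position fixes.
    import numpy as np

    dt = 1.0
    F = np.array([[1, dt], [0, 1]])          # state transition: [position, velocity]
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = 0.01 * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])   # process noise
    R = np.array([[25.0]])                   # measurement noise variance (m^2)

    x = np.zeros((2, 1))                     # initial state
    P = np.eye(2) * 100.0                    # initial covariance

    rng = np.random.default_rng(0)
    truth = 2.0 * np.arange(60)              # target moving at 2 m/s
    fixes = truth + rng.normal(0, 5.0, 60)   # raw eLoran-like position fixes

    for z in fixes:
        x, P = F @ x, F @ P @ F.T + Q                    # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)            # update
        P = (np.eye(2) - K @ H) @ P

    print('final position estimate: %.1f m (truth %.1f m)' % (x[0, 0], truth[-1]))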

  11. North Dakota implementation of mechanistic-empirical pavement design guide (MEPDG).

    DOT National Transportation Integrated Search

    2014-12-01

    North Dakota currently designs roads based on the AASHTO Design Guide procedure, which is based on : the empirical findings of the AASHTO Road Test of the late 1950s. However, limitations of the current : empirical approach have prompted AASHTO to mo...

  12. The Effect of Poverty, Gender Exclusion, and Child Labor on Out-of-School Rates for Female Children

    ERIC Educational Resources Information Center

    Laborda Castillo, Leopoldo; Sotelsek Salem, Daniel; Sarr, Leopold Remi

    2014-01-01

    In this article, the authors analyze the effect of poverty, social exclusion, and child labor on out-of-school rates for female children. This empirical study is based on a dynamic panel model for a sample of 216 countries over the period 1970 to 2010. Results based on the generalized method of moments (GMM) of Arellano and Bond (1991) and the…

  13. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal

    2018-06-01

    Among the different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. However, the empirical equations proposed in this context generally produce weak simulations unless a local calibration is used to improve their performance, which is a crucial drawback where local data for calibration are scarce. The application of heuristic methods can therefore be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. Given that wind speed records usually have higher variability than the other meteorological parameters, coupling a wavelet transform with the heuristic models is necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology is proposed for the first time to improve the performance accuracy of mass transfer-based ETo estimation approaches, using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as of the empirical equations to a great extent.
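
    A bare-bones sketch of the coupled wavelet-heuristic idea, using PyWavelets and scikit-learn's random forest, follows; the data, wavelet, window length and forest size are all illustrative assumptions.

    # Wavelet-random forest sketch: decompose windows of the temperature and
    # wind speed series with a discrete wavelet transform and feed the
    # concatenated coefficients to a random forest predicting ETo.
    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    days = 512
    temp = 20 + 8*np.sin(2*np.pi*np.arange(days)/365) + rng.normal(0, 2, days)
    wind = np.abs(rng.normal(2.5, 1.2, days))            # noisy, high-variability input
    eto = 0.3*temp + 0.8*wind + rng.normal(0, 0.5, days) # synthetic target

    window = 16
    X, y = [], []
    for i in range(days - window):
        seg_t = pywt.wavedec(temp[i:i+window], 'db4', level=2)
        seg_w = pywt.wavedec(wind[i:i+window], 'db4', level=2)
        X.append(np.concatenate(seg_t + seg_w))          # wavelet-domain features
        y.append(eto[i + window])
    X, y = np.array(X), np.array(y)

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:400], y[:400])
    print('holdout R^2:', rf.score(X[400:], y[400:]))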

  14. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    PubMed

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method, is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are separately analyzed. A nonparametric hypothesis test, based on Bootstrap resampling, is used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional spectral method alone. Regarding sensitivity, the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Tracer kinetics of forearm endothelial function: comparison of an empirical method and a quantitative modeling technique.

    PubMed

    Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L

    2007-01-01

    Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), EWUR (Elbow-to-Wrist Uptake Ratio) and EWRUR (Elbow-to-Wrist Relative Uptake Ratio). However, the modeling of FEF requires more robust methods. The present study was designed to compare an empirical method with a quantitative modeling technique to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time-activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Although correlational analyses suggested a good correlation between the methods for RUR (r=.90) and EWUR (r=.79), but not EWRUR (r=.34), Bland-Altman plots found poor agreement between the methods for all three parameters. These results indicate that there is a large discrepancy between the empirical and compartmental methods for FEF. Further work is needed to establish the physiological and mathematical validity of the two modeling methods.
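
    A short sketch of the Bland-Altman agreement analysis used to compare the two methods (with synthetic RUR values standing in for the study data) follows; it also illustrates how a high correlation can coexist with poor agreement.

    # Bland-Altman analysis: bias and 95% limits of agreement between methods.
    import numpy as np

    rng = np.random.default_rng(0)
    rur_empirical = rng.normal(1.5, 0.3, 40)
    rur_model = rur_empirical + rng.normal(0.05, 0.15, 40)   # correlated but biased

    mean_pair = (rur_empirical + rur_model) / 2
    diff_pair = rur_empirical - rur_model

    bias = diff_pair.mean()
    loa = 1.96 * diff_pair.std(ddof=1)                       # limits of agreement
    print('bias = %.3f, 95%% limits of agreement = (%.3f, %.3f)'
          % (bias, bias - loa, bias + loa))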

  16. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.

  17. Non-invasive optical detection of esophagus cancer based on urine surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Huang, Shaohua; Wang, Lan; Chen, Weiwei; Lin, Duo; Huang, Lingling; Wu, Shanshan; Feng, Shangyuan; Chen, Rong

    2014-09-01

    A surface-enhanced Raman spectroscopy (SERS) approach was utilized for urine biochemical analysis with the aim of developing a label-free and non-invasive optical diagnostic method for esophagus cancer detection. SERS spectra were acquired from 31 normal urine samples and 47 malignant esophagus cancer (EC) urine samples. Tentative assignments of the urine SERS bands demonstrated esophagus cancer-specific changes, including an increase in the relative amount of urea and a decrease in the percentage of uric acid in normal urine compared with EC urine. An empirical algorithm integrated with linear discriminant analysis (LDA) was employed to identify important urine SERS bands for differentiation between healthy subjects and EC patients. The empirical diagnostic approach based on the ratios of the SERS peak intensities at 527 to 1002 cm-1 and 725 to 1002 cm-1, coupled with LDA, yielded a diagnostic sensitivity of 72.3% and a specificity of 96.8%. The area under the receiver operating characteristic (ROC) curve was 0.954, further confirming the performance of the diagnostic algorithm based on the SERS peak intensity ratios combined with LDA analysis. This work demonstrated that urine SERS spectra combined with an empirical algorithm have potential for the noninvasive diagnosis of esophagus cancer.

  18. Evaluation of sea-surface photosynthetically available radiation algorithms under various sky conditions and solar elevations.

    PubMed

    Somayajula, Srikanth Ayyala; Devred, Emmanuel; Bélanger, Simon; Antoine, David; Vellucci, V; Babin, Marcel

    2018-04-20

    In this study, we report on the performance of satellite-based photosynthetically available radiation (PAR) algorithms used in published oceanic primary production models. The performance of these algorithms was evaluated using buoy observations under clear and cloudy skies, and for the particular case of low sun angles typically encountered at high latitudes or at moderate latitudes in winter. The PAR models consisted of (i) the standard one from the NASA-Ocean Biology Processing Group (OBPG), (ii) the Gregg and Carder (GC) semi-analytical clear-sky model, and (iii) look-up-tables based on the Santa Barbara DISORT atmospheric radiative transfer (SBDART) model. Various combinations of atmospheric inputs, empirical cloud corrections, and semi-analytical irradiance models yielded a total of 13 (11 + 2 developed in this study) different PAR products, which were compared with in situ measurements collected at high frequency (15 min) at a buoy site in the Mediterranean Sea (the "BOUée pour l'acquiSition d'une Série Optique à Long termE," or, "BOUSSOLE" site). An objective ranking method applied to the algorithm results indicated that seven PAR products out of 13 were well in agreement with the in situ measurements. Specifically, the OBPG method showed the best overall performance with a root mean square difference (RMSD) (bias) of 19.7% (6.6%) and 10% (6.3%) followed by the look-up-table method with a RMSD (bias) of 25.5% (6.8%) and 9.6% (2.6%) at daily and monthly scales, respectively. Among the four methods based on clear-sky PAR empirically corrected for cloud cover, the Dobson and Smith method consistently underestimated daily PAR while the Budyko formulation overestimated daily PAR. Empirically cloud-corrected methods using cloud fraction (CF) performed better under quasi-clear skies (CF<0.3), with an RMSD (bias) of 9.7%-14.8% (3.6%-11.3%), than under partially clear to cloudy skies (0.3<CF<0.7). Under overcast skies (CF>0.7), however, all methods showed larger RMSD differences (biases), ranging between 32% and 80.6% (-54.5% to 8.7%). Finally, three methods tested for low sun elevations revealed systematic overestimation, and one method showed a systematic underestimation of daily PAR, with relative RMSDs as large as 50% under all sky conditions. Under partially clear to overcast conditions all the methods underestimated PAR. Model uncertainties predominantly depend on which cloud products were used.

  19. Empirical mode decomposition of the ECG signal for noise removal

    NASA Astrophysics Data System (ADS)

    Khan, Jesmin; Bhuiyan, Sharif; Murphy, Gregory; Alam, Mohammad

    2011-04-01

    Electrocardiography is a diagnostic procedure for the detection and diagnosis of heart abnormalities. The electrocardiogram (ECG) signal contains important information that is utilized by physicians for the diagnosis and analysis of heart diseases. A good-quality ECG signal therefore plays a vital role in the interpretation and identification of pathological, anatomical and physiological aspects of the whole cardiac muscle. However, ECG signals are corrupted by noise, which severely limits the utility of the recorded signal for medical evaluation. The most common noise present in the ECG signal is high-frequency noise caused by the forces acting on the electrodes. In this paper, we propose a new ECG denoising method based on empirical mode decomposition (EMD). The proposed method is able to enhance the ECG signal by removing the noise with minimum signal distortion. Simulation is performed on the MIT-BIH database to verify the efficacy of the proposed algorithm. Experiments show that the presented method offers very good results in removing noise from the ECG signal.
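
    A minimal sketch of this style of EMD-based denoising, with a synthetic ECG-like trace in place of MIT-BIH records and the number of discarded IMFs treated as a tuning assumption:

    # EMD denoising sketch: decompose the noisy trace, drop the first
    # (highest-frequency) IMFs that carry most of the electrode noise, and
    # reconstruct from the remaining IMFs.
    import numpy as np
    from PyEMD import EMD

    rng = np.random.default_rng(0)
    t = np.linspace(0, 4, 1440)
    ecg_like = np.sin(2*np.pi*1.2*t) + 0.6*np.sin(2*np.pi*2.4*t)   # crude periodic proxy
    noisy = ecg_like + rng.normal(0, 0.3, t.size)                  # high-frequency noise

    imfs = EMD()(noisy)
    drop = 2                                    # discard the two fastest IMFs (tuning choice)
    denoised = imfs[drop:].sum(axis=0)

    mse_before = np.mean((noisy - ecg_like)**2)
    mse_after = np.mean((denoised - ecg_like)**2)
    print('MSE before %.4f -> after %.4f' % (mse_before, mse_after))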

  20. A method and data for video monitor sizing. [human CRT viewing requirements

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.; Guerin, E. G.

    1976-01-01

    The paper outlines an approach that uses analytical methods and empirical data to determine monitor size constraints based on the human operator's CRT viewing requirements, in a context where panel space and volume considerations for the Space Shuttle aft cabin constrain the size of the monitor to be used. Two cases are examined: remote scene imaging and alphanumeric character display. The central parameter used to constrain monitor size is the ratio M/L, where M is the monitor dimension and L the viewing distance. The study is restricted largely to 525-line video systems having an SNR of 32 dB and a bandwidth of 4.5 MHz; degradation in these parameters would require changes in the empirically determined visual angle constants presented. The data and methods described are considered to apply to cases where operators are required to view, via TV, target objects which are well differentiated from a relatively sparse background. It is also necessary to identify the critical target dimensions and cues.

  1. The Removal of EOG Artifacts From EEG Signals Using Independent Component Analysis and Multivariate Empirical Mode Decomposition.

    PubMed

    Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo

    2016-09-01

    The recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, by using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), an ICA-based MEMD method was proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals were decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components were then extracted by reconstructing the MIMFs corresponding to EOAs. After performing ICA on the EOG-related signals, the EOG-linked independent components were distinguished and rejected. Finally, the clean EEG signals were reconstructed by applying the inverse transforms of ICA and MEMD. The results on simulated and real data suggested that the proposed method could successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. Compared with other existing techniques, the proposed method achieved a clear improvement in terms of the increase in signal-to-noise ratio and the decrease in mean square error after removing EOAs.
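
    A sketch of the ICA stage alone (the MEMD stage is omitted, as MEMD is not available in standard Python libraries) might look like the following, with a synthetic blink reference used to identify the EOG-linked component:

    # ICA-based artifact removal sketch: unmix multichannel EEG with FastICA,
    # zero the component most correlated with an EOG reference, and remix.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_samples, t = 2000, np.arange(2000)
    eog = (np.abs(t % 500 - 250) < 20).astype(float) * 5.0        # blink-like bursts
    eeg_sources = rng.normal(0, 1.0, (3, n_samples))              # background EEG
    mixing = rng.normal(0, 1, (8, 4))                             # 8 channels, 4 sources
    channels = mixing @ np.vstack([eeg_sources, eog[None, :]])

    ica = FastICA(n_components=4, random_state=0)
    components = ica.fit_transform(channels.T)                    # (samples, components)

    # Identify and zero the EOG-linked component by correlation with the reference
    corr = [abs(np.corrcoef(components[:, k], eog)[0, 1]) for k in range(4)]
    components[:, int(np.argmax(corr))] = 0.0
    cleaned = ica.inverse_transform(components).T                 # back to channel space
    print('channel-0 variance before/after: %.2f / %.2f'
          % (channels[0].var(), cleaned[0].var()))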

  2. Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists’ Visual Analytic Judgments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik

    2017-05-08

    Scientists often use specific data analysis and presentation methods familiar within their domain. But does high familiarity drive better analytical judgment? This question is especially relevant when familiar methods themselves can have shortcomings: many visualizations used conventionally for scientific data analysis and presentation do not follow established best practices. This necessitates new methods that might be unfamiliar yet prove to be more effective. But there is little empirical understanding of the relationships between scientists’ subjective impressions about familiar and unfamiliar visualizations and objective measures of their visual analytic judgments. To address this gap and to study these factors, we focus on visualizations used for comparison of climate model performance. We report on a comprehensive survey-based user study with 47 climate scientists and present an analysis of: i) relationships among scientists’ familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.

  3. Data-driven mono-component feature identification via modified nonlocal means and MEWT for mechanical drivetrain fault diagnosis

    NASA Astrophysics Data System (ADS)

    Pan, Jun; Chen, Jinglong; Zi, Yanyang; Yuan, Jing; Chen, Binqiang; He, Zhengjia

    2016-12-01

    It is significant to perform condition monitoring and fault diagnosis on rolling mills in a steel-making plant to ensure economic benefit. However, timely fault identification of key parts in a complicated industrial system under operating conditions is still a challenging task, since the acquired condition signals are usually multi-modulated and inevitably mixed with strong noise. Therefore, a new data-driven mono-component identification method is proposed in this paper for diagnostic purposes. First, a modified nonlocal means algorithm (NLmeans) is proposed to reduce noise in vibration signals without destroying their original Fourier spectrum structure; within the modified NLmeans, two modifications are investigated and applied to improve the denoising effect. Then, the modified empirical wavelet transform (MEWT) is applied to the de-noised signal to adaptively extract empirical mono-component modes. Finally, the modes are analyzed for mechanical fault identification based on the Hilbert transform. The results show that the proposed data-driven method exhibits superior performance during system operation compared with the MEWT method.

  4. On the need and use of models to explore the role of economic confidence:a survey.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprigg, James A.; Paez, Paul J.; Hand, Michael S.

    2005-04-01

    Empirical studies suggest that consumption is more sensitive to current income than suggested under the permanent income hypothesis, which raises questions regarding expectations for future income, risk aversion, and the role of economic confidence measures. This report surveys a body of fundamental economic literature as well as burgeoning computational modeling methods to support efforts to better anticipate cascading economic responses to terrorist threats and attacks. This is a three-part survey to support the incorporation of models of economic confidence into agent-based microeconomic simulations. We first review broad underlying economic principles related to this topic. We then review the economic principle of confidence and related empirical studies. Finally, we provide a brief survey of efforts and publications related to agent-based economic simulation.

  5. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and by performing an empirical validation of key model predictions.

  6. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size or have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods on various evaluation metrics.

  7. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size or have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods on various evaluation metrics. PMID:27447631

  8. Networking for Leadership, Inquiry, and Systemic Thinking: A New Approach to Inquiry-Based Learning.

    ERIC Educational Resources Information Center

    Byers, Al; Fitzgerald, Mary Ann

    2002-01-01

    Points out difficulties with a change from traditional teaching methods to a more inquiry-centered approach. Presents theoretical and empirical foundations for the Networking for Leadership, Inquiry, and Systemic Thinking (NLIST) initiative sponsored by the Council of State Science Supervisors (CSSS) and NASA, describes its progress, and outlines…

  9. A Step Forward in Teaching Addiction Counselors How to Supervise Motivational Interviewing Using a Clinical Trials Training Approach

    ERIC Educational Resources Information Center

    Martino, Steve; Gallon, Steve; Ball, Samuel A.; Carroll, Kathleen M.

    2007-01-01

    A clinical trials training approach to supervision is a promising and empirically supported method for preparing addiction counselors to implement evidence-based behavioral treatments in community treatment programs. This supervision approach has three main components: (1) direct observation of treatment sessions; (2) structured performance…

  10. Application of Fuzzy Reasoning for Filtering and Enhancement of Ultrasonic Images

    NASA Technical Reports Server (NTRS)

    Sacha, J. P.; Cios, K. J.; Roth, D. J.; Berke, L.; Vary, A.

    1994-01-01

    This paper presents a new type of adaptive fuzzy operator for the detection of isolated abnormalities and the enhancement of raw ultrasonic images. Fuzzy sets used in decision rules are defined for each image based on empirical statistics of the color intensities. Examples of the method are also presented in the paper.

  11. Caregivers as Money Managers for Adults with Severe Mental Illness: How Treatment Providers Can Help

    ERIC Educational Resources Information Center

    Elbogen, Eric B.; Wilder, Christine; Swartz, Marvin S.; Swanson, Jeffrey W.

    2008-01-01

    Objective: To review the prevalence, benefits, and problems associated with families who, either informally or formally as representative payees, manage money for adults with severe mental illness. Methods: Based on empirical research and clinical cases, suggestions are offered for minimizing downsides and capitalizing upon benefits of family…

  12. Oppositional Defiant Disorder: An Overview and Strategies for Educators

    ERIC Educational Resources Information Center

    Jones, Sara H.

    2018-01-01

    Oppositional defiant disorder (ODD) is a behavioral disorder that affects approximately 3.3% of the population across cultures. In this article, the author discusses symptoms, methods of diagnosis, and treatments for the disorder. Although most empirically supported treatments of ODD are based on parent-child training and therapy, there are some…

  13. W17_geonuc “Application of the Spectral Element Method to improvement of Ground-based Nuclear Explosion Monitoring”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Rougier, Esteban; Lei, Zhou

    This project is in support of the Source Physics Experiment (SPE; Snelson et al. 2013), which aims to develop new seismic source models of explosions. One priority of this program is first-principles numerical modeling to validate and extend current empirical models.

  14. Parent-Centered Intervention: A Practical Approach for Preventing Drug Abuse in Hispanic Adolescents

    ERIC Educational Resources Information Center

    Tapia, Maria I.; Schwartz, Seth J.; Prado, Guillermo; Lopez, Barbara; Pantin, Hilda

    2006-01-01

    Objective: The objective of the present article is to review and discuss Familias Unidas, an empirically supported, family-based, culturally specific drug abuse and HIV prevention intervention for Hispanic immigrant adolescents and their families. Method: The authors focus on engagement and retention as well as on intervention delivery.…

  15. Technologies for Foreign Language Learning: A Review of Technology Types and Their Effectiveness

    ERIC Educational Resources Information Center

    Golonka, Ewa M.; Bowles, Anita R.; Frank, Victor M.; Richardson, Dorna L.; Freynik, Suzanne

    2014-01-01

    This review summarizes evidence for the effectiveness of technology use in foreign language (FL) learning and teaching, with a focus on empirical studies that compare the use of newer technologies with more traditional methods or materials. The review of over 350 studies (including classroom-based technologies, individual study tools,…

  16. Gender, Science and Modernity in Seventeenth-Century England

    ERIC Educational Resources Information Center

    Watts, Ruth

    2005-01-01

    The seventeenth century in England, bounded by the scientific stimulus of Francis Bacon at the beginning and Isaac Newton at the end, seemingly saw a huge leap from the Aristotelian dialectic of the past to a reconstruction of knowledge based on inductive methods, empirical investigation and cooperative research. In mid-century, Puritan reformers…

  17. Computer-Based Enhancements for the Improvement of Learning.

    ERIC Educational Resources Information Center

    Tennyson, Robert D.

    The third of four symposium papers argues that, if instructional methods are to improve learning, they must have two aspects: a direct trace to a specific learning process, and empirical support that demonstrates their significance. Focusing on the tracing process, the paper presents an information processing model of learning that can be used by…

  18. Linking Class and Community: An Investigation of Service Learning

    ERIC Educational Resources Information Center

    Fleck, Bethany; Hussey, Heather D.; Rutledge-Ellison, Lily

    2017-01-01

    This study contributes to the service learning (SL) literature by providing new empirical evidence of learning from a problem-based SL research project conducted in a developmental research methods course. Two sections of the course taught in a traditional manner were compared to two sections of the course taught with an integrated SL project…

  19. Applications of Nonlinear Principal Components Analysis to Behavioral Data.

    ERIC Educational Resources Information Center

    Hicks, Marilyn Maginley

    1981-01-01

    An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)

  20. Functional Magnetic Resonance Imaging Clinical Trial of a Dual-Processing Treatment Protocol for Substance-Dependent Adults

    ERIC Educational Resources Information Center

    Matto, Holly C.; Hadjiyane, Maria C.; Kost, Michelle; Marshall, Jennifer; Wiley, Joseph; Strolin-Goltzman, Jessica; Khatiwada, Manish; VanMeter, John W.

    2014-01-01

    Objectives: Empirical evidence suggests substance dependence creates stress system dysregulation which, in turn, may limit the efficacy of verbal-based treatment interventions, as the recovering brain may not be functionally capable of executive level processing. Treatment models that target implicit functioning are necessary. Methods: An RCT was…

  1. Flipped @ SBU: Student Satisfaction and the College Classroom

    ERIC Educational Resources Information Center

    Gross, Benjamin; Marinari, Maddalena; Hoffman, Mike; DeSimone, Kimberly; Burke, Peggy

    2015-01-01

    In this paper, the authors find empirical support for the effectiveness of the flipped classroom model. Using a quasi-experimental method, the authors compared students enrolled in flipped courses to their counterparts in more traditional lecture-based ones. A survey instrument was constructed to study how these two different groups of students…

  2. A Research Synthesis of the Evaluation Capacity Building Literature

    ERIC Educational Resources Information Center

    Labin, Susan N.; Duffy, Jennifer L.; Meyers, Duncan C.; Wandersman, Abraham; Lesesne, Catherine A.

    2012-01-01

    The continuously growing demand for program results has produced an increased need for evaluation capacity building (ECB). The "Integrative ECB Model" was developed to integrate concepts from existing ECB theory literature and to structure a synthesis of the empirical ECB literature. The study used a broad-based research synthesis method with…

  3. Analysis of Institutionally Specific Retention Research: A Comparison between Survey and Institutional Database Methods

    ERIC Educational Resources Information Center

    Caison, Amy L.

    2007-01-01

    This study empirically explores the comparability of traditional survey-based retention research methodology with an alternative approach that relies on data commonly available in institutional student databases. Drawing on Tinto's [Tinto, V. (1993). "Leaving College: Rethinking the Causes and Cures of Student Attrition" (2nd Ed.), The University…

  4. The Growth of Tense Productivity

    ERIC Educational Resources Information Center

    Rispoli, Matthew; Hadley, Pamela A.; Holt, Janet K.

    2009-01-01

    Purpose: This study tests empirical predictions of a maturational model for the growth of tense in children younger than 36 months using a type-based productivity measure. Method: Caregiver-child language samples were collected from 20 typically developing children every 3 months from 21 to 33 months of age. Growth in the productivity of tense…

  5. Application of LSP Texts in Translator Training

    ERIC Educational Resources Information Center

    Ilynska, Larisa; Smirnova, Tatjana; Platonova, Marina

    2017-01-01

    The paper presents discussion of the results of extensive empirical research into efficient methods of educating and training translators of LSP (language for special purposes) texts. The methodology is based on using popular LSP texts in the respective fields as one of the main media for translator training. The aim of the paper is to investigate…

  6. Prediction of S-wave velocity using complete ensemble empirical mode decomposition and neural networks

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Hachay, Olga; Zaourar, Naima

    2017-04-01

    One of the key elements in hydrocarbon reservoir characterization is the S-wave velocity (Vs). Since traditional estimation methods often fail to accurately predict this physical parameter, a new approach that takes into account its non-stationary and non-linear properties is needed. In this view, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multilayer perceptron artificial neural network (MLP ANN) is suggested to compute Vs from the P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data is decomposed into a high-frequency (HF) component, a low-frequency (LF) component and a trend component. Then, different combinations of these components are used as inputs to the MLP ANN algorithm for estimating the Vs log. Applications on well logs taken from different geological settings illustrate that the Vs values predicted using the MLP ANN with the combination of HF, LF and trend inputs are more accurate than those obtained with the traditional estimation methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.
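
    A minimal sketch of this pipeline, assuming the PyEMD package (`pip install EMD-signal`), whose noise-assisted CEEMDAN stands in for the paper's CEEMD, and scikit-learn's MLPRegressor. The synthetic log, the fine-to-coarse cut index, and the network size are illustrative placeholders, not the paper's settings.

    ```python
    import numpy as np
    from PyEMD import CEEMDAN                      # pip install EMD-signal
    from sklearn.neural_network import MLPRegressor

    # synthetic stand-in for a Vp log (the paper uses real well-log data)
    z = np.linspace(0, 1, 1500)
    vp = 3.0 + 0.8 * z + 0.2 * np.sin(40 * z) + 0.05 * np.random.randn(z.size)

    imfs = CEEMDAN()(vp)                           # shape: (n_imfs, n_samples)

    # fine-to-coarse split into HF / LF / trend components; the cut index
    # here is illustrative, the paper derives it from a reconstruction rule
    cut = imfs.shape[0] // 2
    hf, lf, trend = imfs[:cut].sum(0), imfs[cut:-1].sum(0), imfs[-1]

    X = np.column_stack([hf, lf, trend])
    vs = vp / 1.9 + 0.03 * np.random.randn(vp.size)   # fake target for the demo

    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, vs)
    print("training R^2:", model.score(X, vs))
    ```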

  7. Early prediction of extreme stratospheric polar vortex states based on causal precursors

    NASA Astrophysics Data System (ADS)

    Kretschmer, Marlene; Runge, Jakob; Coumou, Dim

    2017-08-01

    Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important for improving forecasts of winter weather, including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems, as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r² = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states at lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improve long-lead predictions.

  8. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors formed by such methods

    DOEpatents

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2016-04-19

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M'X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M'(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M'(ER)₂ or L₂N(μ-X)₂M'(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  9. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors formed by such methods

    DOEpatents

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2014-09-09

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M'X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M'(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M'(ER)₂ or L₂N(μ-X)₂M'(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  10. Comparison of Bayesian clustering and edge detection methods for inferring boundaries in landscape genetics

    USGS Publications Warehouse

    Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.

    2011-01-01

    Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially structured populations were simulated in which a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness in detecting genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that, with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.

  11. On Short-Time Estimation of Vocal Tract Length from Formant Frequencies

    PubMed Central

    Lammert, Adam C.; Narayanan, Shrikanth S.

    2015-01-01

    Vocal tract length is highly variable across speakers and determines many aspects of the acoustic speech signal, making it an essential parameter to consider for explaining behavioral variability. A method for accurate estimation of vocal tract length from formant frequencies would afford normalization of interspeaker variability and facilitate acoustic comparisons across speakers. A framework for considering estimation methods is developed from the basic principles of vocal tract acoustics, and an estimation method is proposed that follows naturally from this framework. The proposed method is evaluated using acoustic characteristics of simulated vocal tracts ranging from 14 to 19 cm in length, as well as real-time magnetic resonance imaging data with synchronous audio from five speakers whose vocal tracts range from 14.5 to 18.0 cm in length. Evaluations show improvements in accuracy over previously proposed methods, with 0.631 and 1.277 cm root mean square error on simulated and human speech data, respectively. Empirical results show that the effectiveness of the proposed method is based on emphasizing higher formant frequencies, which seem less affected by speech articulation. Theoretical predictions of formant sensitivity reinforce this empirical finding. Moreover, theoretical insights are explained regarding the reason for differences in formant sensitivity. PMID:26177102
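
    The physics behind such estimators can be illustrated with the uniform closed-open tube model, whose resonances are F_n = (2n - 1)c / (4L). The sketch below inverts each formant for a length estimate and, following the paper's finding that higher formants are less affected by articulation, weights them more heavily; the linear weighting is an illustrative stand-in, not the paper's exact estimator.

    ```python
    import numpy as np

    C_AIR = 35000.0  # speed of sound in air, cm/s

    def vtl_from_formants(formants_hz):
        """Uniform closed-open tube: F_n = (2n - 1) * c / (4L), so each
        measured formant yields its own estimate L_n = (2n - 1) * c / (4 * F_n).
        Higher formants get larger weights, per the paper's finding."""
        f = np.asarray(formants_hz, dtype=float)
        n = np.arange(1, f.size + 1)
        lengths = (2 * n - 1) * C_AIR / (4.0 * f)   # per-formant length estimates
        return np.average(lengths, weights=n)       # emphasize F3, F4 over F1, F2

    # a neutral vowel from a ~17.5 cm tract: 500, 1500, 2500, 3500 Hz
    print(vtl_from_formants([500, 1500, 2500, 3500]))  # -> 17.5
    ```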

  12. A comparison of three radiation models for the calculation of nozzle arcs

    NASA Astrophysics Data System (ADS)

    Dixon, C. M.; Yan, J. D.; Fang, M. T. C.

    2004-12-01

    Three radiation models, the semi-empirical model based on net emission coefficients (Zhang et al 1987 J. Phys. D: Appl. Phys. 20 386-79), the five-band P1 model (Eby et al 1998 J. Phys. D: Appl. Phys. 31 1578-88), and the method of partial characteristics (Aubrecht and Lowke 1994 J. Phys. D: Appl. Phys. 27 2066-73, Sevast'yanenko 1979 J. Eng. Phys. 36 138-48), are used to calculate the radiation transfer in an SF6 nozzle arc. The temperature distributions computed by the three models are compared with the measurements of Leseberg and Pietsch (1981 Proc. 4th Int. Symp. on Switching Arc Phenomena (Lodz, Poland) pp 236-40) and Leseberg (1982 PhD Thesis RWTH Aachen, Germany). It has been found that all three models give similar distributions of radiation loss per unit time and volume. For arcs burning in axially dominated flow, such as arcs in nozzle flow, the semi-empirical model and the P1 model give accurate predictions when compared with experimental results. The prediction by the method of partial characteristics is the poorest. The computational cost is lowest for the semi-empirical model.

  13. An evaluation of rise time characterization and prediction methods

    NASA Technical Reports Server (NTRS)

    Robinson, Leick D.

    1994-01-01

    One common method of extrapolating sonic boom waveforms from aircraft to ground is to calculate the nonlinear distortion and then add a rise time to each shock by a simple empirical rule. One common rule is the '3 over P' rule, which calculates the rise time in milliseconds as three divided by the shock amplitude in psf. This rule was compared with the results of ZEPHYRUS, a comprehensive algorithm that calculates sonic boom propagation and extrapolation with the combined effects of nonlinearity, attenuation, dispersion, geometric spreading, and refraction in a stratified atmosphere. It is shown that the simple empirical rule considerably overestimates the rise time. In addition, the empirical rule does not account for variations in the rise time due to humidity variation or propagation history. It is also demonstrated that the rise time is only an approximate indicator of perceived loudness: three waveforms with identical characteristics (shock placement, amplitude, and rise time), but with different shock shapes, are shown to give different calculated loudness. This paper is based in part on work performed at the Applied Research Laboratories, The University of Texas at Austin, supported by NASA Langley.
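
    The '3 over P' rule itself is one line of arithmetic; a 1.5 psf shock gets a 2 ms rise time regardless of humidity or propagation history, which is exactly the insensitivity the ZEPHYRUS comparison criticizes.

    ```python
    def rise_time_ms(shock_amplitude_psf):
        """The '3 over P' empirical rule: rise time in milliseconds is
        three divided by the shock overpressure in psf."""
        return 3.0 / shock_amplitude_psf

    # the rule assigns this value independent of humidity or history
    print(rise_time_ms(1.5))  # -> 2.0 ms
    ```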

  14. Comparison of empirical estimate of clinical pretest probability with the Wells score for diagnosis of deep vein thrombosis.

    PubMed

    Wang, Bo; Lin, Yin; Pan, Fu-shun; Yao, Chen; Zheng, Zi-Yu; Cai, Dan; Xu, Xiang-dong

    2013-01-01

    The Wells score has been validated for estimation of pretest probability in patients with suspected deep vein thrombosis (DVT). In clinical practice, many clinicians prefer to use empirical estimation rather than the Wells score; however, which method better increases the accuracy of clinical evaluation is not well understood. Our present study compared empirical estimation of pretest probability with the Wells score to investigate the efficiency of empirical estimation in the diagnostic process of DVT. Five hundred and fifty-five patients were enrolled in this study. One hundred and fifty patients were assigned to examine the interobserver agreement on the Wells score between emergency and vascular clinicians. The other 405 patients were assigned to evaluation of the pretest probability of DVT on the basis of empirical estimation and the Wells score, respectively, and plasma D-dimer levels were then determined in the low-risk patients. All patients underwent venous duplex scans and had a 45-day follow-up. The weighted Cohen's κ value for interobserver agreement on the Wells score was 0.836. Compared with Wells score evaluation, empirical assessment increased the sensitivity, specificity, Youden's index, positive likelihood ratio, and positive and negative predictive values, and decreased the negative likelihood ratio. In addition, the appropriate D-dimer cutoff value based on the Wells score was 175 μg/l, by which 108 patients were excluded; empirical assessment increased the appropriate D-dimer cutoff point to 225 μg/l, ruling out 162 patients. Our findings indicate that empirical estimation not only improves D-dimer assay efficiency for exclusion of DVT but also increases clinical judgement accuracy in the diagnosis of DVT.
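
    The statistics compared in this study are all standard functions of a 2x2 diagnostic table; a small helper shows how each is derived. The counts below are made up for illustration, not the study's data.

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard 2x2-table statistics used in the comparison."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        return {
            "sensitivity": sens,
            "specificity": spec,
            "youden_index": sens + spec - 1,
            "lr_positive": sens / (1 - spec),
            "lr_negative": (1 - sens) / spec,
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # hypothetical counts for a cohort of 405 evaluated patients
    print(diagnostic_metrics(tp=80, fp=30, fn=10, tn=285))
    ```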

  15. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. However, both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, which incurs a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently at lower computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.

  16. Appropriate methodologies for empirical bioethics: it's all relative.

    PubMed

    Ives, Jonathan; Draper, Heather

    2009-05-01

    In this article we distinguish between philosophical bioethics (PB), descriptive policy-oriented bioethics (DPOB) and normative policy-oriented bioethics (NPOB). We argue that finding an appropriate methodology for combining empirical data and moral theory depends on what the aims of the research endeavour are, and that, for the most part, this combination is only required for NPOB. After briefly discussing the debate around the is/ought problem, and suggesting that both sides of this debate are misunderstanding one another (i.e. one side treats it as a conceptual problem, whilst the other treats it as an empirical claim), we outline and defend a methodological approach to NPOB based on work we have carried out on a project exploring the normative foundations of paternal rights and responsibilities. We suggest that, given the prominent role already played by moral intuition in moral theory, one appropriate way to integrate empirical data and philosophical bioethics is to utilize empirically gathered lay intuition as the foundation for ethical reasoning in NPOB. The method we propose involves a modification of a long-established tradition of non-intervention in qualitative data gathering, combined with a form of reflective equilibrium in which the demands of theory and data are given equal weight and a pragmatic compromise is reached.

  17. Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.

    PubMed

    Lee, Won Hee; Bullmore, Ed; Frangou, Sophia

    2017-02-01

    There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
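
    A minimal sketch of the simulation side: Kuramoto phase oscillators coupled through a structural matrix, then a correlation-based functional network and a simple graph metric. The random connectivity, frequencies, coupling strength, and threshold below are illustrative placeholders rather than the study's diffusion-imaging-derived matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 66                                        # anatomical regions (nodes)
    C = rng.random((n, n)) * (rng.random((n, n)) < 0.2)  # stand-in structural matrix
    C = (C + C.T) / 2
    omega = rng.normal(60 * 2 * np.pi, 2.0, n)    # intrinsic frequencies, rad/s
    k, dt, steps = 5.0, 1e-3, 5000

    theta = rng.uniform(0, 2 * np.pi, n)
    series = np.empty((steps, n))
    for t in range(steps):                        # Euler step of the Kuramoto model
        coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(1)
        theta = theta + dt * (omega + k * coupling)
        series[t] = np.sin(theta)                 # simulated regional signal

    fc = np.corrcoef(series.T)                    # simulated functional connectivity
    adj = (np.abs(fc) > 0.3) & ~np.eye(n, dtype=bool)  # threshold to a graph
    degree = adj.sum(1)                           # one simple graph metric to compare
    print(degree.mean())
    ```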

  18. Measuring Racial/Ethnic Disparities in Health Care: Methods and Practical Issues

    PubMed Central

    Cook, Benjamin Lê; McGuire, Thomas G; Zaslavsky, Alan M

    2012-01-01

    Objective To review methods of measuring racial/ethnic health care disparities. Study Design Identification and tracking of racial/ethnic disparities in health care will be advanced by application of a consistent definition and reliable empirical methods. We have proposed a definition of racial/ethnic health care disparities based on the Institute of Medicine's (IOM) Unequal Treatment report, which defines disparities as all differences except those due to clinical need and preferences. After briefly summarizing the strengths and critiques of this definition, we review methods that have been used to implement it. We discuss practical issues that arise during implementation and expand these methods to identify sources of disparities. We also situate the focus on methods to measure racial/ethnic health care disparities (an endeavor predominant in the United States) within the larger international literature on health outcomes and health care inequality. Empirical Application We compare different methods of implementing the IOM definition on measurement of disparities in any use of mental health care and mental health care expenditures using the 2004–2008 Medical Expenditure Panel Survey. Conclusion Disparities analysts should be aware of the multiple methods available to measure disparities and their differing assumptions. We prefer a method concordant with the IOM definition. PMID:22353147

  19. Evaluation of an empirical monitor output estimation in carbon ion radiotherapy.

    PubMed

    Matsumura, Akihiko; Yusa, Ken; Kanai, Tatsuaki; Mizota, Manabu; Ohno, Tatsuya; Nakano, Takashi

    2015-09-01

    A conventional broad beam method is applied to carbon ion radiotherapy at Gunma University Heavy Ion Medical Center. In this method, accelerated carbon ions are scattered by various beam line devices to form a 3D dose distribution. The physical dose per monitor unit (d/MU) at the isocenter therefore depends on the beam line parameters and should be calibrated by measurement in clinical practice. This study aims to develop a calculation algorithm for d/MU using beam line parameters. Two major factors, the range shifter dependence and the field aperture effect, are measured via a PinPoint chamber in a water phantom, the same setup as that used for monitor calibration in clinical practice. An empirical monitor calibration method based on the measurement results is developed using a simple algorithm that employs a linear function and a double Gaussian pencil beam distribution to express the range shifter dependence and the field aperture effect, respectively. The range shifter dependence and the field aperture effect are evaluated to have errors of 0.2% and 0.5%, respectively. The proposed method estimates d/MU to within 1% of the measurement results. Taking the measurement deviation of about 0.3% into account, this result is sufficiently accurate for clinical applications. An empirical procedure to estimate d/MU with a simple algorithm is thus established, freeing beam time for more treatments, quality assurance, and other research endeavors.
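
    A hedged sketch of the algorithm's shape: a linear term in the range shifter setting multiplied by a field-aperture factor obtained by integrating a radially symmetric double-Gaussian pencil-beam kernel over a circular field. All coefficients below are hypothetical placeholders; in the paper they are fitted to the PinPoint-chamber measurements.

    ```python
    import numpy as np

    # hypothetical fitted constants, not the paper's values
    A0, A1 = 1.00, -0.004          # linear range-shifter dependence: A0 + A1 * t
    W, SIG1, SIG2 = 0.9, 0.4, 2.5  # double-Gaussian weight and widths (cm)

    def aperture_factor(radius_cm):
        """Fraction of a radially symmetric double-Gaussian pencil-beam
        kernel contained inside a circular field of the given radius."""
        g = lambda s: 1.0 - np.exp(-radius_cm**2 / (2 * s**2))
        return W * g(SIG1) + (1 - W) * g(SIG2)

    def dose_per_mu(range_shifter_mm, field_radius_cm):
        # linear range-shifter term times the field-aperture correction
        return (A0 + A1 * range_shifter_mm) * aperture_factor(field_radius_cm)

    print(dose_per_mu(range_shifter_mm=30, field_radius_cm=5))
    ```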

  20. On the methods for determining the transverse dispersion coefficient in river mixing

    NASA Astrophysics Data System (ADS)

    Baek, Kyong Oh; Seo, Il Won

    2016-04-01

    In this study, the strengths and weaknesses of existing methods for determining the dispersion coefficient in the two-dimensional river mixing model were assessed based on hydraulic and tracer data sets acquired from experiments conducted on either laboratory channels or natural rivers. From the results of this study, it can be concluded that, when the longitudinal dispersion coefficient as well as the transverse dispersion coefficient must be determined in the transient concentration situation, the two-dimensional routing procedures, 2D RP and 2D STRP, can be employed among the observation methods to calculate dispersion coefficients. For the steady concentration situation, the STRP can be applied to calculate the transverse dispersion coefficient. When tracer data are not available, either theoretical or empirical equations from the estimation method can be used to calculate the dispersion coefficient from geometric and hydraulic data sets. Application of the theoretical and empirical equations to the laboratory channel showed that the equations of Baek and Seo [3] (2011) predicted reasonable values, while the equations of Fischer [23] and Boxwall and Guymer (2003) overestimated by factors of ten to one hundred. Among existing empirical equations, those of Jeon et al. [28] and Baek and Seo [6] gave agreeable values of the transverse dispersion coefficient for most cases of natural rivers. Further, the theoretical equation of Baek and Seo [5] has the potential to be broadly applied to both laboratory and natural channels.

  1. An object programming based environment for protein secondary structure prediction.

    PubMed

    Giacomini, M; Ruggiero, C; Sacile, R

    1996-01-01

    The most frequently used methods for protein secondary structure prediction are empirical statistical methods and rule-based methods. A consensus system based on object-oriented programming is presented, which integrates the two approaches with the aim of improving prediction quality. This system uses an object-oriented knowledge representation based on the concepts of conformation, residue and protein, where the conformation class is the basis, the residue class derives from it, and the protein class derives from the residue class. The system has been tested with satisfactory results on several proteins from the Brookhaven Protein Data Bank. Its results have been compared with those of the most widely used prediction methods, and they show higher prediction capability and greater stability. Moreover, the system itself provides an index of the reliability of its current prediction. This system can also be regarded as a base structure for programs of this kind.
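
    The abstract's class hierarchy translates directly into code. The sketch below mirrors it (Conformation as the base, Residue deriving from it, Protein deriving from Residue), with a majority vote standing in for the paper's unspecified consensus rule; all names, the vote, and the stand-in predictors are illustrative.

    ```python
    class Conformation:
        """Base class: a secondary-structure state plus a reliability index
        (the system reports how much its current prediction can be trusted)."""
        def __init__(self, state="C", reliability=0.0):
            self.state = state              # 'H' helix, 'E' strand, 'C' coil
            self.reliability = reliability

    class Residue(Conformation):
        """Derives from Conformation: an amino acid carrying its own state."""
        def __init__(self, aa, state="C", reliability=0.0):
            super().__init__(state, reliability)
            self.aa = aa

    class Protein(Residue):
        """Derives from Residue, mirroring the hierarchy in the abstract;
        it aggregates a chain and forms a consensus over several predictors."""
        def __init__(self, sequence):
            super().__init__(aa=sequence[0])
            self.residues = [Residue(aa) for aa in sequence]

        def consensus(self, *predictors):
            # majority vote per residue; the paper's actual combination
            # rule is not spelled out in the abstract
            for i, res in enumerate(self.residues):
                votes = [p(self, i) for p in predictors]
                res.state = max(set(votes), key=votes.count)
                res.reliability = votes.count(res.state) / len(votes)

    # stand-ins for a statistical method and a rule-based method
    statistical = lambda prot, i: "H"
    rule_based = lambda prot, i: "H" if prot.residues[i].aa in "AELM" else "C"

    p = Protein("MAELKV")
    p.consensus(statistical, rule_based)
    print([(r.aa, r.state, r.reliability) for r in p.residues])
    ```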

  2. NEW FRONTIERS IN DRUGGABILITY

    PubMed Central

    Kozakov, Dima; Hall, David R.; Napoleon, Raeanne L.; Yueh, Christine; Whitty, Adrian; Vajda, Sandor

    2016-01-01

    A powerful early approach to evaluating the druggability of proteins involved determining the hit rate in NMR-based screening of a library of small compounds. Here we show that a computational analog of this method, based on mapping proteins using small molecules as probes, can reliably reproduce druggability results from NMR-based screening, and can provide a more meaningful assessment in cases where the two approaches disagree. We apply the method to a large set of proteins. The results show that, because the method is based on the biophysics of binding rather than on empirical parameterization, meaningful information can be gained about classes of proteins and classes of compounds beyond those resembling validated targets and conventionally druglike ligands. In particular, the method identifies targets that, while not druggable by druglike compounds, may become druggable using compound classes such as macrocycles or other large molecules beyond the rule-of-five limit. PMID:26230724

  3. Estimating individual influences of behavioral intentions: an application of random-effects modeling to the theory of reasoned action.

    PubMed

    Hedeker, D; Flay, B R; Petraitis, J

    1996-02-01

    Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example of the methods, M. Fishbein and I. Ajzen's (1975; I. Ajzen & M. Fishbein, 1980) theory of reasoned action is examined, which posits first that an individual's behavioral intentions are a function of 2 components: the individual's attitudes toward the behavior and the subjective norms as perceived by the individual. A second component of their theory is that individuals may weight these 2 components differently in assessing their behavioral intentions. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate these individual influences, estimating an individual's weighting of both of these components (attitudes toward the behavior and subjective norms) in relation to their behavioral intentions. This method can be used when an individual's behavioral intentions, subjective norms, and attitudes toward the behavior are all repeatedly measured. In this case, the empirical Bayes estimates are derived as a function of the data from the individual, strengthened by the overall sample data.
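
    A minimal sketch of this analysis with statsmodels' MixedLM: repeated measures per subject, random slopes for attitudes and subjective norms, and empirical Bayes (BLUP) estimates of each individual's weights recovered from the fitted random effects. The simulated data, variable names, and model sizes are illustrative, and the fit may emit convergence warnings on small samples.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    rows = []
    for s in range(50):                       # 50 subjects, 8 occasions each
        b_att = 0.6 + 0.2 * rng.normal()      # each person weights attitudes...
        b_norm = 0.3 + 0.2 * rng.normal()     # ...and subjective norms differently
        for _ in range(8):
            att, norm = rng.normal(size=2)
            intent = b_att * att + b_norm * norm + 0.3 * rng.normal()
            rows.append((s, att, norm, intent))
    df = pd.DataFrame(rows, columns=["subject", "attitude", "norm", "intention"])

    # random-effects regression with random slopes; the fitted random effects
    # are the empirical Bayes (BLUP) estimates of individual deviations
    m = smf.mixedlm("intention ~ attitude + norm", df,
                    groups=df["subject"], re_formula="~attitude + norm").fit()
    re0 = m.random_effects[0]                 # subject 0's deviations
    print("subject 0 attitude weight:",
          m.fe_params["attitude"] + re0["attitude"])
    ```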

  4. Projecting adverse event incidence rates using empirical Bayes methodology.

    PubMed

    Ma, Guoguang Julie; Ganju, Jitendra; Huang, Jing

    2016-08-01

    Although there is considerable interest in adverse events observed in clinical trials, projecting adverse event incidence rates in an extended period can be of interest when the trial duration is limited compared to clinical practice. A naïve method for making projections might involve modeling the observed rates into the future for each adverse event. However, such an approach overlooks the information that can be borrowed across all the adverse event data. We propose a method that weights each projection using a shrinkage factor; the adverse event-specific shrinkage is a probability, based on empirical Bayes methodology, estimated from all the adverse event data, reflecting evidence in support of the null or non-null hypotheses. Also proposed is a technique to estimate the proportion of true nulls, called the common area under the density curves, which is a critical step in arriving at the shrinkage factor. The performance of the method is evaluated by projecting from interim data and then comparing the projected results with observed results. The method is illustrated on two data sets. © The Author(s) 2013.
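
    The abstract gives the idea but not the estimator, so the following is only an illustrative reading: each adverse event's naive projection is shrunk toward a pooled background rate, weighted by an empirical-Bayes probability that the event's signal is non-null. The paper's 'common area under the density curves' step for estimating the null proportion is not reproduced here, and all numbers are made up.

    ```python
    import numpy as np

    def project_rates(observed_rates, pooled_rate, posterior_nonnull, horizon_ratio):
        """Each AE's naive projection (observed rate scaled to the longer
        follow-up) is shrunk toward the pooled background rate, weighted by
        the probability that the AE signal is real."""
        naive = observed_rates * horizon_ratio
        background = pooled_rate * horizon_ratio
        return posterior_nonnull * naive + (1 - posterior_nonnull) * background

    rates = np.array([0.02, 0.10, 0.004])   # per-patient-year, hypothetical
    p1 = np.array([0.15, 0.90, 0.05])       # evidence each AE is non-null
    print(project_rates(rates, rates.mean(), p1, horizon_ratio=2.0))
    ```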

  5. Empirically-derived Knowledge on Adolescent Assent to Pediatric Biomedical Research

    PubMed Central

    Brody, Janet L.; Annett, Robert D.; Turner, Charles; Dalen, Jeanne; Yoon, Yesel

    2013-01-01

    Background There has been a recent growth in empirical research on assent with pediatric populations, due in part to the demand for increased participation of this population in biomedical research. Despite methodological limitations, studies of adolescent capacities to assent have advanced and identified a number of salient psychological and social variables that are key to understanding assent. Methods The authors review a subsection of the empirical literature on adolescent assent, focusing primarily on asthma and cancer therapeutic research; adolescent competencies to assent to these studies; perceptions of protocol risk and benefit; the effects of various social context variables on adolescent research participation decision making; and the inter-relatedness of these psychological and social factors. Results Contemporary studies of assent, using multivariate methods and updated approaches to statistical modeling, have revealed the importance of studying the intercorrelation between adolescents' psychological capacities and their ability to employ these capacities in family and medical decision-making contexts. Understanding these dynamic relationships will enable researchers and ethicists to develop assent procedures that respect the authority of parents while according adolescents appropriate decision-making autonomy. Conclusions Reviews of the empirical literature on the assent process reveal that adolescents possess varying capacities for biomedical research participation decision making, depending on their maturity and the social context in which the decision is made. The relationship between adolescents and physician-investigators can be used to attenuate concerns about research protocols and clarify risk and benefit information so that adolescents, in concert with their families, can make the most informed and ethical decisions. Future assent researchers will be better able to navigate the complicated interplay of contextual and developmental factors and develop the empirical bases for research enrollment protocols that will support increased involvement of adolescents in biomedical research. PMID:23914304

  6. Thermal Conductivity of Metallic Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hin, Celine

    This project has developed modeling and simulation approaches to predict the thermal conductivity of metallic fuels and their alloys. We focus on two methods. The first method was developed by the team at the University of Wisconsin Madison: a practical and general modeling approach for the thermal conductivity of metals and metal alloys that integrates ab-initio and semi-empirical physics-based models to maximize the strengths of both techniques. The second method was developed by the team at Virginia Tech and consists of determining the thermal conductivity using only ab-initio methods, without any fitting parameters. Both methods were complementary. The models incorporated both phonon and electron contributions. Good agreement with experimental data over a wide temperature range was found. The models also provided insight into the different physical factors that govern the thermal conductivity at different temperatures. The models were general enough to incorporate more complex effects such as additional alloying species, defects, transmutation products, and noble gas bubbles, in order to predict the behavior of complex metallic alloys like U-alloy fuel systems under burnup.
    Thermal conductivity is an important thermophysical property affecting the performance and efficiency of metallic fuels [1]. Some experimental measurements of thermal conductivity, and its correlation with composition and temperature from empirical fitting, are available for U, Zr and their alloys with Pu and other minor actinides. However, as reviewed by Kim, Cho and Sohn [2], owing to the difficulty of doing experiments on actinide materials, thermal conductivities of metallic fuels have only been measured at limited alloy compositions and temperatures, some of the reported values even being negative and unphysical. Furthermore, the correlations developed so far are empirical in nature and may not be accurate when used for prediction at conditions far from those used in the original fitting. Moreover, as fuels burn up in the reactor and fission products build up, thermal conductivity also changes significantly [3]. Unfortunately, fundamental understanding of the effect of fission products is also currently lacking. In this project, we probe the thermal conductivity of metallic fuels with ab initio calculations, a theoretical tool with the potential to yield better accuracy and predictive power than empirical fitting. This work both complements experimental data, by determining thermal conductivity over wider composition and temperature ranges than are available experimentally, and develops mechanistic understanding to guide better design of metallic fuels in the future. So far, we have focused on the α-U perfect crystal, the ground-state phase of U metal. Both methods proved complementary and very helpful for understanding the physics behind the thermal conductivity of metallic uranium and other materials with similar characteristics.
    In Section I, the combined model developed at UWM is explained. In Section II, the ab-initio method developed at VT is described, along with the uranium pseudo-potential and its validation. Section III is devoted to the work done by Jianguo Yu at INL. Finally, we present the performance of the project in terms of milestones, publications, and presentations.
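
    The electron contribution mentioned above can be illustrated with the textbook Wiedemann-Franz relation; this is generic physics rather than the project's ab-initio workflow, and the resistivity and phonon values below are placeholders.

    ```python
    # Electronic thermal conductivity from the Wiedemann-Franz law,
    # kappa_e = L0 * T / rho, plus a phonon term supplied separately.
    L0 = 2.44e-8   # Sommerfeld value of the Lorenz number, W*Ohm/K^2

    def kappa_total(T_kelvin, rho_ohm_m, kappa_phonon):
        """Total conductivity as the sum of electron and phonon parts."""
        kappa_e = L0 * T_kelvin / rho_ohm_m
        return kappa_e + kappa_phonon

    # placeholder inputs: rho ~ 3e-7 Ohm*m, small lattice contribution
    print(kappa_total(300.0, 3.0e-7, kappa_phonon=5.0))   # W/(m*K)
    ```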

  7. Time to Guideline-Based Empiric Antibiotic Therapy in the Treatment of Pneumonia in a Community Hospital: A Retrospective Review.

    PubMed

    Erwin, Beth L; Kyle, Jeffrey A; Allen, Leland N

    2016-08-01

    The 2005 American Thoracic Society/Infectious Diseases Society of America (ATS/IDSA) guidelines for hospital-acquired pneumonia (HAP), ventilator-associated pneumonia (VAP), and health care-associated pneumonia (HCAP) stress the importance of initiating prompt appropriate empiric antibiotic therapy. This study's purpose was to determine the percentage of patients with HAP, VAP, and HCAP who received guideline-based empiric antibiotic therapy and to determine the average time to receipt of an appropriate empiric regimen. A retrospective chart review of adults with HAP, VAP, or HCAP was conducted at a community hospital in suburban Birmingham, Alabama. The hospital's electronic medical record system utilized International Classification of Diseases, Ninth Revision (ICD-9) codes to identify patients diagnosed with pneumonia. The percentage of patients who received guideline-based empiric antibiotic therapy was calculated. The mean time from suspected diagnosis of pneumonia to initial administration of the final antibiotic within the empiric regimen was calculated for patients who received guideline-based therapy. Ninety-three patients met the inclusion criteria. The overall guideline adherence rate for empiric antibiotic therapy was 31.2%. The mean time to guideline-based therapy in hours:minutes was 7:47 for HAP and 28:16 for HCAP. For HAP and HCAP combined, the mean time to appropriate therapy was 21:55. Guideline adherence rates were lower and time to appropriate empiric therapy was greater for patients with HCAP compared to patients with HAP. © The Author(s) 2015.

  8. Physical–chemical determinants of coil conformations in globular proteins

    PubMed Central

    Perskie, Lauren L; Rose, George D

    2010-01-01

    We present a method with the potential to generate a library of coil segments from first principles. Proteins are built from α-helices and/or β-strands interconnected by these coil segments. Here, we investigate the conformational determinants of short coil segments, with particular emphasis on chain turns. Toward this goal, we extracted a comprehensive set of two-, three-, and four-residue turns from X-ray–elucidated proteins and classified them by conformation. A remarkably small number of unique conformers account for most of this experimentally determined set, whereas remaining members span a large number of rare conformers, many occurring only once in the entire protein database. Factors determining conformation were identified via Metropolis Monte Carlo simulations devised to test the effectiveness of various energy terms. Simulated structures were validated by comparison to experimental counterparts. After filtering rare conformers, we found that 98% of the remaining experimentally determined turn population could be reproduced by applying a hydrogen bond energy term to an exhaustively generated ensemble of clash-free conformers in which no backbone polar group lacks a hydrogen-bond partner. Further, at least 90% of longer coil segments, ranging from 5- to 20 residues, were found to be structural composites of these shorter primitives. These results are pertinent to protein structure prediction, where approaches can be divided into either empirical or ab initio methods. Empirical methods use database-derived information; ab initio methods rely on physical–chemical principles exclusively. Replacing the database-derived coil library with one generated from first principles would transform any empirically based method into its corresponding ab initio homologue. PMID:20512968

  9. SMSynth: An Imagery Synthesis System for Soil Moisture Retrieval

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Xu, L.; Peng, J.

    2018-04-01

    Soil moisture (SM) is an important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of SM retrieval methods; in particular, empirical models need large numbers of measurements of SM and soil roughness parameters as training samples, and these are very difficult to acquire. It is therefore difficult to develop empirical models from real SAR imagery alone, and methods to synthesize SAR imagery are needed. To tackle this issue, a SAR imagery synthesis system based on SM, named SMSynth, is presented, which can simulate radar signals that are as realistic as possible relative to real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model under a Bayesian framework, where the spatial correlation is modeled by a Markov random field (MRF) model. The backscattering coefficients, simulated from the designed soil parameters and sensor parameters, enter the Bayesian framework through the data likelihood; the soil and sensor parameters are set as close as possible to circumstances on the ground and within the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that meet the need of empirical models for large numbers of training samples.

  10. Decomposing Multifractal Crossovers

    PubMed Central

    Nagy, Zoltan; Mukli, Peter; Herman, Peter; Eke, Andras

    2017-01-01

    Physiological processes, such as the brain's resting-state electrical activity or hemodynamic fluctuations, exhibit scale-free temporal structuring. However, influences common in biological systems, such as noise, multiple signal generators, or filtering by a transport function, result in multimodal scaling that cannot be reliably assessed by standard analytical tools that assume unimodal scaling. Here, we present two methods to identify breakpoints or crossovers in multimodal multifractal scaling functions. These methods incorporate the robust iterative fitting approach of the focus-based multifractal formalism (FMF). The first approach (moment-wise scaling range adaptivity) allows for a breakpoint-based adaptive treatment that analyzes segregated scale-invariant ranges. The second method (the scaling function decomposition method, SFD) is a crossover-based design aimed at decomposing signal constituents from multimodal scaling functions resulting from signal addition or co-sampling, such as contamination by uncorrelated fractals. We demonstrate that these methods can handle multimodal, mono- or multifractal, and exact or empirical signals alike. Their precision was numerically characterized on ideal signals, and robust performance was demonstrated on exemplary empirical signals capturing resting-state brain dynamics by near-infrared spectroscopy (NIRS), electroencephalography (EEG), and blood oxygen level-dependent functional magnetic resonance imaging (fMRI-BOLD). The NIRS and fMRI-BOLD low-frequency fluctuations were dominated by a multifractal component over an underlying biologically relevant random noise, thus forming a bimodal signal. The crossover between the EEG signal components was found at the boundary between the δ and θ bands, suggesting an independent generator for the multifractal δ rhythm. The robust implementation of the SFD method should be regarded as essential in the seamless processing of large volumes of bimodal fMRI-BOLD imaging data for the topology of multifractal metrics free of the masking effect of the underlying random noise. PMID:28798694

  11. Classification and modeling of human activities using empirical mode decomposition with S-band and millimeter-wave micro-Doppler radars

    NASA Astrophysics Data System (ADS)

    Fairchild, Dustin P.; Narayanan, Ram M.

    2012-06-01

    The ability to identify human movements can be an important tool in many different applications, such as surveillance, military combat situations, search and rescue operations, and patient monitoring in hospitals. This information can provide soldiers, security personnel, and search and rescue workers with critical knowledge that can be used to potentially save lives and/or avoid dangerous situations. Most research on human activity recognition focuses on the Short-Time Fourier Transform (STFT) as the method for analyzing micro-Doppler signatures. Because of the time-frequency resolution limitations of the STFT, and because Fourier transform-based methods are not well suited to non-stationary and nonlinear signals, we have chosen a different approach. Empirical Mode Decomposition (EMD) has been shown to be a valuable time-frequency method for processing non-stationary and nonlinear data such as micro-Doppler signatures, and EMD readily provides a feature vector that can be utilized for classification. For classification, Support Vector Machines (SVMs) were chosen. SVMs have been widely used for pattern recognition due to their ability to generalize well and their relatively simple implementation. In this paper, we discuss the ability of these methods to accurately identify human movements based on their micro-Doppler signatures obtained from S-band and millimeter-wave radar systems. Comparisons are also made based on experimental results from each of these radar systems. Furthermore, we present simulations of micro-Doppler movements for stationary subjects that enable us to compare our experimental Doppler data to what we would expect from an "ideal" movement.
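
    A minimal sketch of an EMD-plus-SVM pipeline of the kind described, built on the third-party PyEMD package and scikit-learn; the normalized per-IMF energy used as the feature vector is an illustrative assumption, as are the toy signals standing in for micro-Doppler returns.

    ```python
    import numpy as np
    from PyEMD import EMD                      # pip install EMD-signal
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def imf_energy_features(signal, n_imfs=5):
        """Decompose with EMD and return normalized per-IMF energies."""
        imfs = EMD().emd(signal)[:n_imfs]
        energy = np.array([np.sum(imf ** 2) for imf in imfs])
        feats = np.zeros(n_imfs)
        feats[:energy.size] = energy / energy.sum()
        return feats

    # Toy stand-ins for two movement classes: returns with different
    # dominant Doppler content (real inputs would be radar time series).
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1024)
    X, y = [], []
    for label, f_doppler in [(0, 30.0), (1, 120.0)]:
        for _ in range(20):
            sig = np.sin(2 * np.pi * f_doppler * t)
            sig += 0.3 * rng.standard_normal(t.size)
            X.append(imf_energy_features(sig))
            y.append(label)

    acc = cross_val_score(SVC(kernel="rbf"), np.array(X), np.array(y), cv=5)
    print("cross-validated accuracy:", acc.mean())
    ```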

  12. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models overfit by capturing not only the underlying relationship but also the random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which uses artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of the new method is assessed by comparison with traditional model selection based on cross-validation. The optimal model complexity is determined for seven regression techniques, including partial least squares regression, support vector machines, artificial neural networks and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which achieve accuracies similar to those selected by cross-validation. The NOIS method thereby reduces the chance of overfitting, avoiding models whose accurate predictions are valid only for the data used and which are too complex to support inferences about the underlying process.
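
    The idea of quantifying overfitting with artificial data can be sketched as follows: if a model of a given complexity can "explain" a response that is pure noise, that complexity is not supported by the sample. This is a loose illustration of the principle, not the published NOIS algorithm; the permutation scheme and the PLS regressor are assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)

    def noise_skill(X, y, n_components, n_draws=50):
        """Mean training R^2 of a PLS model fitted to permuted responses.

        A high value means this complexity can 'explain' pure noise, so
        the sample cannot support it (a loose analogue of an overfitting
        index).
        """
        scores = []
        for _ in range(n_draws):
            y_noise = rng.permutation(y)       # destroys any real relation
            pls = PLSRegression(n_components=n_components).fit(X, y_noise)
            scores.append(pls.score(X, y_noise))
        return float(np.mean(scores))

    # Hyperspectral-like data: 40 samples of 200 highly correlated "bands"
    X = np.cumsum(rng.standard_normal((40, 200)), axis=1)
    y = X[:, 50] + 0.5 * rng.standard_normal(40)

    for k in (2, 5, 10, 20):
        print(f"{k:2d} components -> noise R^2 = {noise_skill(X, y, k):.2f}")
    ```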

  13. Stress intensity factors for part-elliptical cracks emanating from dimpled rivet holes

    NASA Astrophysics Data System (ADS)

    Wang, Ailun; She, Chongmin; Lin, Gang; Zhou, You; Guo, Wanlin

    2014-11-01

    Detailed investigations of the stress intensity factors (SIFs) for corner cracks emanating from interference-fitted dimpled rivet holes are conducted using the three-dimensional finite element method. The influences of the crack length a, elliptical shape factor t, far-end stress S and interference magnitude δ on the SIFs are systematically studied. The SIFs for corner cracks emanating from open holes are also investigated for comparison. For convenience in engineering applications, an empirical formula for the normalized SIF is fitted by the least-squares method as a function of the crack length a, elliptical shape factor t, far-end stress S, interference magnitude δ and the normalized elliptical centrifugal angle φn. Based on the empirical formula, a crack growth simulation for a filled rivet hole is conducted, which shows good agreement with the test data.
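
    The fitting step can be sketched generically: sample the normalized SIF over the governing parameters and solve for basis coefficients by least squares. The basis functions and the stand-in "FE data" below are illustrative assumptions, not the authors' formula.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Stand-in "finite element results": normalized SIF F sampled over
    # crack length a, shape factor t and normalized elliptical angle phi_n.
    a = rng.uniform(0.5, 3.0, 200)
    t = rng.uniform(0.4, 1.0, 200)
    phi = rng.uniform(0.0, 1.0, 200)
    F = (1.1 + 0.2 * a - 0.3 * t + 0.15 * np.cos(np.pi * phi)
         + 0.02 * rng.standard_normal(200))

    # Least-squares fit of F(a, t, phi_n) on a small polynomial/trig basis
    basis = np.column_stack([np.ones_like(a), a, t, a * t,
                             np.cos(np.pi * phi), np.sin(np.pi * phi)])
    coef, res, *_ = np.linalg.lstsq(basis, F, rcond=None)
    print("fitted coefficients:", np.round(coef, 3))
    print("rms fitting error:", float(np.sqrt(res[0] / F.size)))
    ```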

  14. Bearing Fault Detection Based on Empirical Wavelet Transform and Correlated Kurtosis by Acoustic Emission.

    PubMed

    Gao, Zheyu; Lin, Jing; Wang, Xiufeng; Xu, Xiaoqiang

    2017-05-24

    Rolling bearings are widely used in rotating equipment. Detection of bearing faults is of great importance to guarantee the safe operation of mechanical systems. Acoustic emission (AE), as one of the bearing monitoring technologies, is sensitive to weak signals and performs well in detecting incipient faults; AE is therefore widely used in monitoring the operating status of rolling bearings. This paper utilizes the Empirical Wavelet Transform (EWT) to adaptively decompose AE signals into mono-components, followed by calculation of the correlated kurtosis (CK) of these components at certain time intervals. By comparing the CK values, the resonant frequency of the rolling bearing can be determined, and the fault characteristic frequencies are then found by envelope spectrum analysis. Both simulated signals and rolling bearing AE signals are used to verify the effectiveness of the proposed method. The results show that the new method performs well in identifying the bearing fault frequency under strong background noise.
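
    Correlated kurtosis rewards impulse trains that repeat at a candidate fault period, which is what makes it a useful criterion for selecting the resonant component. A minimal sketch of CK and of the envelope spectrum step, independent of any particular EWT library; the toy impulse-train signal is an assumption.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def correlated_kurtosis(x, T, M=1):
        """CK_M(T) = sum((prod_{m=0..M} x[n - m*T])^2) / (sum(x^2))^(M + 1).

        Large values indicate impulses repeating every T samples, as
        produced by a localized bearing fault.
        """
        prod = x.astype(float)
        for m in range(1, M + 1):
            shifted = np.zeros_like(prod)
            shifted[m * T:] = x[:-m * T]
            prod = prod * shifted
        return np.sum(prod ** 2) / np.sum(x ** 2) ** (M + 1)

    def envelope_spectrum(x, fs):
        """Amplitude spectrum of the Hilbert envelope (fault demodulation)."""
        env = np.abs(hilbert(x))
        env -= env.mean()
        freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
        return freqs, np.abs(np.fft.rfft(env)) / x.size

    # Toy AE-like signal: 100 Hz impulse train buried in noise
    fs, f_fault = 20000, 100
    x = 0.2 * np.random.default_rng(3).standard_normal(fs)
    x[::fs // f_fault] += 5.0
    freqs, spec = envelope_spectrum(x, fs)
    print("CK at the true period:", correlated_kurtosis(x, fs // f_fault))
    print("envelope peak near:", freqs[1:][np.argmax(spec[1:])], "Hz")
    ```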

  15. Study on Inland River Vessel Fuel-oil Spillage and Emergency Response Strategies

    NASA Astrophysics Data System (ADS)

    Chen, R. C.; Shi, N.; Wang, K. S.

    2017-12-01

    By compiling statistics on and performing regression analysis of the carrying volumes of vessels navigating inland rivers and coastal waters, a linear relation between the oil volume carried by a vessel and its gross tonnage (GT) is found. Based on this relation, the possible spillage of a 10,000 GT vessel is estimated using the empirical formula method commonly used to estimate oil spillage in vessel spill incidents. For the waters downstream of the Yangtze River, a trajectory and fate model is used to predict the drift paths and fate of the spilled oil under three weather scenarios, and emergency response strategies for vessel oil spills are then put forward. The results of the research can be used to develop an empirical method for quickly estimating oil spillage and to provide recommendations on oil spill emergency response strategies for decision-makers.
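
    The regression step is a plain linear fit of fuel-oil volume against gross tonnage. A minimal sketch with placeholder observations; the study's actual vessel statistics are not reproduced here.

    ```python
    import numpy as np

    # Placeholder observations of (gross tonnage, fuel oil carried in
    # tonnes); illustrative values only, not the study's data.
    gt = np.array([1000, 3000, 5000, 8000, 12000, 20000], dtype=float)
    oil = np.array([40, 110, 180, 290, 430, 720], dtype=float)

    slope, intercept = np.polyfit(gt, oil, 1)
    estimate = slope * 10000 + intercept
    print(f"oil ~= {slope:.4f} * GT + {intercept:.1f}")
    print(f"fuel oil carried by a 10,000 GT vessel: ~{estimate:.0f} tonnes")
    ```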

  16. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    NASA Astrophysics Data System (ADS)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable probability density function (PDF) for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow based mainly on the MLPQNA neural network as the internal engine for deriving photometric galaxy redshifts, but it allows MLPQNA to be easily replaced by any other method able to predict photo-z's and their PDFs. We present the results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
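
    The general mechanism of deriving a PDF by perturbing the photometry and re-running a trained regressor can be sketched as follows; the KNN engine, toy catalog, and Gaussian error model are illustrative assumptions, not the METAPHOR internals.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(4)

    # Toy catalog: four "magnitudes" with a smooth underlying z relation
    mags = rng.uniform(18.0, 24.0, (2000, 4))
    z = 0.1 * (mags[:, 0] - 18.0) + 0.05 * (mags[:, 1] - 18.0)
    engine = KNeighborsRegressor(n_neighbors=10).fit(mags[:1500], z[:1500])

    def photo_z_pdf(m, mag_err=0.05, n_perturb=500):
        """PDF from re-estimating z on photometry perturbed within its errors.

        Mirrors the idea of propagating photometric uncertainty through a
        trained regressor; the error model here is a simple Gaussian.
        """
        jitter = mag_err * rng.standard_normal((n_perturb, m.size))
        z_samples = engine.predict(m + jitter)
        hist, edges = np.histogram(z_samples, bins=np.arange(0.0, 1.0, 0.01),
                                   density=True)
        return hist, edges

    hist, edges = photo_z_pdf(mags[1600])
    print("PDF peaks at z ~", round(float(edges[np.argmax(hist)]), 2))
    ```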

  17. Preequating with Empirical Item Characteristic Curves: An Observed-Score Preequating Method

    ERIC Educational Resources Information Center

    Zu, Jiyun; Puhan, Gautam

    2014-01-01

    Preequating is in demand because it reduces score reporting time. In this article, we evaluated an observed-score preequating method: the empirical item characteristic curve (EICC) method, which makes preequating without item response theory (IRT) possible. EICC preequating results were compared with a criterion equating and with IRT true-score…
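
    One common way to obtain an empirical item characteristic curve, sketched below on simulated data, is to compute the proportion correct on an item conditional on the rest-score; whether this matches the exact EICC definition in the article is an assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulated 0/1 item responses: 2000 examinees x 40 Rasch-like items
    theta = rng.standard_normal(2000)             # abilities
    b = rng.standard_normal(40)                   # difficulties
    prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    resp = (rng.random((2000, 40)) < prob).astype(int)

    def empirical_icc(responses, item):
        """P(correct on an item) conditional on the rest-score (total on
        the remaining items), i.e. an ICC estimated without an IRT model."""
        rest = np.delete(responses, item, axis=1).sum(axis=1)
        levels = np.arange(rest.min(), rest.max() + 1)
        icc = np.array([responses[rest == s, item].mean()
                        if np.any(rest == s) else np.nan for s in levels])
        return levels, icc

    levels, icc = empirical_icc(resp, item=0)
    print(np.round(icc[::5], 2))  # proportion correct rises with rest-score
    ```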

  18. A Comparison of Two Scoring Methods for an Automated Speech Scoring System

    ERIC Educational Resources Information Center

    Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David

    2012-01-01

    This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
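
    A minimal sketch of the kind of comparison described, using scikit-learn stand-ins; a regression tree is used here in place of the paper's classification trees, and the synthetic features stand in for real speech-derived ones.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(6)

    # Synthetic stand-ins for speech features (fluency, pronunciation, ...)
    X = rng.standard_normal((400, 6))
    weights = np.array([0.8, 0.5, 0.3, 0.2, 0.1, 0.0])
    human_score = X @ weights + 0.5 * rng.standard_normal(400)

    for name, model in [("multiple regression", LinearRegression()),
                        ("tree (depth 4)", DecisionTreeRegressor(max_depth=4))]:
        r2 = cross_val_score(model, X, human_score, cv=5, scoring="r2").mean()
        print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
    ```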

  19. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the national statistics provider, are limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators yields high standard errors and the resulting analyses are unreliable. To solve this problem, an estimation method that achieves better accuracy by combining survey data with auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method under the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper applies EBLUP with the REML procedure to estimate poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean squared error) in order to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduces the MSE in small area estimation.
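
    A compact sketch of the area-level (Fay-Herriot) form of EBLUP with REML estimation of the area-effect variance; this is a generic illustration with the bootstrap MSE step omitted, not the authors' code, and the toy district data are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def fay_herriot_eblup(y, X, D):
        """Area-level EBLUP with REML estimation of the area-effect variance.

        y : direct survey estimates per small area
        X : area-level auxiliary covariates (intercept column included)
        D : known sampling variances of the direct estimates
        """
        def neg_restricted_loglik(sv2):
            v_inv = 1.0 / (sv2 + D)                   # V is diagonal
            xtvx = X.T @ (v_inv[:, None] * X)
            beta = np.linalg.solve(xtvx, X.T @ (v_inv * y))
            resid = y - X @ beta
            return 0.5 * (np.sum(np.log(sv2 + D))
                          + np.linalg.slogdet(xtvx)[1]
                          + np.sum(v_inv * resid ** 2))

        sv2 = minimize_scalar(neg_restricted_loglik, method="bounded",
                              bounds=(1e-8, 10.0 * float(D.mean()))).x
        v_inv = 1.0 / (sv2 + D)
        beta = np.linalg.solve(X.T @ (v_inv[:, None] * X), X.T @ (v_inv * y))
        gamma = sv2 / (sv2 + D)                       # shrinkage weights
        return gamma * y + (1.0 - gamma) * (X @ beta), sv2

    # Toy example: 15 districts, one auxiliary variable
    rng = np.random.default_rng(7)
    X = np.column_stack([np.ones(15), rng.uniform(0.0, 1.0, 15)])
    truth = X @ np.array([2.0, 1.5]) + 0.3 * rng.standard_normal(15)
    D = rng.uniform(0.05, 0.4, 15)                    # sampling variances
    y = truth + np.sqrt(D) * rng.standard_normal(15)  # direct estimates
    eblup, sv2 = fay_herriot_eblup(y, X, D)
    print("REML estimate of sigma_v^2:", round(float(sv2), 3))
    ```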

  20. Reduction of atmospheric disturbances in PSInSAR measure technique based on ENVISAT ASAR data for Erta Ale Ridge

    NASA Astrophysics Data System (ADS)

    Kopeć, Anna

    2018-01-01

    Interferometric synthetic aperture radar (InSAR) is increasingly popular for investigating surface deformation associated with volcanism, earthquakes, landslides, and post-mining subsidence. Measurement accuracy depends on many factors, including surface, temporal and geometric decorrelation and orbit errors, but the largest challenge is tropospheric delay. Spatial and temporal variations in temperature, pressure, and relative humidity are responsible for tropospheric delays. Many correction methods have been developed, but researchers are still searching for one that corrects interferograms consistently across different regions and times. This article examines empirical phase-based methods, spectrometer measurements and weather-model-based corrections, applied to ENVISAT ASAR data for the Erta Ale Ridge in the Afar Depression, East Africa.
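
    The most common empirical phase-based correction is a linear regression of interferometric phase against elevation, which removes the stratified component of the tropospheric delay. A minimal sketch, with the variable names and the synthetic check as illustrative assumptions rather than the article's exact procedure:

    ```python
    import numpy as np

    def phase_elevation_correction(phase, dem, mask=None):
        """Linear phase-elevation regression (stratified tropospheric delay).

        phase : unwrapped interferometric phase, 2-D array (radians)
        dem   : elevation per pixel, 2-D array (metres)
        mask  : boolean array marking pixels assumed free of deformation
        Returns the corrected phase and the fitted rad/m coefficient.
        """
        if mask is None:
            mask = np.ones(phase.shape, dtype=bool)
        k, c = np.polyfit(dem[mask], phase[mask], 1)
        return phase - (k * dem + c), k

    # Synthetic check: a pure stratified delay should be removed almost fully
    rng = np.random.default_rng(8)
    dem = np.linspace(0.0, 1500.0, 200 * 200).reshape(200, 200)
    phase = 0.004 * dem + 0.1 * rng.standard_normal((200, 200))
    corrected, k = phase_elevation_correction(phase, dem)
    print("estimated rad/m:", round(float(k), 4))
    print("residual phase std:", round(float(corrected.std()), 3))
    ```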
