2001-10-25
Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18, 5, 17, 6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for…
ERIC Educational Resources Information Center
Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse
2015-01-01
The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…
Jiang, Wei; Yu, Weichuan
2017-02-15
In genome-wide association studies (GWASs) of common diseases/traits, we often analyze multiple GWASs with the same phenotype together to discover associated genetic variants with higher power. Since it is difficult to access data with detailed individual measurements, summary-statistics-based meta-analysis methods have become popular to jointly analyze datasets from multiple GWASs. In this paper, we propose a novel summary-statistics-based joint analysis method based on controlling the joint local false discovery rate (Jlfdr). We prove that our method is the most powerful summary-statistics-based joint analysis method when controlling the false discovery rate at a certain level. In particular, the Jlfdr-based method achieves higher power than commonly used meta-analysis methods when analyzing heterogeneous datasets from multiple GWASs. Simulation experiments demonstrate the superior power of our method over meta-analysis methods. Also, our method discovers more associations than meta-analysis methods from empirical datasets of four phenotypes. The R-package is available at: http://bioinformatics.ust.hk/Jlfdr.html. Contact: eeyu@ust.hk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
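As context for the comparison above, the kind of baseline the Jlfdr method is measured against is the standard fixed-effect inverse-variance meta-analysis of summary statistics. A minimal Python sketch (all numbers hypothetical, not from the paper):

```python
import numpy as np

def inverse_variance_meta(betas, ses):
    """Fixed-effect inverse-variance-weighted meta-analysis of
    per-study effect estimates (betas) and standard errors (ses)."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                      # inverse-variance weights
    beta_meta = np.sum(w * betas) / np.sum(w)
    se_meta = np.sqrt(1.0 / np.sum(w))
    z = beta_meta / se_meta               # combined z-statistic
    return beta_meta, se_meta, z

# Example: the same variant observed in three GWASs
print(inverse_variance_meta([0.12, 0.09, 0.15], [0.04, 0.05, 0.06]))
```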
A Method for Cognitive Task Analysis
1992-07-01
A method for cognitive task analysis is described based on the notion of 'generic tasks'. The method distinguishes three layers of analysis. At the…model for applied areas such as the development of knowledge-based systems and training, are discussed. Problem solving, Cognitive Task Analysis, Knowledge, Strategies.
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
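As a rough illustration of what a contingency strength estimate computes, here is a hedged Python sketch of one common formulation, the difference between the probability of reinforcement given a response and given no response; it is illustrative only and not any of the four specific methods compared in the study:

```python
import numpy as np

def contingency_strength(responses, reinforcers):
    """Illustrative interval-based contingency estimate:
    P(reinforcer | response interval) - P(reinforcer | no-response interval).
    `responses` and `reinforcers` are binary arrays, one entry per interval."""
    r = np.asarray(responses, bool)
    s = np.asarray(reinforcers, bool)
    p_given_r = s[r].mean() if r.any() else 0.0
    p_given_not_r = s[~r].mean() if (~r).any() else 0.0
    return p_given_r - p_given_not_r

# Reinforcers mostly follow response intervals: positive contingency
print(contingency_strength([1, 0, 1, 0, 1], [1, 0, 1, 0, 0]))
```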
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the fold change criteria of the Significance Analysis of Microarrays method are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold change threshold, since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
Traditional environmental mold analysis is based on microscopic observation and counting of mold structures collected from the air on a sticky surface, or on culturing of molds on growth media for identification and quantification. A DNA-based method of mold analysis called mol...
Heading in the right direction: thermodynamics-based network analysis and pathway engineering.
Ataman, Meric; Hatzimanikatis, Vassily
2015-12-01
Thermodynamics-based network analysis through the introduction of thermodynamic constraints in metabolic models allows a deeper analysis of metabolism and guides pathway engineering. The number and the areas of applications of thermodynamics-based network analysis methods have been increasing in the last ten years. We review recent applications of these methods, identify the areas where such analysis can contribute significantly, and outline the needs for future developments. We find that organisms with multiple compartments and extremophiles present challenges for modeling and thermodynamics-based flux analysis. The evolution of current and new methods must also address the issues of the multiple alternatives in flux directionalities and the uncertainties and partial information from analytical methods. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
A method for data base management and analysis for wind tunnel data
NASA Technical Reports Server (NTRS)
Biser, Aileen O.
1987-01-01
To respond to the need for improved data base management and analysis capabilities for wind-tunnel data at the Langley 16-Foot Transonic Tunnel, research was conducted into current methods of managing wind-tunnel data and a method was developed as a solution to this need. This paper describes the development of the data base management and analysis method for wind-tunnel data. The design and implementation of the software system are discussed and examples of its use are shown.
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
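The MPP search can be sketched as minimizing ||u|| subject to g(u) = 0 in standard-normal space, with the FORM estimate Φ(−β) recovered from the minimum distance β. A minimal Python illustration with a hypothetical linear limit state (not one of the paper's examples):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def form_beta(g, x0):
    """FORM: find the most probable point (MPP) by minimizing ||u||^2
    subject to g(u) = 0 in standard-normal space; beta = ||u*||."""
    cons = {"type": "eq", "fun": g}
    res = minimize(lambda u: u @ u, x0, constraints=[cons])
    beta = np.sqrt(res.fun)
    return beta, norm.cdf(-beta)   # safety index and failure probability

# Hypothetical limit state g(u) = 3 - u1 - u2 in standard-normal space
beta, pf = form_beta(lambda u: 3.0 - u[0] - u[1], x0=np.array([1.0, 1.0]))
print(beta, pf)   # beta ~ 3/sqrt(2), pf = Phi(-beta)
```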
NASA Astrophysics Data System (ADS)
Klügel, J.
2006-12-01
Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures like dams, transport infrastructures, chemical plants and nuclear power plants. For many applications beyond the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows a meaningful quantitative risk analysis. A new method for a probabilistic scenario-based seismic risk analysis has been developed, based on a probabilistic extension of proven deterministic methods like the MCE methodology. The input data required for the method are entirely based on the information which is necessary to perform any meaningful seismic hazard analysis. The method follows the probabilistic risk analysis approach common for applications in nuclear technology, developed originally by Kaplan & Garrick (1981). It is based on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantages of the method in comparison with traditional PSHA are (1) its flexibility, allowing the use of different probabilistic models for earthquake occurrence as well as the incorporation of advanced physical models into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion to formulate different risk goals. The method was applied for the evaluation of the risk of production interruption losses of a nuclear power plant during its residual lifetime.
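For step (2), the frequency of initiating events per magnitude class is commonly derived from a Gutenberg-Richter recurrence law; a minimal Python sketch with hypothetical regional parameters a and b (the abstract does not prescribe these values):

```python
import numpy as np

def class_frequencies(mag_edges, a=4.0, b=1.0):
    """Annual frequency of events falling in each magnitude class,
    from a Gutenberg-Richter recurrence law log10 N(>=M) = a - b*M.
    `a` and `b` are hypothetical regional parameters."""
    edges = np.asarray(mag_edges, float)
    n_exceed = 10.0 ** (a - b * edges)      # annual rate of M >= edge
    return n_exceed[:-1] - n_exceed[1:]     # rate within each class

# Size classes [5,6), [6,7), [7,8)
print(class_frequencies([5.0, 6.0, 7.0, 8.0]))
```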
Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J
2013-05-01
Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang
2018-06-01
In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)), and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface correction term adopted in the DDG(IC) and SDDG methods plays a key role in preserving adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) and SDDG methods can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications of the underlying output functionals. The performance of these three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential in the applications of adjoint-based adaptation for simulating compressible flows.
Retinal status analysis method based on feature extraction and quantitative grading in OCT images.
Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri
2016-07-22
Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, a critical part of eye fundus diagnosis. This study analyzed 300 OCT images acquired by an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). First, a normal retinal reference model based on retinal boundaries was presented. Subsequently, two kinds of quantitative methods based on geometric features and morphological features were proposed. The paper puts forward a retinal abnormality grading decision-making method which was used in the actual analysis and evaluation of multiple OCT images, and shows the detailed analysis process on four retinal OCT images with different degrees of abnormality. The final grading results verified that the analysis method can distinguish abnormal severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status. This paper focuses on a retinal status automatic analysis method based on feature extraction and quantitative grading in OCT images. The proposed method can obtain the parameters and features that are associated with retinal morphology. Quantitative analysis and evaluation of these features are combined with the reference model, which can realize abnormality judgment on the target image and provide a reference for disease diagnosis.
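For reference, the reported sensitivity and specificity follow directly from confusion-matrix counts; a small Python sketch with hypothetical counts chosen only to land near the published 0.94/0.92 figures:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for 150 test images (75 abnormal, 75 normal),
# roughly consistent with the reported 0.94 / 0.92
print(sensitivity_specificity(tp=71, fn=4, tn=69, fp=6))
```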
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
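The core idea, that only the sign of the surrogate's prediction matters for the failure indicator, can be sketched as follows in Python, with a stand-in analytic function in place of a trained kriging model and plain (non-interval) Monte Carlo for simplicity; the evidence-theory bounds would repeat this over focal elements:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_g(x):
    """Stand-in for a trained kriging predictor of the performance
    function g(x); only the sign of the prediction is used."""
    return 3.0 - np.linalg.norm(x, axis=1)   # hypothetical g

# Plain Monte Carlo over precise inputs: the failure probability is
# the fraction of samples whose predicted g is non-positive.
x = rng.normal(size=(100_000, 2))
pf = np.mean(surrogate_g(x) <= 0.0)
print(f"estimated failure probability: {pf:.4f}")
```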
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element modeling engineering practice.
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2011-09-01
A new multi-level analysis method that introduces super-element modeling, derived from the multi-level analysis method first proposed by O. F. Hughes, is proposed in this paper to address the high time cost of adopting a rational-based optimal design method in ship structural design. The method was verified by its effective application to the optimization of the mid-ship section of a container ship. A full 3-D FEM model of a ship under static and quasi-static loads was used as the analysis object for evaluating the structural performance of the mid-ship module, including static strength and buckling performance. Research results reveal that this new method can substantially reduce the computational cost of the rational-based optimization problem without decreasing its accuracy, which increases the feasibility and economic efficiency of using a rational-based optimal design method in ship structural design.
A catalog of automated analysis methods for enterprise models.
Florez, Hector; Sánchez, Mario; Villalobos, Jorge
2016-01-01
Enterprise models are created for documenting and communicating the structure and state of the business and information technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated, and omissions or miscalculations are very likely. This situation has fostered research into automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.
Fusing Symbolic and Numerical Diagnostic Computations
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs (one implementing a numerical analysis method, the other implementing a symbolic analysis method) into a unified event-based decision analysis software system for real-time detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAM), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference-engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.
Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy
NASA Astrophysics Data System (ADS)
Sharma, Sanjib
2017-08-01
Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
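As a concrete instance of the kind of algorithm surveyed, here is a minimal random-walk Metropolis sampler in Python, the simplest MCMC method; this is a sketch for illustration, not code from the review's software package:

```python
import numpy as np

def metropolis(log_post, x0, n_steps=10_000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()        # propose a local move
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Example: sample a standard normal posterior
chain = metropolis(lambda t: -0.5 * t**2, x0=0.0)
print(chain[2000:].mean(), chain[2000:].std())   # ~0, ~1 after burn-in
```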
Image analysis and modeling in medical image computing. Recent developments and advances.
Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T
2012-01-01
Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in future.
NASA Astrophysics Data System (ADS)
Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang
2018-04-01
Synthetic aperture radar (SAR) images are independent of atmospheric conditions, which makes them an ideal image source for change detection. Existing methods directly analyze all the regions in the speckle-noise-contaminated difference image, so their performance is easily affected by small noisy regions. In this paper, we propose a novel framework for saliency-guided change detection based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, the saliency detection method based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze pixels in the changed-region candidates. Thus, the final change map can be obtained by classifying these pixels into the changed or unchanged class. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
Laser-based methods for the analysis of low molecular weight compounds in biological matrices.
Kiss, András; Hopfgartner, Gérard
2016-07-15
Laser-based desorption and/or ionization methods play an important role in the analysis of low molecular weight compounds (LMWCs) because they allow direct analysis with high-throughput capabilities. In recent years there have been several improvements in ionization methods, with the emergence of novel atmospheric ion sources such as laser ablation electrospray ionization or laser diode thermal desorption with atmospheric pressure chemical ionization, and in sample preparation methods, with the development of new matrix compounds for matrix-assisted laser desorption/ionization (MALDI). Also, the combination of ion mobility separation with laser-based ionization methods is starting to gain popularity with access to commercial systems. These developments have been driven mainly by the emergence of new application fields such as MS imaging and non-chromatographic analytical approaches for quantification. This review aims to present these new developments in laser-based methods for the analysis of low molecular weight compounds by MS and several potential applications. Copyright © 2016 Elsevier Inc. All rights reserved.
A Cyber-Attack Detection Model Based on Multivariate Analyses
NASA Astrophysics Data System (ADS)
Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi
In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and the cluster analysis method. We quantify the observed qualitative audit event sequence via the quantification method IV, and group similar audit event sequences based on the cluster analysis. It is shown in simulation experiments that our model can improve cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.
NASA Astrophysics Data System (ADS)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2010-04-01
For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
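A common single-tone frequency estimator of the kind such comparisons cover is an FFT peak refined by parabolic interpolation; a hedged Python sketch (illustrative, not necessarily one of the specific techniques benchmarked in the paper):

```python
import numpy as np

def single_tone_freq(x, fs):
    """Estimate a single-tone frequency from the FFT peak, refined by
    parabolic interpolation of the log-magnitude around the peak."""
    X = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    k = np.argmax(X)
    # Parabolic interpolation using the peak bin and its neighbors
    a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) * fs / x.size

fs = 1000.0
t = np.arange(2048) / fs
print(single_tone_freq(np.cos(2 * np.pi * 123.4 * t), fs))  # ~123.4
```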
Application of a data-mining method based on Bayesian networks to lesion-deficit analysis
NASA Technical Reports Server (NTRS)
Herskovits, Edward H.; Gerring, Joan P.
2003-01-01
Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.
An advanced analysis method of initial orbit determination with too short arc data
NASA Astrophysics Data System (ADS)
Li, Binzhe; Fang, Li
2018-02-01
This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations, and as a result classical initial orbit determination algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method for initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method. A genetic algorithm is also used to add constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.
ERIC Educational Resources Information Center
Chatzarakis, G. E.
2009-01-01
This paper presents a new pedagogical method for nodal analysis optimization based on the use of virtual current sources, applicable to any linear electric circuit (LEC), regardless of its complexity. The proposed method leads to straightforward solutions, mostly arrived at by inspection. Furthermore, the method is easily adapted to computer…
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
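The Huber-type weighting can be illustrated with a single-level regression fitted by iteratively reweighted least squares; a Python sketch under that simplification (the article's actual model is two-level, so this is illustrative only):

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """M-estimation of linear regression coefficients with Huber-type
    weights via iteratively reweighted least squares (IRLS)."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale
        u = r / (s + 1e-12)
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # Huber weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=200)  # heavy-tailed errors
print(huber_irls(x, y))   # close to [1, 2] despite outliers
```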
PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURES BASED ON THE ΔB METHOD
NASA Astrophysics Data System (ADS)
Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao
Landslide or slope failure is a three-dimensional movement phenomenon, and thus a three-dimensional treatment makes it easier to understand stability. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and is equivalent to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. This analysis can be conducted using conventional spreadsheets or two-dimensional slope stability computational software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above-stated analysis. This paper also presents the transverse distributive method of restraining force used for planning ground stabilization on the basis of the example analysis.
EXPLORING FUNCTIONAL CONNECTIVITY IN FMRI VIA CLUSTERING.
Venkataraman, Archana; Van Dijk, Koene R A; Buckner, Randy L; Golland, Polina
2009-04-01
In this paper we investigate the use of data driven clustering methods for functional connectivity analysis in fMRI. In particular, we consider the K-Means and Spectral Clustering algorithms as alternatives to the commonly used Seed-Based Analysis. To enable clustering of the entire brain volume, we use the Nyström Method to approximate the necessary spectral decompositions. We apply K-Means, Spectral Clustering and Seed-Based Analysis to resting-state fMRI data collected from 45 healthy young adults. Without placing any a priori constraints, both clustering methods yield partitions that are associated with brain systems previously identified via Seed-Based Analysis. Our empirical results suggest that clustering provides a valuable tool for functional connectivity analysis.
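A minimal Python sketch of the K-Means variant of this idea: voxel time courses are normalized so that Euclidean distance tracks (1 − correlation), then clustered. Synthetic data stands in for the resting-state scans, and the Nyström-approximated spectral clustering step is omitted:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical resting-state data: 5000 voxels x 200 time points
rng = np.random.default_rng(0)
ts = rng.normal(size=(5000, 200))

# Normalize each voxel's time course so that squared Euclidean distance
# between rows is monotone in (1 - correlation), the usual
# functional-connectivity measure.
ts -= ts.mean(axis=1, keepdims=True)
ts /= np.linalg.norm(ts, axis=1, keepdims=True)

labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(ts)
print(np.bincount(labels))   # voxels per putative brain system
```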
NASA Technical Reports Server (NTRS)
Hill, Geoffrey A.; Olson, Erik D.
2004-01-01
Due to the growing problem of noise in today's air transportation system, a need has arisen to incorporate noise considerations in the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. This methodology is demonstrated using a noise model of a notional 300-passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels, and the resulting trade space is explored. Methods demonstrated include single-point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.
The Analysis of Organizational Diagnosis Based on the Six Box Model in Universities
ERIC Educational Resources Information Center
Hamid, Rahimi; Siadat, Sayyed Ali; Reza, Hoveida; Arash, Shahin; Ali, Nasrabadi Hasan; Azizollah, Arbabisarjou
2011-01-01
Purpose: The analysis of organizational diagnosis based on the six box model at universities. Research method: The research method was descriptive-survey. The statistical population consisted of 1544 faculty members of universities, from which 218 persons were chosen as the sample through stratified random sampling. The research instrument was an organizational…
Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J
2014-01-01
The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.
Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J
2017-12-01
qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E) · Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
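A minimal Python sketch of the efficiency-weighted Cq computation and a paired-style comparison carried out in the log scale, with hypothetical Cq and E values (illustrative numbers, not the paper's data):

```python
import numpy as np

def ew_cq(E, Cq):
    """Efficiency-weighted Cq: log10(E) * Cq, proportional to
    -log10(initial template amount) up to an additive constant."""
    return np.log10(E) * np.asarray(Cq, float)

# Hypothetical paired samples: target gene minus one reference gene,
# each with its own reaction efficiency (E = 2 means perfect doubling)
ctrl = ew_cq(1.95, [22.1, 22.4]) - ew_cq(1.98, [18.0, 18.2])
treat = ew_cq(1.95, [20.3, 20.0]) - ew_cq(1.98, [18.1, 17.9])

# Stay in the log scale as long as possible, convert only at the end
log10_ratio = ctrl.mean() - treat.mean()   # log10 fold change
print(10 ** log10_ratio)                   # relative expression ratio
```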
Modified multidimensional scaling approach to analyze financial markets.
Yin, Yi; Shang, Pengjian
2014-06-01
The detrended cross-correlation coefficient (σDCCA) and dynamic time warping (DTW) are introduced as dissimilarity measures, while multidimensional scaling (MDS) is employed to map the dissimilarities between the daily price returns of 24 stock markets. We first propose MDS based on σDCCA dissimilarity and MDS based on DTW dissimilarity, while MDS based on Euclidean dissimilarity is also employed to provide a reference for comparison. We apply these methods in order to further visualize the clustering between stock markets. Moreover, we confront MDS with an alternative visualization method, the "Unweighed Average" clustering method; both are employed based on the same dissimilarity. Through the results, we find that MDS gives us a more intuitive mapping for observing stable or emerging clusters of stock markets with similar behavior, and that the MDS analysis based on σDCCA dissimilarity provides clearer, more detailed, and more accurate information on the classification of the stock markets than the MDS analysis based on Euclidean dissimilarity. The MDS analysis based on DTW dissimilarity yields particularly interesting knowledge about the correlations between stock markets, reflects more abundant results on the clustering of stock markets, and is much more intensive than the MDS analysis based on Euclidean dissimilarity. In addition, the graphs originating from applying the MDS methods based on σDCCA dissimilarity and DTW dissimilarity may also guide the construction of multivariate econometric models.
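Classical MDS itself is a short computation once a dissimilarity matrix is in hand; a Python sketch with a hypothetical 4×4 matrix standing in for the σDCCA- or DTW-derived dissimilarities between markets:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points in k dimensions from a
    symmetric dissimilarity matrix D via double centering."""
    D = np.asarray(D, float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]         # top-k eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Hypothetical 4x4 dissimilarity matrix (e.g., 1 - sigma_DCCA or a
# normalized DTW distance between market return series)
D = np.array([[0.0, 0.2, 0.8, 0.9],
              [0.2, 0.0, 0.7, 0.8],
              [0.8, 0.7, 0.0, 0.3],
              [0.9, 0.8, 0.3, 0.0]])
print(classical_mds(D))   # two clusters separate along the first axis
```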
Basic gait analysis based on continuous wave radar.
Zhang, Jun
2012-09-01
A gait analysis method based on continuous wave (CW) radar is proposed in this paper. Time-frequency analysis is used to analyze the radar micro-Doppler echo from walking humans, and the relationships between the time-frequency spectrogram and human biological gait are discussed. The methods for extracting gait parameters from the spectrogram are studied in depth, and experiments on more than twenty subjects have been performed to acquire the radar gait data. The gait parameters are calculated and compared, and the gait differences between men and women are presented based on the experimental data and extracted features. Gait analysis based on CW radar will provide a new method for clinical diagnosis and therapy. Copyright © 2012 Elsevier B.V. All rights reserved.
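A hedged Python sketch of the time-frequency step on a simulated micro-Doppler echo (the synthetic signal and all parameter values are hypothetical, not the paper's measured data):

```python
import numpy as np
from scipy.signal import spectrogram

# Simulated CW-radar echo: torso return with a sinusoidally varying
# micro-Doppler shift, as produced by limb motion during gait
fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
f_torso, f_swing, f_gait = 60.0, 40.0, 1.0   # hypothetical values, Hz
phase = (2 * np.pi * f_torso * t
         - (f_swing / f_gait) * np.cos(2 * np.pi * f_gait * t))
echo = np.cos(phase) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# Time-frequency spectrogram; the micro-Doppler striations repeat at
# the gait cycle rate, from which cadence can be read off
f, tt, Sxx = spectrogram(echo, fs=fs, nperseg=256, noverlap=192)
print(Sxx.shape)   # (frequency bins, time frames)
```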
A Statistical Discrimination Experiment for Eurasian Events Using a Twenty-Seven-Station Network
1980-07-08
to test the effectiveness of a multivariate method of analysis for distinguishing earthquakes from explosions. The data base for the experiment…the weight assigned to each variable whenever a new one is added. Jennrich, R. I. (1977). Stepwise discriminant analysis, in Statistical Methods for
NASA Astrophysics Data System (ADS)
Cai, Jianhua
2017-05-01
The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure, which are used to estimate the response function based on the HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated from the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting estimates minimise the estimation bias caused by the non-stationary characteristics of the MT data.
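The Hilbert spectral step that yields instantaneous amplitude and frequency can be sketched in Python as below; the preceding empirical mode decomposition of the MT record is assumed already done (e.g., by an EMD implementation such as PyEMD), and the test signal is synthetic:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(imf, fs):
    """Hilbert spectral step of the HHT: instantaneous amplitude and
    frequency of one intrinsic mode function (IMF)."""
    z = hilbert(imf)                          # analytic signal
    amp = np.abs(z)
    phase = np.unwrap(np.angle(z))
    freq = np.diff(phase) * fs / (2 * np.pi)  # Hz, length n-1
    return amp, freq

fs = 100.0
t = np.arange(0, 10, 1 / fs)
imf = np.cos(2 * np.pi * 2.0 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.2 * t))
amp, freq = instantaneous_attributes(imf, fs)
print(freq[50:-50].mean())   # ~2 Hz away from the edges
```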
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the "two-step" method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
Tan, Ming T; Liu, Jian-ping; Lao, Lixing
2012-08-01
Recently, proper use of the statistical methods in traditional Chinese medicine (TCM) randomized controlled trials (RCTs) has received increased attention. Statistical inference based on hypothesis testing is the foundation of clinical trials and evidence-based medicine. In this article, the authors described the methodological differences between literature published in Chinese and Western journals in the design and analysis of acupuncture RCTs and the application of basic statistical principles. In China, qualitative analysis method has been widely used in acupuncture and TCM clinical trials, while the between-group quantitative analysis methods on clinical symptom scores are commonly used in the West. The evidence for and against these analytical differences were discussed based on the data of RCTs assessing acupuncture for pain relief. The authors concluded that although both methods have their unique advantages, quantitative analysis should be used as the primary analysis while qualitative analysis can be a secondary criterion for analysis. The purpose of this paper is to inspire further discussion of such special issues in clinical research design and thus contribute to the increased scientific rigor of TCM research.
Peterson, Leif E
2002-01-01
CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816
Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G
2016-04-01
This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline based on a set of sampled basis vectors obtained from PCA applied over a previously composed continuous-spectra learning matrix. The parametric method, however, uses an ANN to filter out the baseline; previous studies have demonstrated that this method is one of the most effective for baseline removal. The evaluation of both methods was carried out by using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the one based on ANN both in terms of performance and simplicity. © The Author(s) 2016.
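One plausible reading of the PCA-based estimator is to fit the baseline as a least-squares combination of the leading principal components of a baseline-only learning matrix; a Python sketch under that reading (synthetic data; the published algorithm's details may differ):

```python
import numpy as np

def pca_baseline(train_baselines, spectrum, n_comp=5):
    """Estimate a continuous baseline as a least-squares combination of
    the top principal components of a baseline-only training matrix."""
    X = train_baselines - train_baselines.mean(axis=0)
    # Rows of Vt are the PCA basis vectors of the training baselines
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    B = np.vstack([np.ones(spectrum.size), Vt[:n_comp]]).T  # mean + PCs
    coef, *_ = np.linalg.lstsq(B, spectrum, rcond=None)
    baseline = B @ coef
    return spectrum - baseline, baseline

# Hypothetical data: slowly varying baselines plus one narrow peak
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 500)
train = np.array([a * x + b * x**2 for a, b in rng.normal(1, 0.3, (50, 2))])
spec = 0.9 * x + 1.1 * x**2 + np.exp(-0.5 * ((x - 0.4) / 0.01) ** 2)
corrected, base = pca_baseline(train, spec)
print(corrected.max())   # peak height approximately preserved
```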
A Method for the Analysis of Information Use in Source-Based Writing
ERIC Educational Resources Information Center
Sormunen, Eero; Heinstrom, Jannica; Romu, Leena; Turunen, Risto
2012-01-01
Introduction: Past research on source-based writing assignments has hesitated to scrutinize how students actually use information afforded by sources. This paper introduces a method for the analysis of text transformations from sources to texts composed. The method is aimed to serve scholars in building a more detailed understanding of how…
ERIC Educational Resources Information Center
Vieira, Rodrigo Drumond; Kelly, Gregory J.
2014-01-01
In this paper, we present and apply a multi-level method for discourse analysis in science classrooms. This method is based on the structure of human activity (activity, actions, and operations) and it was applied to study a pre-service physics teacher methods course. We argue that such an approach, based on a cultural psychological perspective,…
Using Formal Methods to Assist in the Requirements Analysis of the Space Shuttle GPS Change Request
NASA Technical Reports Server (NTRS)
DiVito, Ben L.; Roberts, Larry W.
1996-01-01
We describe a recent NASA-sponsored pilot project intended to gauge the effectiveness of using formal methods in Space Shuttle software requirements analysis. Several Change Requests (CR's) were selected as promising targets to demonstrate the utility of formal methods in this application domain. A CR to add new navigation capabilities to the Shuttle, based on Global Positioning System (GPS) technology, is the focus of this report. Carried out in parallel with the Shuttle program's conventional requirements analysis process was a limited form of analysis based on formalized requirements. Portions of the GPS CR were modeled using the language of SRI's Prototype Verification System (PVS). During the formal methods-based analysis, numerous requirements issues were discovered and submitted as official issues through the normal requirements inspection process. Shuttle analysts felt that many of these issues were uncovered earlier than would have occurred with conventional methods. We present a summary of these encouraging results and conclusions we have drawn from the pilot project.
An Improved Spectral Analysis Method for Fatigue Damage Assessment of Details in Liquid Cargo Tanks
NASA Astrophysics Data System (ADS)
Zhao, Peng-yuan; Huang, Xiao-ping
2018-03-01
The traditional spectral analysis method is based on a linear system assumption, so it introduces errors when calculating the fatigue damage of details in liquid cargo tanks, because the relationship between the dynamic stress and the ship acceleration is nonlinear. An improved spectral analysis method for the assessment of fatigue damage in details of a liquid cargo tank is proposed in this paper. Based on the assumptions that the wave process can be simulated by summing sinusoidal waves of different frequencies and that the stress process can be simulated by summing the stress processes induced by these sinusoidal waves, the stress power spectral density (PSD) is calculated by expanding the stress processes induced by the sinusoidal waves into Fourier series and adding the amplitudes of each harmonic component with the same frequency. This analysis method can take the nonlinear relationship into consideration, and the fatigue damage is then calculated based on the PSD of the stress. Taking an independent tank in an LNG carrier as an example, the accuracy of the improved spectral analysis method is shown to be much better than that of the traditional spectral analysis method, by comparing the calculated damage results with the results of the time domain method. The proposed spectral analysis method is more accurate in calculating the fatigue damage of details in ship liquid cargo tanks.
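Once the stress PSD is available, a fatigue damage estimate follows from its spectral moments; a Python sketch of the standard narrow-band (Rayleigh/Miner) formula with hypothetical S-N parameters, shown only to illustrate the PSD-to-damage step (the paper's own damage model may differ):

```python
import numpy as np
from scipy.special import gamma

def narrowband_damage(freq, psd, T, C=1e12, m=3.0):
    """Narrow-band fatigue damage from a one-sided stress PSD via
    Miner's rule with an S-N curve N = C * S**(-m) (S = stress range)
    and Rayleigh-distributed peaks. C and m are hypothetical values."""
    df = freq[1] - freq[0]                   # uniform grid assumed
    m0 = np.sum(psd) * df                    # zeroth spectral moment
    m2 = np.sum(psd * freq**2) * df          # second spectral moment
    nu0 = np.sqrt(m2 / m0)                   # zero up-crossing rate, Hz
    return nu0 * T / C * (2.0 * np.sqrt(2.0 * m0)) ** m * gamma(1.0 + m / 2.0)

# Hypothetical stress PSD concentrated near 0.1 Hz (wave-frequency band)
f = np.linspace(0.01, 1.0, 1000)
S = 1.0e4 * np.exp(-0.5 * ((f - 0.1) / 0.02) ** 2)   # MPa^2/Hz
print(narrowband_damage(f, S, T=25 * 365.25 * 24 * 3600))  # 25-year damage
```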
NASA Astrophysics Data System (ADS)
Keyport, Ren N.; Oommen, Thomas; Martha, Tapas R.; Sajinkumar, K. S.; Gierke, John S.
2018-02-01
A comparative analysis of landslides detected by pixel-based and object-oriented analysis (OOA) methods was performed using very high-resolution (VHR) remotely sensed aerial images for San Juan La Laguna, Guatemala, which witnessed widespread devastation during the 2005 Hurricane Stan. A 3-band orthophoto of 0.5 m spatial resolution together with a field-based inventory of 115 landslides were used for the analysis. A binary reference was assigned, with a value of zero for landslide and unity for non-landslide pixels. The pixel-based analysis was performed using unsupervised classification, which resulted in 11 different trial classes. Detection of landslides using OOA includes 2-step K-means clustering to eliminate regions based on brightness, and elimination of false positives using object properties such as rectangular fit, compactness, length/width ratio, mean difference of objects, and slope angle. Both overall accuracy and F-score for the OOA methods outperformed pixel-based unsupervised classification in both the landslide and non-landslide classes. The overall accuracy for OOA and pixel-based unsupervised classification was 96.5% and 94.3%, respectively, whereas the best F-scores for landslide identification for the OOA and pixel-based unsupervised methods were 84.3% and 77.9%, respectively. Results indicate that OOA is able to identify the majority of landslides with few false positives when compared to pixel-based unsupervised classification.
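For reference, the reported accuracy and F-score metrics are simple functions of the pixel confusion counts; a Python sketch with hypothetical counts (chosen to land near the published F-score, not taken from the study):

```python
def detection_scores(tp, fp, fn, tn):
    """Overall accuracy and F-score for the landslide class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, f_score

# Hypothetical pixel counts for an OOA change map
print(detection_scores(tp=8_400, fp=1_200, fn=1_900, tn=188_500))
```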
Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio
2018-01-01
Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Computer-based image analysis methods for chest computed tomography (CT) used in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, the density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary function, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologists for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
Expert system for web based collaborative CAE
NASA Astrophysics Data System (ADS)
Hou, Liang; Lin, Zusheng
2006-11-01
An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database and commercial FEA (finite element analysis) software. The architecture of the system is illustrated. In this system, the experts' experiences, theories and typical examples, and other related knowledge to be used in the pre-processing stage of FEA, are categorized into analysis-process knowledge and object knowledge. Then, an integrated knowledge model based on the object-oriented method and the rule-based method is described. The integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning is presented. Finally, the analysis process of this expert system in a web-based CAE application is illustrated, and an analysis example of a machine tool's column is given to prove the validity of the system.
Applied Cognitive Task Analysis (ACTA) Methodology
1997-11-01
experience-based cognitive skills. The primary goal of this project was to develop streamlined methods of Cognitive Task Analysis that would fill this need...We have made important progress in this direction. We have developed streamlined methods of Cognitive Task Analysis. Our evaluation study indicates...developed a CD-based stand-alone instructional package, which will make the Applied Cognitive Task Analysis (ACTA) tools widely accessible. A survey of the
NASA Astrophysics Data System (ADS)
Vieira, Rodrigo Drumond; Kelly, Gregory J.
2014-11-01
In this paper, we present and apply a multi-level method for discourse analysis in science classrooms. This method is based on the structure of human activity (activity, actions, and operations) and it was applied to study a pre-service physics teacher methods course. We argue that such an approach, based on a cultural psychological perspective, affords opportunities for analysts to perform a theoretically based detailed analysis of discourse events. Along with the presentation of analysis, we show and discuss how the articulation of different levels offers interpretative criteria for analyzing instructional conversations. We synthesize the results into a model for a teacher's practice and discuss the implications and possibilities of this approach for the field of discourse analysis in science classrooms. Finally, we reflect on how the development of teachers' understanding of their activity structures can contribute to forms of progressive discourse of science education.
Gait Analysis Using Wearable Sensors
Tao, Weijun; Liu, Tao; Zheng, Rencheng; Feng, Hutian
2012-01-01
Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on these sensors. After an introduction to the gait phases, the principles and features of wearable sensors used in gait analysis are provided. Gait analysis methods based on wearable sensors are divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and analysis methods, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications. PMID:22438763
General Framework for Meta-analysis of Rare Variants in Sequencing Association Studies
Lee, Seunggeun; Teslovich, Tanya M.; Boehnke, Michael; Lin, Xihong
2013-01-01
We propose a general statistical framework for meta-analysis of gene- or region-based multimarker rare variant association tests in sequencing association studies. In genome-wide association studies, single-marker meta-analysis has been widely used to increase statistical power by combining results via regression coefficients and standard errors from different studies. In the analysis of rare variants in sequencing studies, region-based multimarker tests are often used to increase power. We propose meta-analysis methods for commonly used gene- or region-based rare variant tests, such as burden tests and variance component tests. Because estimation of the regression coefficients of individual rare variants is often unstable or not feasible, the proposed method avoids this difficulty by instead calculating score statistics, which only require fitting the null model for each study, and then aggregating these score statistics across studies. Our proposed meta-analysis rare variant association tests are conducted based on study-specific summary statistics, specifically score statistics for each variant and between-variant covariance-type (linkage disequilibrium) relationship statistics for each gene or region. The proposed methods are able to incorporate different levels of heterogeneity of genetic effects across studies and are applicable to meta-analysis of multiple ancestry groups. We show that the proposed methods are essentially as powerful as joint analysis by directly pooling individual-level genotype data. We conduct extensive simulations to evaluate the performance of our methods by varying levels of heterogeneity across studies, and we apply the proposed methods to meta-analysis of rare variant effects in a multicohort study of the genetics of blood lipid levels. PMID:23768515
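For the simplest case in this framework, a burden-type test with homogeneous effects, aggregating study-level score statistics reduces to summing the scores and their null variances. The minimal sketch below (names illustrative; the published method also covers variance component tests, between-variant covariances, and heterogeneous effects) shows the idea:

```python
import numpy as np

def meta_burden_z(scores, variances):
    """Combine per-study burden score statistics U_k with their null
    variances V_k into a single meta-analysis Z statistic."""
    return np.sum(scores) / np.sqrt(np.sum(variances))
```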
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-03-04
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties associated with MCDA techniques may significantly impact the results, which may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods, in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory, are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely the analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-1D satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA revealed better performance in the reliability assessment. The WLC operation yielded poor results.
Adopting exergy analysis for use in aerospace
NASA Astrophysics Data System (ADS)
Hayes, David; Lone, Mudassir; Whidborne, James F.; Camberos, José; Coetzee, Etienne
2017-08-01
Thermodynamic analysis methods based on an exergy metric have been developed to improve the system efficiency of traditional heat-driven systems such as ground-based power plants and aircraft propulsion systems. In more recent years, however, interest in the topic has broadened to applying these second-law methods to the field of aerodynamics and to complete aerospace vehicles. Work to date is based on highly simplified structures, but such a method could be shown to benefit the highly conservative and risk-averse commercial aerospace sector. This review justifies how thermodynamic exergy analysis has the potential to facilitate a breakthrough in the optimization of aerospace vehicles based on a system of energy systems, through studying the exergy-based multidisciplinary design of future flight vehicles.
USDA-ARS?s Scientific Manuscript database
Analysis of DNA methylation patterns relies increasingly on sequencing-based profiling methods. The four most frequently used sequencing-based technologies are the bisulfite-based methods MethylC-seq and reduced representation bisulfite sequencing (RRBS), and the enrichment-based techniques methylat...
Measurement of edge residual stresses in glass by the phase-shifting method
NASA Astrophysics Data System (ADS)
Ajovalasit, A.; Petrucci, G.; Scafidi, M.
2011-05-01
Control and measurement of residual stress in glass is of great importance in the industrial field. Since glass is a birefringent material, residual stress analysis is based mainly on the photoelastic method. This paper considers two methods for the automated analysis of membrane residual stress in glass sheets, based on the phase-shifting concept in monochromatic light. In particular, these methods are automated versions of the goniometric compensation methods of Tardy and Sénarmont. The proposed methods can effectively replace the manual compensation methods (goniometric compensation of Tardy and Sénarmont, Babinet and Babinet-Soleil compensators) provided by current standards on the analysis of residual stresses in glass.
NASA Astrophysics Data System (ADS)
Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.
2018-03-01
A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency-analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The former aims to remove high-order frequency modulation (FM) so that the latter is able to infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised-demodulation method based on singular-value decomposition, the parametric time-frequency analysis method based on filtering, and the empirical-mode-decomposition-based method in recovering the amplitude and phase of superimposed components.
Analysis of the principal component algorithm in phase-shifting interferometry.
Vargas, J; Quiroga, J Antonio; Belenguer, T
2011-06-15
We recently presented a new asynchronous demodulation method for phase-sampling interferometry. The method is based on the principal component analysis (PCA) technique. In the former work, the PCA method was derived heuristically. In this work, we present an in-depth analysis of the PCA demodulation method.
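The core of the PCA demodulation idea fits in a few lines: after removing the temporal mean (the background term), the first two principal components of the interferogram stack approximate the two quadrature signals, and their ratio yields the wrapped phase. The sketch below is a minimal reading of that idea, not the authors' code, and leaves the sign/offset ambiguity of the recovered phase unresolved:

```python
import numpy as np

def pca_demodulate(interferograms):
    """Wrapped phase from an (N, H, W) stack of randomly
    phase-shifted interferograms via PCA."""
    n, h, w = interferograms.shape
    x = interferograms.reshape(n, -1)
    x = x - x.mean(axis=0)                 # remove the background term
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pc1 = vt[0].reshape(h, w)              # ~cosine quadrature signal
    pc2 = vt[1].reshape(h, w)              # ~sine quadrature signal
    return np.arctan2(pc2, pc1)            # wrapped phase (sign ambiguous)
```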
Trial Sequential Methods for Meta-Analysis
ERIC Educational Resources Information Center
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
Systems and methods for analyzing building operations sensor data
Mezic, Igor; Eisenhower, Bryan A.
2015-05-26
Systems and methods are disclosed for analyzing building sensor information and decomposing the information therein into a more manageable and more useful form. Certain embodiments integrate energy-based and spectral-based analysis methods with parameter sampling and uncertainty/sensitivity analysis to achieve a more comprehensive perspective of building behavior. The results of this analysis may be presented to a user via a plurality of visualizations and/or used to automatically adjust certain building operations. In certain embodiments, advanced spectral techniques, including Koopman-based operations, are employed to discern features from the collected building sensor data.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, in which the different uncertainty components are represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in the grouping strategies used for uncertainty components. The variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process or the flow and reactive transport processes). For test and demonstration purposes, the developed methodology was applied to a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty source formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and help decision-makers formulate policies and strategies.
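A pick-freeze Monte Carlo estimator is the usual way to compute such variance-based indices, and freezing a group of columns rather than a single one is what allows the flexible combinations of uncertainty components described above. The sketch below is a generic first-order estimator under independence assumptions, not the authors' Bayesian-network implementation; `model` and `sampler` are user-supplied placeholders:

```python
import numpy as np

def first_order_index(model, sampler, cols, n=100_000, seed=0):
    """Pick-freeze estimate of the first-order Sobol index for the
    parameter column(s) in `cols` (a single index or a group).

    model   -- vectorized f(X) over rows of an (n, d) array
    sampler -- sampler(n, rng) -> (n, d) independent parameter draws
    """
    rng = np.random.default_rng(seed)
    a, b = sampler(n, rng), sampler(n, rng)
    ab = b.copy()
    ab[:, cols] = a[:, cols]          # freeze the group at the 'a' values
    ya, yb, yab = model(a), model(b), model(ab)
    return np.mean(ya * (yab - yb)) / np.var(ya, ddof=1)
```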
Zhang, Xiaohua Douglas; Yang, Xiting Cindy; Chung, Namjin; Gates, Adam; Stec, Erica; Kunapuli, Priya; Holder, Dan J; Ferrer, Marc; Espeseth, Amy S
2006-04-01
RNA interference (RNAi) high-throughput screening (HTS) experiments carried out using large (>5000 short interfering [si]RNA) libraries generate a huge amount of data. In order to use these data to identify the most effective siRNAs tested, it is critical to adopt and develop appropriate statistical methods. To address the question of hit selection in RNAi HTS, we proposed a quartile-based method which is robust to outliers, true hits and nonsymmetrical data. We compared it with the more traditional tests, mean +/- k standard deviations (SD) and median +/- 3 median absolute deviations (MAD). The results suggested that the quartile-based method selected more hits than mean +/- k SD under the same preset error rate. The number of hits selected by median +/- k MAD was close to that of the quartile-based method. Further analysis suggested that the quartile-based method had the greatest power in detecting true hits, especially weak or moderate true hits. Our investigation also suggested that platewise analysis (determining effective siRNAs on a plate-by-plate basis) can adjust for systematic errors across plates, whereas an experimentwise analysis, in which effective siRNAs are identified in an analysis of the entire experiment, cannot. However, experimentwise analysis may detect a cluster of true positive hits placed together in one or several plates, while platewise analysis may not. To display hit selection results, we designed a specific figure called a plate-well series plot. We thus suggest the following strategy for hit selection in RNAi HTS experiments. First, choose the quartile-based method, or median +/- k MAD, for identifying effective siRNAs. Second, perform the chosen method experimentwise on transformed/normalized data, such as percentage inhibition, to check for the possibility of hit clusters. If a cluster of selected hits is observed, repeat the analysis on untransformed data to determine whether the cluster is due to an artifact in the data. If no clusters of hits are observed, select hits by performing platewise analysis on transformed data. Third, adopt the plate-well series plot to visualize both the data and the hit selection results, as well as to check for artifacts.
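The three decision boundaries compared above are easy to state side by side. The sketch below is illustrative only: the paper derives the quartile multiplier from the preset error rate, whereas c = 1.5 here is just the conventional Tukey fence:

```python
import numpy as np

def select_hits(values, method="quartile", k=3.0, c=1.5):
    """Flag wells whose normalized readout (e.g., percent inhibition)
    falls outside the chosen decision boundary."""
    v = np.asarray(values, dtype=float)
    if method == "mean_sd":                    # mean +/- k*SD
        lo, hi = v.mean() - k * v.std(ddof=1), v.mean() + k * v.std(ddof=1)
    elif method == "median_mad":               # median +/- k*MAD
        med = np.median(v)
        mad = np.median(np.abs(v - med))
        lo, hi = med - k * mad, med + k * mad
    else:                                      # quartile-based fences
        q1, q3 = np.percentile(v, [25, 75])
        lo, hi = q1 - c * (q3 - q1), q3 + c * (q3 - q1)
    return (v < lo) | (v > hi)                 # boolean hit mask
```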
Code of Federal Regulations, 2012 CFR
2012-01-01
... analysis method is based on accurate data and scientific principles and is statistically valid. The FAA... safety analysis must also meet the requirements for methods of analysis contained in appendices A and B... from an identical or similar launch if the analysis still applies to the later launch. (b) Method of...
Code of Federal Regulations, 2014 CFR
2014-01-01
... analysis method is based on accurate data and scientific principles and is statistically valid. The FAA... safety analysis must also meet the requirements for methods of analysis contained in appendices A and B... from an identical or similar launch if the analysis still applies to the later launch. (b) Method of...
Code of Federal Regulations, 2013 CFR
2013-01-01
... analysis method is based on accurate data and scientific principles and is statistically valid. The FAA... safety analysis must also meet the requirements for methods of analysis contained in appendices A and B... from an identical or similar launch if the analysis still applies to the later launch. (b) Method of...
ERIC Educational Resources Information Center
Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.
Current interest in the assessment of measurement equivalence emphasizes two methods of analysis: linear and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis, or CFA) and nonlinear (item-response-theory-based differential item functioning, or IRT-based…
Code of Federal Regulations, 2014 CFR
2014-07-01
... PM2.5 violations”) must be based on quantitative analysis using the applicable air quality models... either: (i) Quantitative methods that represent reasonable and common professional practice; or (ii) A...) The hot-spot demonstration required by § 93.116 must be based on quantitative analysis methods for the...
NASA Technical Reports Server (NTRS)
LOVE EUGENE S
1957-01-01
An analysis has been made of available experimental data to show the effects of the variables that are most predominant in determining base pressure at supersonic speeds. The analysis covers base pressures for two-dimensional airfoils and for bodies of revolution with and without stabilizing fins, and is restricted to turbulent boundary layers. The present status of available experimental information is summarized, as are the existing methods for predicting base pressure. A simple semiempirical method is presented for estimating base pressure. For two-dimensional bases, this method stems from an analogy established between the base-pressure phenomena and the peak pressure rise associated with separation of the boundary layer. An analysis made for axially symmetric flow indicates that the base pressure for bodies of revolution is subject to the same analogy. Based upon the methods presented, estimates are made of such effects as Mach number, angle of attack, boattailing, fineness ratio, and fins. These estimates give fair predictions of experimental results.
Probabilistic methods for rotordynamics analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.
1991-01-01
This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
A brain-region-based meta-analysis method utilizing the Apriori algorithm.
Niu, Zhendong; Nie, Yaoxin; Zhou, Qian; Zhu, Linlin; Wei, Jieyao
2016-05-18
Brain network connectivity modeling is a crucial method for studying the brain's cognitive functions. Meta-analyses can unearth reliable results from individual studies. Meta-analytic connectivity modeling is a connectivity analysis method based on regions of interest (ROIs) which showed that meta-analyses could be used to discover brain network connectivity. In this paper, we propose a new meta-analysis method that can be used to find network connectivity models based on the Apriori algorithm, and that has the potential to derive brain network connectivity models from activation information in the literature without requiring ROIs. This method first extracts activation information from experimental studies that use cognitive tasks of the same category, and then maps the activation information to the corresponding brain areas using the automated anatomical labeling (AAL) atlas, after which the activation rate of these brain areas is calculated. Finally, using these brain areas, a potential brain network connectivity model is calculated based on the Apriori algorithm. The present study used this method to conduct a mining analysis of the citations in a language review article by Price (Neuroimage 62(2):816-847, 2012). The results showed that the obtained network connectivity model was consistent with that reported by Price. The proposed method is helpful for finding brain network connectivity by mining the co-activation relationships among brain regions. Furthermore, the results of the co-activation relationship analysis can be used as a priori knowledge for a corresponding dynamic causal modeling analysis, possibly achieving a significant dimension-reduction effect and thus increasing the efficiency of the dynamic causal modeling analysis.
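The first two Apriori passes, frequent single regions and then frequently co-activated pairs, capture the essence of mining co-activation from a literature corpus. A minimal sketch (region labels and support threshold are illustrative, and the published method continues to larger itemsets):

```python
from itertools import combinations

def frequent_coactivations(studies, min_support=0.2):
    """studies: one set of activated region labels (e.g., AAL areas)
    per experiment; returns region pairs whose co-activation rate
    (support) reaches min_support."""
    n = len(studies)
    counts = {}
    for s in studies:
        for region in s:
            counts[region] = counts.get(region, 0) + 1
    # Apriori pruning: a pair can only be frequent if both members are
    frequent = {r for r, c in counts.items() if c / n >= min_support}
    pairs = {}
    for s in studies:
        for a, b in combinations(sorted(s & frequent), 2):
            pairs[(a, b)] = pairs.get((a, b), 0) + 1
    return {p: c / n for p, c in pairs.items() if c / n >= min_support}
```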
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
Nested PCR and RFLP analysis based on the 16S rRNA gene
USDA-ARS?s Scientific Manuscript database
Current phytoplasma detection and identification methods are primarily based on nested PCR followed by restriction fragment length polymorphism analysis and gel electrophoresis. This approach can potentially detect and differentiate all phytoplasmas, including those not previously described. The present ...
NASA Astrophysics Data System (ADS)
Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin
2017-02-01
Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.
Comparative study on gene set and pathway topology-based enrichment methods.
Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim
2015-10-22
Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the newer group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis, both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which show considerable gene overlaps with each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created from unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods; however, their sensitivity was lower. We conducted one of the first comprehensive comparative studies evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways, but they were not conclusively better in the other scenarios. This suggests that the simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gains and costs pathway topology information introduces into enrichment analysis. Both types of methods for enrichment analysis require further improvements in order to deal with the problem of pathway overlaps.
What Touched Your Heart? Collaborative Story Analysis Emerging From an Apsáalooke Cultural Context.
Hallett, John; Held, Suzanne; McCormick, Alma Knows His Gun; Simonds, Vanessa; Real Bird, Sloane; Martin, Christine; Simpson, Colleen; Schure, Mark; Turnsplenty, Nicole; Trottier, Coleen
2017-07-01
Community-based participatory research and decolonizing research share some recommendations for best practices for conducting research. One commonality is partnering on all stages of research; co-developing methods of data analysis is one stage with a deficit of partnering examples. We present a novel community-based and developed method for analyzing qualitative data within an Indigenous health study and explain incompatibilities of existing methods for our purposes and community needs. We describe how we explored available literature, received counsel from community Elders and experts in the field, and collaboratively developed a data analysis method consonant with community values. The method of analysis, in which interview/story remained intact, team members received story, made meaning through discussion, and generated a conceptual framework to inform intervention development, is detailed. We offer the development process and method as an example for researchers working with communities who want to keep stories intact during qualitative data analysis.
NASA Technical Reports Server (NTRS)
Shiau, Jyh-Jen; Wahba, Grace; Johnson, Donald R.
1986-01-01
A new method, based on partial spline models, is developed for including specified discontinuities in otherwise smooth two- and three-dimensional objective analyses. The method is appropriate for including tropopause height information in two- and three-dimensional temperature analyses, using the O'Sullivan-Wahba physical variational method for analysis of satellite radiance data, and may in principle be used in a combined variational analysis of observed, forecast, and climate information. A numerical method for its implementation is described, and a prototype two-dimensional analysis based on simulated radiosonde and tropopause height data is shown. The method may also be appropriate for other geophysical problems, such as modeling the ocean thermocline, fronts, discontinuities, etc.
Analysis of swimming performance: perceptions and practices of US-based swimming coaches.
Mooney, Robert; Corley, Gavin; Godfrey, Alan; Osborough, Conor; Newell, John; Quinlan, Leo Richard; ÓLaighin, Gearóid
2016-01-01
In elite swimming, a broad range of methods are used to assess performance, inform coaching practices and monitor athletic progression. The aim of this paper was to examine the performance analysis practices of swimming coaches and to explore the reasons behind the decisions that coaches take when analysing performance. Survey data were analysed from 298 Level 3 competitive swimming coaches (245 male, 53 female) based in the United States. Results were compiled to provide a generalised picture of practices and perceptions and to examine key emerging themes. It was found that a disparity exists between the importance swim coaches place on biomechanical analysis of swimming performance and the types of analyses that are actually conducted. Video-based methods are most frequently employed, with over 70% of coaches using these methods at least monthly, with analyses being mainly qualitative in nature rather than quantitative. Barriers to the more widespread use of quantitative biomechanical analysis in elite swimming environments were explored. Constraints include time, cost and availability of resources, but other factors such as sources of information on swimming performance and analysis and control over service provision are also discussed, with particular emphasis on video-based methods and emerging sensor-based technologies.
Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin
2015-03-01
We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to the micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is thus possible to quantify VBIC and VA for absorbable implants by micro-CT analysis with a region-based segmentation method.
NASA Technical Reports Server (NTRS)
McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.
2012-01-01
This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and cg location and requirements on adaptor stiffnesses while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis where payload and payload adaptor parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling-based techniques. For contrast, some MPP-based approaches are also examined.
López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón
2014-01-15
Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed on mixture signals recorded by monitoring systems. These mixed signals hinder the analysis of individual sources, which is useful for taking actions to reduce and control environmental noise. This paper aims at separating the individual noise sources from recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis for improving results obtained in the monitoring and analysis of common noise sources in urban areas. The method is validated through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources were used to acquire mixture signals with a microphone array in semi-controlled environments. The developed method demonstrated great performance improvements in the identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.
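For the instantaneous-mixture part of the problem, an off-the-shelf ICA decomposition already illustrates the separation step. The sketch below uses scikit-learn's FastICA on time-domain microphone-array data and is only a simplified stand-in: the paper's method additionally performs blind deconvolution and works on wavelet-domain coefficients:

```python
from sklearn.decomposition import FastICA

def separate_sources(x_mixed, n_sources):
    """x_mixed: (n_samples, n_microphones) array of mixed recordings;
    returns an (n_samples, n_sources) array of estimated sources."""
    ica = FastICA(n_components=n_sources, random_state=0)
    return ica.fit_transform(x_mixed)   # one estimated source per column
```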
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dréan, Gaël; Acosta, Oscar, E-mail: Oscar.Acosta@univ-rennes1.fr; Simon, Antoine
2016-06-15
Purpose: Recent studies revealed a trend toward voxelwise population analysis in order to understand the local dose/toxicity relationships in prostate cancer radiotherapy. Such approaches require, however, an accurate interindividual mapping of the anatomies and 3D dose distributions toward a common coordinate system. This step is challenging due to the high interindividual variability. In this paper, the authors propose a method designed for interindividual nonrigid registration of the rectum and dose mapping for population analysis. Methods: The method is based on the computation of a normalized structural description of the rectum using a Laplacian-based model. This description takes advantage of the tubular structure of the rectum and its centerline to be embedded in a nonrigid registration-based scheme. The performances of the method were evaluated on 30 individuals treated for prostate cancer in a leave-one-out cross validation. Results: Performance was measured using classical metrics (Dice score and Hausdorff distance), along with new metrics devised to better assess dose mapping in relation with structural deformation (dose-organ overlap). Considering these scores, the proposed method outperforms intensity-based and distance maps-based registration methods. Conclusions: The proposed method allows for accurately mapping interindividual 3D dose distributions toward a single anatomical template, opening the way for further voxelwise statistical analysis.
An approximation method for configuration optimization of trusses
NASA Technical Reports Server (NTRS)
Hansen, Scott R.; Vanderplaats, Garret N.
1988-01-01
Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the locations of the joints. Constraints on member stresses and Euler buckling are imposed, and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first-order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
ERIC Educational Resources Information Center
Çokluk, Ömay; Koçak, Duygu
2016-01-01
In this study, the number of factors obtained from parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared to that of the factors obtained from eigenvalue and scree plot--two traditional methods for determining the number of factors--in terms of consistency. Parallel analysis is based on…
Frequency analysis of uncertain structures using imprecise probability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modares, Mehdi; Bergerson, Joshua
2015-01-01
Two new methods for finite-element-based frequency analysis of a structure with uncertainty are developed. An imprecise probability formulation based on enveloping p-boxes is used to quantify the uncertainty present in the mechanical characteristics of the structure. For each element, independent variations are considered. Using the two developed methods, P-box Frequency Analysis (PFA) and Interval Monte-Carlo Frequency Analysis (IMFA), sharp bounds on natural circular frequencies at different probability levels are obtained. These methods establish a framework for handling incomplete information in structural dynamics. Numerical example problems are presented that illustrate the capabilities of the new methods, along with discussions on their computational efficiency.
Śliwińska, Anna; Burchart-Korol, Dorota; Smoliński, Adam
2017-01-01
This paper presents a life cycle assessment (LCA) of greenhouse gas emissions generated by a methanol and electricity co-production system based on coal gasification technology. The analysis focuses on polygeneration technologies from which two products are produced, and thus issues related to the allocation procedure for LCA are addressed in this paper. In the LCA, two methods were used: a 'system expansion' method, based on the 'avoided burdens' and 'direct system enlargement' approaches, and an 'allocation' method involving proportional partitioning based on physical relationships in the technological process. Cause-effect relationships in the analysed production process were identified, allowing for the identification of allocation factors. The 'system expansion' method involved expanding the analysis to include five additional variants of electricity production technologies in Poland (alternative technologies). This method revealed the environmental consequences of implementing the analysed technologies. It was found that an LCA of polygeneration technologies based on the 'system expansion' method generates a more complete source of information on environmental consequences than the 'allocation' method. The analysis shows that the choice of alternative technologies for generating LCA results is crucial. Life cycle assessment was performed for the analysed, reference and variant alternative technologies. A comparative analysis was performed between the analysed technology of methanol and electricity co-production from coal gasification and a reference technology of methanol production from the natural gas reforming process. Copyright © 2016 Elsevier B.V. All rights reserved.
A Novel Bit-level Image Encryption Method Based on Chaotic Map and Dynamic Grouping
NASA Astrophysics Data System (ADS)
Zhang, Guo-Ji; Shen, Yan
2012-10-01
In this paper, a novel bit-level image encryption method based on dynamic grouping is proposed. In the proposed method, the plain-image is divided into several groups randomly, and then a permutation-diffusion process at the bit level is carried out. The keystream generated by the logistic map is related to the plain-image, which confuses the relationship between the plain-image and the cipher-image. Computer simulation results of statistical analysis, information entropy analysis and sensitivity analysis show that the proposed encryption method is secure and reliable enough to be used in communication applications.
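A logistic-map keystream plus a bitwise XOR diffusion step conveys the flavor of such schemes. The sketch below is generic: in the proposed method the keystream is additionally coupled to the plain-image and the grouping is dynamic, which this illustration omits (x0 and mu are arbitrary illustrative constants):

```python
import numpy as np

def logistic_keystream(n_bytes, x0=0.61803, mu=3.99):
    """Byte keystream from iterating the logistic map x <- mu*x*(1-x)."""
    x, out = x0, np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = mu * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_diffuse(image_bytes, keystream):
    """Bit-level diffusion: XOR flattened image bytes (uint8 array of
    the same length) with the keystream; the permutation/grouping
    stage of the paper is not shown."""
    return np.bitwise_xor(image_bytes, keystream)
```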
High resolution melting (HRM) analysis of DNA--its role and potential in food analysis.
Druml, Barbara; Cichna-Markl, Margit
2014-09-01
DNA-based methods play an increasing role in food safety control and food adulteration detection. Recent papers show that high resolution melting (HRM) analysis is an interesting approach. It involves amplification of the target of interest in the presence of a saturation dye by the polymerase chain reaction (PCR), and subsequent melting of the amplicons by gradually increasing the temperature. Since the melting profile depends on the GC content, length, sequence and strand complementarity of the product, HRM analysis is highly suitable for the detection of single-base variants and small insertions or deletions. The review gives an introduction to HRM analysis, covers important aspects in the development of an HRM analysis method, and describes how HRM data are analysed and interpreted. We then discuss the potential of HRM-based methods in food analysis, i.e., for the identification of closely related species and cultivars and the identification of pathogenic microorganisms. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
NASA Astrophysics Data System (ADS)
Zhong, Jiaqi; Zeng, Cheng; Yuan, Yupeng; Zhang, Yuzhe; Zhang, Ye
2018-04-01
The aim of this paper is to present an explicit numerical algorithm based on an improved spectral Galerkin method for solving the unsteady diffusion-convection-reaction equation. The principal characteristic of this approach is that it yields explicit eigenvalues and eigenvectors based on the time-space separation method and an analysis of the boundary conditions. With the help of Fourier series and Galerkin truncation, we obtain the finite-dimensional ordinary differential equations which facilitate system analysis and controller design. By comparison with the finite element method, the numerical solutions are demonstrated via two examples. It is shown that the proposed method is effective.
External Aiding Methods for IMU-Based Navigation
2016-11-26
…Monte Carlo simulation and particle filtering. This approach allows for the utilization of highly complex systems in a black box configuration with minimal...alternative method, which has the advantage of being less computationally demanding, is to use a Kalman-filtering-based approach. The particular...Kalman-filtering-based approach used here is known as linear covariance analysis. In linear covariance analysis, the nonlinear systems describing the
Sub-pattern based multi-manifold discriminant analysis for face recognition
NASA Astrophysics Data System (ADS)
Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen
2018-04-01
In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and then extracts the discriminative local features from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.
Innovating Method of Existing Mechanical Product Based on TRIZ Theory
NASA Astrophysics Data System (ADS)
Zhao, Cunyou; Shi, Dongyan; Wu, Han
The main ways of product development are adaptive design and variant design based on existing products. In this paper, a conceptual design framework and its flow model for innovating products are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed that includes requirement analysis, total function analysis and decomposition, engineering problem analysis, finding solutions to engineering problems, and preliminary design; this establishes the basis for the innovative redesign of existing products.
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.
Robust Mediation Analysis Based on Median Regression
Yuan, Ying; MacKinnon, David P.
2014-01-01
Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
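Replacing the two OLS regressions of the classical product-of-coefficients approach with median (0.5-quantile) regressions is the heart of the proposal. A minimal sketch using statsmodels (point estimate only; the paper's evaluation of standard errors and intervals, e.g. by bootstrap, is not shown):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def median_mediation_effect(x, m, y):
    """Indirect effect a*b estimated with median regressions:
    a from M ~ X, and b as the coefficient of M in Y ~ X + M."""
    a = QuantReg(m, sm.add_constant(x)).fit(q=0.5).params[1]
    design = sm.add_constant(np.column_stack([x, m]))   # [const, X, M]
    b = QuantReg(y, design).fit(q=0.5).params[2]
    return a * b
```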
Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women
Ashman, Amy M.; Collins, Clare E.; Brown, Leanne J.; Rae, Kym M.; Rollo, Megan E.
2017-01-01
Image-based dietary records could lower participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days; once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete), median age 29 years, 15 primiparas, eight Aboriginal Australians, completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women. PMID:28106758
Zhang, Xiao-Chao; Wei, Zhen-Wei; Gong, Xiao-Yun; Si, Xing-Yu; Zhao, Yao-Yao; Yang, Cheng-Dui; Zhang, Si-Chun; Zhang, Xin-Rong
2016-04-29
Integrating droplet-based microfluidics with mass spectrometry is essential to high-throughput and multiple analysis of single cells. Nevertheless, matrix effects such as the interference of culture medium and intracellular components influence the sensitivity and the accuracy of results in single-cell analysis. To resolve this problem, we developed a method that integrated droplet-based microextraction with single-cell mass spectrometry. Specific extraction solvent was used to selectively obtain intracellular components of interest and remove interference of other components. Using this method, UDP-Glc-NAc, GSH, GSSG, AMP, ADP and ATP were successfully detected in single MCF-7 cells. We also applied the method to study the change of unicellular metabolites in the biological process of dysfunctional oxidative phosphorylation. The method could not only realize matrix-free, selective and sensitive detection of metabolites in single cells, but also have the capability for reliable and high-throughput single-cell analysis.
Semi-supervised vibration-based classification and condition monitoring of compressors
NASA Astrophysics Data System (ADS)
Potočnik, Primož; Govekar, Edvard
2017-09-01
Semi-supervised vibration-based classification and condition monitoring of the reciprocating compressors installed in refrigeration appliances are proposed in this paper. The method addresses the problem of industrial condition monitoring, where prior class definitions are often not available or are difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated on a case study based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, very good classification performance can be obtained from NN trained by Bayesian regularization, SVM and ELM classifiers. The method can be effectively applied for the industrial condition monitoring of compressors.
Identifying city PV roof resource based on Gabor filter
NASA Astrophysics Data System (ADS)
Ruhang, Xu; Zhilin, Liu; Yong, Huang; Xiaoyu, Zhang
2017-06-01
To identify a city's PV roof resources, the area and ownership distribution of residential buildings in an urban district should be assessed. For this assessment, analysis of remote sensing data is a promising approach, and urban building roof area estimation is a major topic in remote sensing image information extraction. There are normally three ways to solve this problem. The first is pixel-based analysis, which is based on mathematical morphology or statistical methods; the second is object-based analysis, which is able to combine semantic information and expert knowledge; the third is a signal-processing approach. This paper presents a Gabor-filter-based method. The results show that the method is fast and of adequate accuracy.
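A small Gabor filter bank over a few frequencies and orientations is the standard building block for such texture-based roof extraction; regularly oriented roof edges stand out in the maximum magnitude response. A sketch using scikit-image (parameters illustrative; the paper's subsequent extraction steps are not shown):

```python
import numpy as np
from skimage.filters import gabor

def max_gabor_response(image, frequencies=(0.1, 0.2), n_orientations=4):
    """Per-pixel maximum magnitude response of a small Gabor bank
    applied to a grayscale remote sensing image."""
    responses = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            responses.append(np.hypot(real, imag))   # magnitude response
    return np.max(responses, axis=0)
```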
iTemplate: A template-based eye movement data analysis approach.
Xiao, Naiqi G; Lee, Kang
2018-02-08
Current eye movement data analysis methods rely on defining areas of interest (AOIs). Because AOIs are created and modified manually, variances in their size, shape, and location are unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variances in AOI creation and modification and achieve a procedure for processing eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for each individual stimulus. This change greatly reduces the error caused by the variance of manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.
An evaluation method for nanoscale wrinkle
NASA Astrophysics Data System (ADS)
Liu, Y. P.; Wang, C. G.; Zhang, L. M.; Tan, H. F.
2016-06-01
In this paper, a spectrum-based wrinkling analysis method using the two-dimensional Fourier transform is proposed to address the difficulty of nanoscale wrinkle evaluation. It evaluates wrinkle characteristics, including wrinkling wavelength and direction, from a single wrinkling image. Based on this method, the evaluation results for nanoscale wrinkle characteristics agree with published experimental results within an error of 6%. The method is also verified to be appropriate for macroscale wrinkle evaluation, without scale limitations. Spectrum-based wrinkling analysis is an effective method for nanoscale evaluation, which helps reveal the mechanism of nanoscale wrinkling.
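A minimal sketch of the spectrum-based idea: take the 2-D FFT of the image, locate the dominant spatial-frequency peak, and read off a wavelength and direction. The paper's exact peak-finding and calibration steps are not given, so the details below are assumptions.

```python
import numpy as np

def wrinkle_wavelength_direction(img, pixel_size):
    """Estimate dominant wrinkle wavelength and direction from a single image
    via the 2-D Fourier spectrum. pixel_size: physical size of one pixel."""
    h, w = img.shape
    spec = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())))
    spec[h // 2, w // 2] = 0.0                     # suppress any residual DC term
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    fy = (iy - h // 2) / (h * pixel_size)          # spatial frequencies (1/length)
    fx = (ix - w // 2) / (w * pixel_size)
    wavelength = 1.0 / np.hypot(fx, fy)
    direction = np.degrees(np.arctan2(fy, fx))     # normal to the wrinkle crests
    return wavelength, direction
```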
GLC analysis of base composition of RNA and DNA hydrolysates
NASA Technical Reports Server (NTRS)
Lakings, D. B.; Gehreke, C. W.
1971-01-01
Various methods used for the analysis of the base composition of RNA and DNA hydrolysates are presented. The methods discussed are: (1) ion-exchange chromatography, (2) paper chromatography, (3) paper electrophoresis, (4) thin layer chromatography, (5) paper chromatography and time of flight mass spectrometry, and (6) gas-liquid chromatography. The equipment required and the conditions for obtaining the best results with each method are described.
An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method
NASA Astrophysics Data System (ADS)
Tang, J.
2012-01-01
The multiple signal classification (MUSIC) algorithm is introduced for estimating the variation period of BL Lac objects. The principle of the MUSIC spectral analysis method and a theoretical analysis of its frequency resolution using analog signals are included. From the literature, we collected extensive effective observation data of the BL Lac object S5 0716+714 in the V, R and I bands from 1994 to 2008. The light variation periods of S5 0716+714 are obtained by means of the MUSIC spectral analysis method and the periodogram spectral analysis method. Two major periods exist: (3.33±0.08) years and (1.24±0.01) years for all bands. The period estimates from the MUSIC spectral analysis method are compared with those from the periodogram spectral analysis method. MUSIC is a super-resolution algorithm that works with short data records and can detect the variation periods of weak signals.
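For reference, a bare-bones MUSIC pseudospectrum for a uniformly sampled series can be written as below. The light curves in the paper are unevenly sampled, so treat this strictly as a sketch of the algorithm's core (snapshot correlation matrix, noise-subspace projection); the parameter choices are assumptions.

```python
import numpy as np

def music_spectrum(x, p, m, freqs, dt):
    """MUSIC pseudospectrum of a real, uniformly sampled series x.
    p: assumed number of sinusoids (2p signal eigenvectors),
    m: correlation-matrix order (m > 2p),
    freqs: trial frequencies in cycles per unit time, dt: sample spacing."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    snaps = np.array([x[i:i + m] for i in range(n - m + 1)])  # length-m snapshots
    R = snaps.T @ snaps / snaps.shape[0]                      # correlation matrix
    w, v = np.linalg.eigh(R)                                  # ascending eigenvalues
    En = v[:, : m - 2 * p]                                    # noise subspace
    t = np.arange(m) * dt
    pseudo = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * t)                        # steering vector
        pseudo.append(1.0 / np.real(np.conj(a) @ En @ En.T @ a))
    return np.array(pseudo)      # peaks mark candidate periods 1/f
```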
Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi
2017-01-01
Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described.
Time Series Analysis Based on Running Mann Whitney Z Statistics
USDA-ARS?s Scientific Manuscript database
A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
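The abstract is truncated, but the computation it names (rank samples over moving windows, form U, normalise to Z) is standard. A sketch follows, with the comparison of two adjacent windows as an assumed design choice; the manuscript's exact windowing may differ.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def running_mw_z(series, window):
    """Sliding-window Mann-Whitney Z: compare each window with the next one."""
    z = []
    for i in range(len(series) - 2 * window + 1):
        a = series[i:i + window]
        b = series[i + window:i + 2 * window]
        u = mannwhitneyu(a, b, alternative="two-sided").statistic
        mu = window * window / 2.0                            # E[U] under H0
        sd = np.sqrt(window * window * (2 * window + 1) / 12.0)
        z.append((u - mu) / sd)                               # normal approximation
    return np.array(z)
```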
Zhao, Jiaduo; Gong, Weiguo; Tang, Yuzhen; Li, Weihong
2016-01-20
In this paper, we propose an effective human and nonhuman pyroelectric infrared (PIR) signal recognition method to reduce PIR detector false alarms. First, using the mathematical model of the PIR detector, we analyze the physical characteristics of human and nonhuman PIR signals; second, based on the analysis results, we propose an empirical mode decomposition (EMD)-based symbolic dynamic analysis method for the recognition of human and nonhuman PIR signals. In the proposed method, we first extract the detailed features of a PIR signal into five symbol sequences using an EMD-based symbolization method; we then generate five feature descriptors for each PIR signal by constructing five probabilistic finite state automata from the symbol sequences. Finally, we use a weighted voting classification strategy to classify the PIR signals by their feature descriptors. Comparative experiments show that the proposed method can effectively classify human and nonhuman PIR signals and reduce the PIR detector's false alarms.
District nursing workforce planning: a review of the methods.
Reid, Bernie; Kane, Kay; Curran, Carol
2008-11-01
District nursing services in Northern Ireland face increasing demands and challenges which may be responded to by effective and efficient workforce planning and development. The aim of this paper is to critically analyse district nursing workforce planning and development methods, in an attempt to find a suitable method for Northern Ireland. A systematic analysis of the literature reveals four methods: professional judgement; population-based health needs; caseload analysis; and dependency-acuity. Each method has strengths and weaknesses. Professional judgement offers a 'belt and braces' approach but lacks sensitivity to fluctuating patient numbers. Population-based health needs methods develop staffing algorithms that reflect deprivation and geographical spread, but are poorly understood by district nurses. Caseload analysis promotes equitable workloads, but poorly performing district nursing localities may persist if benchmarking processes only consider local data. Dependency-acuity methods provide a means of equalizing and prioritizing workload but are prone to district nurses overstating factors in patient dependency or understating carers' capability. In summary, a mixed-method approach is advocated to evaluate and adjust the size and mix of district nursing teams, using empirically determined patient dependency and activity-based variables based on the population's health needs.
Xu, Ning; Zhou, Guofu; Li, Xiaojuan; Lu, Heng; Meng, Fanyun; Zhai, Huaqiang
2017-05-01
A reliable and comprehensive method for identifying the origin and assessing the quality of Epimedium has been developed. The method is based on analysis of HPLC fingerprints, combined with similarity analysis, hierarchical cluster analysis (HCA), principal component analysis (PCA) and multi-ingredient quantitative analysis. Nineteen batches of Epimedium, collected from different areas in the western regions of China, were used to establish the fingerprints and 18 peaks were selected for the analysis. Similarity analysis, HCA and PCA all classified the 19 areas into three groups. Simultaneous quantification of the five major bioactive ingredients in the Epimedium samples was also carried out to confirm the consistency of the quality tests. These methods were successfully used to identify the geographical origin of the Epimedium samples and to evaluate their quality. Copyright © 2016 John Wiley & Sons, Ltd.
Moazami-Goudarzi, K; Laloë, D
2002-01-01
To determine the relationships among closely related populations or species, two methods are commonly used in the literature: phylogenetic reconstruction or multivariate analysis. The aim of this article is to assess the reliability of multivariate analysis. We describe a method that is based on principal component analysis and Mantel correlations, using a two-step process: The first step consists of a single-marker analysis and the second step tests if each marker reveals the same typology concerning population differentiation. We conclude that if single markers are not congruent, the compromise structure is not meaningful. Our model is not based on any particular mutation process and it can be applied to most of the commonly used genetic markers. This method is also useful to determine the contribution of each marker to the typology of populations. We test whether our method is efficient with two real data sets based on microsatellite markers. Our analysis suggests that for closely related populations, it is not always possible to accept the hypothesis that an increase in the number of markers will increase the reliability of the typology analysis. PMID:12242255
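The second step of the described method tests whether single-marker typologies agree, using Mantel correlations between distance matrices. A self-contained permutation-test sketch of a Mantel correlation (not the authors' exact implementation) is below.

```python
import numpy as np

def mantel(d1, d2, n_perm=9999, seed=0):
    """Mantel permutation test between two symmetric distance matrices.
    Returns the observed correlation and a two-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)           # upper-triangle entries only
    v1 = d1[iu]
    r_obs = np.corrcoef(v1, d2[iu])[0, 1]
    n = d1.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(v1, d2[p][:, p][iu])[0, 1]   # permute rows/cols together
        if abs(r) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```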
Fault feature analysis of cracked gear based on LOD and analytical-FE method
NASA Astrophysics Data System (ADS)
Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng
2018-01-01
At present, there are two main ideas for gear fault diagnosis. One is model-based gear dynamic analysis; the other is signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method called the assist-stress intensity factor (assist-SIF) gear contact model is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on a dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method which combines the LOD with the analytical-FE approach is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD using gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for tooth crack stiffness calculation and gear tooth crack fault diagnosis.
Multivariate analysis: A statistical approach for computations
NASA Astrophysics Data System (ADS)
Michu, Sachin; Kaushik, Vandana
2014-10-01
Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, cluster evaluation in finance, and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in an image retrieval method and of correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.
NASA Astrophysics Data System (ADS)
Hsiao, Y. R.; Tsai, C.
2017-12-01
As the WHO Air Quality Guideline indicates, ambient air pollution exposes the world's populations to the threat of fatal conditions (e.g. heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
A Visual Analytics Approach for Station-Based Air Quality Data
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-01-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117
Kittell, David E; Mares, Jesus O; Son, Steven F
2015-04-01
Two time-frequency analysis methods based on the short-time Fourier transform (STFT) and continuous wavelet transform (CWT) were used to determine time-resolved detonation velocities with microwave interferometry (MI). The results were directly compared to well-established analysis techniques consisting of a peak-picking routine as well as a phase unwrapping method (i.e., quadrature analysis). The comparison is conducted on experimental data consisting of transient detonation phenomena observed in triaminotrinitrobenzene and ammonium nitrate-urea explosives, representing high and low quality MI signals, respectively. Time-frequency analysis proved much more capable of extracting useful and highly resolved velocity information from low quality signals than the phase unwrapping and peak-picking methods. Additionally, control of the time-frequency methods is mainly constrained to a single parameter which allows for a highly unbiased analysis method to extract velocity information. In contrast, the phase unwrapping technique introduces user based variability while the peak-picking technique does not achieve a highly resolved velocity result. Both STFT and CWT methods are proposed as improved additions to the analysis methods applied to MI detonation experiments, and may be useful in similar applications.
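As a rough illustration of the STFT route (not the authors' exact processing chain), the sketch below tracks the spectrogram ridge of an interferometer signal and converts the beat frequency to velocity. The sampling rate, window settings, and the v = f·λ/2 geometric factor are assumptions that depend on the experimental configuration.

```python
import numpy as np
from scipy.signal import stft

def detonation_velocity(signal, fs, wavelength):
    """Track the dominant beat frequency of a microwave-interferometry signal
    with the STFT and convert it to velocity (v = f * lambda / 2 assumes a
    simple reflection geometry; adjust the factor for the actual setup)."""
    f, t, Z = stft(signal, fs=fs, nperseg=1024, noverlap=896)
    f_peak = f[np.argmax(np.abs(Z), axis=0)]   # spectrogram ridge per time slice
    velocity = f_peak * wavelength / 2.0
    return t, velocity
```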
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-01-01
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster–Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster–Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation yielded poor results. PMID:27019609
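Of the three MCDA operators, the weighted linear combination is the simplest to illustrate. The sketch below propagates weight uncertainty through a WLC overlay by Monte Carlo simulation, in the spirit of the paper's first stage; the normal weight distributions and array shapes are assumptions.

```python
import numpy as np

def wlc_susceptibility(criteria, weights_mean, weights_sd, n_runs=1000, seed=1):
    """Monte Carlo WLC: propagate weight uncertainty into the susceptibility map.
    criteria: (n_criteria, n_cells) standardised criterion layers in [0, 1].
    Returns the per-cell mean score and its standard deviation."""
    rng = np.random.default_rng(seed)
    scores = np.empty((n_runs, criteria.shape[1]))
    for k in range(n_runs):
        w = rng.normal(weights_mean, weights_sd)   # sampled criterion weights
        w = np.clip(w, 0, None)
        w /= w.sum()                               # weights must sum to one
        scores[k] = w @ criteria                   # weighted linear combination
    return scores.mean(axis=0), scores.std(axis=0)
```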
RUAN, XIYUN; LI, HONGYUN; LIU, BO; CHEN, JIE; ZHANG, SHIBAO; SUN, ZEQIANG; LIU, SHUANGQING; SUN, FAHAI; LIU, QINGYONG
2015-01-01
The aim of the present study was to develop a novel method for identifying pathways associated with renal cell carcinoma (RCC) based on a gene co-expression network. A framework was established where a co-expression network was derived from the database as well as various co-expression approaches. First, the backbone of the network, based on differentially expressed (DE) genes between RCC patients and normal controls, was constructed with the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database. The differentially co-expressed links were detected by Pearson's correlation, the empirical Bayesian (EB) approach and Weighted Gene Co-expression Network Analysis (WGCNA). The co-expressed gene pairs were merged by a rank-based algorithm. We obtained 842; 371; 2,883 and 1,595 co-expressed gene pairs from the co-expression networks of the STRING database, Pearson's correlation, the EB method and WGCNA, respectively. Two hundred and eighty-one differentially co-expressed (DC) gene pairs were obtained from the merged network using this novel method. Pathway enrichment analysis based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and the network enrichment analysis (NEA) method were performed to verify the feasibility of the merged method. Results of the KEGG and NEA pathway analyses showed that the network was associated with RCC. The suggested method was computationally efficient for identifying pathways associated with RCC and is a useful complement to traditional co-expression analysis. PMID:26058425
NASA Astrophysics Data System (ADS)
Yoo, Byungjin; Hirata, Katsuhiro; Oonishi, Atsurou
In this study, a coupled analysis method for flat panel speakers driven by a giant magnetostrictive material (GMM) based actuator was developed. The sound field produced by a flat panel speaker driven by a GMM actuator depends on the vibration of the flat panel, which results from the magnetostriction property of the GMM. In this case, to predict the sound pressure level (SPL) in the audio-frequency range, it is necessary to take into account not only the magnetostriction property of the GMM but also the effect of eddy currents and the vibration characteristics of the actuator and the flat panel. In this paper, a coupled electromagnetic-structural-acoustic analysis method is presented; this method was developed using the finite element method (FEM). The analysis method is used to predict the performance of a flat panel speaker in the audio-frequency range. The validity of the analysis method is verified by comparison with measurement results from a prototype speaker.
Methods and approaches in the topology-based analysis of biological pathways
Mitrea, Cristina; Taghavi, Zeinab; Bokanizad, Behzad; Hanoudi, Samer; Tagett, Rebecca; Donato, Michele; Voichiţa, Călin; Drăghici, Sorin
2013-01-01
The goal of pathway analysis is to identify the pathways significantly impacted in a given phenotype. Many current methods are based on algorithms that consider pathways as simple gene lists, dramatically under-utilizing the knowledge that such pathways are meant to capture. During the past few years, a plethora of methods claiming to incorporate various aspects of the pathway topology have been proposed. These topology-based methods, sometimes referred to as “third generation,” have the potential to better model the phenomena described by pathways. Although there is now a large variety of approaches used for this purpose, no review is currently available to offer guidance for potential users and developers. This review covers 22 such topology-based pathway analysis methods published in the last decade. We compare these methods based on: type of pathways analyzed (e.g., signaling or metabolic), input (subset of genes, all genes, fold changes, gene p-values, etc.), mathematical models, pathway scoring approaches, output (one or more pathway scores, p-values, etc.) and implementation (web-based, standalone, etc.). We identify and discuss challenges, arising both in methodology and in pathway representation, including inconsistent terminology, different data formats, lack of meaningful benchmarks, and the lack of tissue and condition specificity. PMID:24133454
Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo
2016-04-29
Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize the coherent patterns of brain activity as a means of identifying brain systems for the cognitive reappraisal of emotion task, both density-based k-means clustering and independent component analysis (ICA) methods can be applied to characterize the interactions between brain regions involved in cognitive reappraisal of emotion. Our results reveal that, compared with the ICA method, the density-based k-means clustering method provides a higher sensitivity of polymerization. In addition, it is more sensitive to relatively weak functional connection regions. The study thus concludes that in the process of receiving emotional stimuli, the most clearly activated areas are mainly distributed in the frontal lobe, cingulum and near the hypothalamus. Furthermore, the density-based k-means clustering method provides a more reliable basis for follow-up studies of brain functional connectivity.
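For orientation, the sketch below contrasts the two analysis views on a toy voxel-by-time matrix using scikit-learn. Plain KMeans stands in for the paper's density-based variant, whose initialization details are not given in the abstract, and all dimensions are made up.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA

# Hypothetical fMRI matrix: rows = time points, columns = voxels
X = np.random.default_rng(0).standard_normal((200, 5000))

# Clustering view: group voxels by the similarity of their time courses
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X.T)

# ICA view: unmix the data into spatially independent component maps
ica = FastICA(n_components=10, random_state=0)
time_courses = ica.fit_transform(X)      # (time, components)
spatial_maps = ica.components_           # (components, voxels)
```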
Coformer screening using thermal analysis based on binary phase diagrams.
Yamashita, Hiroyuki; Hirakura, Yutaka; Yuda, Masamichi; Terada, Katsuhide
2014-08-01
The advent of cocrystals has demonstrated a growing need for efficient and comprehensive coformer screening in search of better development forms, including salt forms. Here, we investigated a coformer screening system for salts and cocrystals based on binary phase diagrams using thermal analysis and examined the effectiveness of the method. Indomethacin and tenoxicam were used as models of active pharmaceutical ingredients (APIs). Physical mixtures of an API and 42 kinds of coformers were analyzed using Differential Scanning Calorimetry (DSC) and X-ray DSC. We also conducted coformer screening using a conventional slurry method and compared these results with those from the thermal analysis method and previous studies. Compared with the slurry method, the thermal analysis method was a high-performance screening system, particularly for APIs with low solubility and/or propensity to form solvates. However, this method faced hurdles for screening coformers combined with an API in the presence of kinetic hindrance for salt or cocrystal formation during heating or if there is degradation near the metastable eutectic temperature. The thermal analysis and slurry methods are considered complementary to each other for coformer screening. Feasibility of the thermal analysis method in drug discovery practice is ensured given its small scale and high throughput.
Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao
2016-01-01
At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis, in which the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze EEG signals in epilepsy, cognitive impairment and brain-computer interface (BCI) applications, and have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods still have deficiencies in real-time performance, generalization ability and dependence on labeled samples in EEG signal analysis. This mini review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376
Liu, Bao; Fan, Xiaoming; Huo, Shengnan; Zhou, Lili; Wang, Jun; Zhang, Hui; Hu, Mei; Zhu, Jianhua
2011-12-01
A method was established to analyse overlapped chromatographic peaks based on chromatographic-spectral data detected by a diode-array ultraviolet detector. In the method, the three-dimensional data were first de-noised and normalized; second, the differences and the clustering of the spectra at different time points were calculated; then the purity of the whole chromatographic peak was analysed and the region was sought in which the spectra at different time points were stable. The feature spectra were extracted from the spectrum-stable region as the basic foundation. The non-negative least-squares method was chosen to separate the overlapped peaks and obtain the flow curves based on the feature spectra. The three-dimensional resolved chromatographic-spectral peaks could be obtained by matrix operations of the feature spectra with the flow curves. The results showed that this method could separate the overlapped peaks.
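The non-negative least-squares step maps directly onto scipy.optimize.nnls. A sketch, assuming the feature spectra have already been extracted as described (array shapes are assumptions):

```python
import numpy as np
from scipy.optimize import nnls

def resolve_overlap(D, S):
    """Resolve co-eluting peaks by non-negative least squares.
    D: (n_times, n_wavelengths) chromatographic-spectral data matrix.
    S: (n_components, n_wavelengths) feature spectra from the pure regions.
    Returns C: (n_times, n_components) elution (flow-curve) profiles."""
    C = np.empty((D.shape[0], S.shape[0]))
    for t in range(D.shape[0]):
        C[t], _ = nnls(S.T, D[t])      # D[t] ~= S.T @ C[t], with C[t] >= 0
    return C                           # D ~= C @ S reconstructs each resolved peak
```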
What Touched Your Heart? Collaborative Story Analysis Emerging From an Apsáalooke Cultural Context
Hallett, John; Held, Suzanne; McCormick, Alma Knows His Gun; Simonds, Vanessa; Bird, Sloane Real; Martin, Christine; Simpson, Colleen; Schure, Mark; Turnsplenty, Nicole; Trottier, Coleen
2017-01-01
Community-based participatory research and decolonizing research share some recommendations for best practices for conducting research. One commonality is partnering on all stages of research; co-developing methods of data analysis is one stage with a deficit of partnering examples. We present a novel community-based and developed method for analyzing qualitative data within an Indigenous health study and explain incompatibilities of existing methods for our purposes and community needs. We describe how we explored available literature, received counsel from community Elders and experts in the field, and collaboratively developed a data analysis method consonant with community values. The method of analysis, in which interview/story remained intact, team members received story, made meaning through discussion, and generated a conceptual framework to inform intervention development, is detailed. We offer the development process and method as an example for researchers working with communities who want to keep stories intact during qualitative data analysis. PMID:27659019
A full potential inverse method based on a density linearization scheme for wing design
NASA Technical Reports Server (NTRS)
Shankar, V.
1982-01-01
A mixed analysis inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a prescribed pressure distribution, using a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FLO30 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing edge closure model are proposed for further study.
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
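A variance-based analysis of the kind described can be sketched with the third-party SALib package; the three-input dose model below is a made-up stand-in for the SHEDS testbed, and the variable names and bounds are assumptions.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy exposure model: dose = concentration * contact_rate / body_weight
problem = {"num_vars": 3,
           "names": ["concentration", "contact_rate", "body_weight"],
           "bounds": [[0.1, 10.0], [0.0, 5.0], [40.0, 110.0]]}

X = saltelli.sample(problem, 1024)        # Sobol' sampling design
Y = X[:, 0] * X[:, 1] / X[:, 2]           # evaluate the model at each sample
Si = sobol.analyze(problem, Y)

print(Si["S1"])   # first-order (main-effect) indices
print(Si["ST"])   # total-effect indices, including interactions
```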
Fan, Chunlin; Deng, Jiewei; Yang, Yunyun; Liu, Junshan; Wang, Ying; Zhang, Xiaoqi; Fai, Kuokchiu; Zhang, Qingwen; Ye, Wencai
2013-10-01
An ultra-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC-QTOF-MS) method integrating multi-ingredient determination and fingerprint analysis has been established for quality assessment and control of leaves from Ilex latifolia. The method is fast, efficient and accurate, and allows multi-ingredient determination and fingerprint analysis in one chromatographic run within 13 min. Multi-ingredient determination was performed based on the extracted ion chromatograms of the exact pseudo-molecular ions (with a 0.01 Da window), and fingerprint analysis was performed based on the base peak chromatograms, obtained by negative-ion electrospray ionization QTOF-MS. The method validation results demonstrated that the developed method possesses the desired specificity, linearity, precision and accuracy. The method was used to analyze 22 I. latifolia samples from different origins. Quality assessment was achieved by using both similarity analysis (SA) and principal component analysis (PCA), and the results from SA were consistent with those from PCA. Our experimental results demonstrate that the strategy of integrating multi-ingredient determination and fingerprint analysis using the UPLC-QTOF-MS technique is a useful approach for rapid pharmaceutical analysis, with promising prospects for the differentiation of origin, the determination of authenticity, and the overall quality assessment of herbal medicines. Copyright © 2013 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm
2016-01-01
Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…
NASA Astrophysics Data System (ADS)
Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu
2017-06-01
Scale problems are a major source of concern in the field of remote sensing. Since remote sensing is a complex technological system, the connotations of scale and scale effect in remote sensing are not yet sufficiently understood. Thus, this paper first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measurement for analysing pixel-based scale. However, traditional fractal dimension calculation does not consider the impact of spatial resolution, so the change of the scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes to use spatial resolution as the modified scale parameter of two fractal methods to further analyze the pixel-based scale effect. To verify the results of the two modified methods, MFBM (Modified Windowed Fractal Brownian Motion Based on the Surface Area) and MDBM (Modified Windowed Double Blanket Method), the existing scale effect analysis method (the information entropy method) is used for evaluation. Six sub-regions of building areas and farmland areas were cut out from QuickBird images as the experimental data. The results of the experiment show that both the fractal dimension and the information entropy present the same trend with decreasing spatial resolution, and some inflection points appear at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. The experimental results therefore indicate that the modified fractal methods effectively reflect the pixel-based scale effect in remote sensing data and help analyze the observation scale from different aspects. This research will ultimately benefit remote sensing data selection and application.
Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C
2018-03-07
Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
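To make the "algebraic estimation" concrete, here is a sketch of widely cited approximations for recovering a missing mean and SD from reported order statistics (e.g., the range/4 rule and a quartile-based mean in the spirit of Hozo et al. and Wan et al.). The review evaluates several such formulas, and the exact variants it covers may differ from these.

```python
import numpy as np

def estimate_mean_sd(minimum, q1, median, q3, maximum):
    """Approximate mean and SD from order statistics (common rules of thumb;
    accuracy depends on sample size and on the outcome's skewness)."""
    mean_est = (q1 + median + q3) / 3.0      # quartile-based mean estimate
    sd_range = (maximum - minimum) / 4.0     # "range/4" rule of thumb
    sd_iqr = (q3 - q1) / 1.35                # IQR-based SD, assumes normality
    return mean_est, sd_range, sd_iqr

# Hypothetical trial reporting median 12 (IQR 9-16, range 4-25)
print(estimate_mean_sd(4.0, 9.0, 12.0, 16.0, 25.0))
```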
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
Pieterman, Elise D; Budde, Ricardo P J; Robbers-Visser, Daniëlle; van Domburg, Ron T; Helbing, Willem A
2017-09-01
Follow-up of right ventricular performance is important for patients with congenital heart disease. Cardiac magnetic resonance imaging is optimal for this purpose. However, the observer-dependency of manual analysis of right ventricular volumes limits its use. Knowledge-based reconstruction is a new semiautomatic analysis tool that uses a database including knowledge of right ventricular shape in various congenital heart diseases. We evaluated whether knowledge-based reconstruction is a good alternative to conventional analysis. To assess the inter- and intra-observer variability and agreement of knowledge-based versus conventional analysis of magnetic resonance right ventricular volumes, analysis was done by two observers in a mixed group of 22 patients with congenital heart disease affecting right ventricular loading conditions (dextro-transposition of the great arteries and right ventricle to pulmonary artery conduit) and a group of 17 healthy children. We used Bland-Altman analysis and the coefficient of variation. Comparison between the conventional method and the knowledge-based method showed systematically higher volumes for the latter. We found an overestimation of end-diastolic volume (bias -40 ± 24 mL, r = .956), end-systolic volume (bias -34 ± 24 mL, r = .943) and stroke volume (bias -6 ± 17 mL, r = .735), and an underestimation of ejection fraction (bias 7 ± 7%, r = .671), by knowledge-based reconstruction. The intra-observer variability of knowledge-based reconstruction varied, with a coefficient of variation of 9% for end-diastolic volume and 22% for stroke volume. The same trend was noted for inter-observer variability. A systematic difference (overestimation) was noted for right ventricular size as assessed with knowledge-based reconstruction compared with conventional analysis methods. Observer variability for the new method was comparable to what has been reported for the right ventricle in children and congenital heart disease with conventional analysis. © 2017 Wiley Periodicals, Inc.
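Bland-Altman agreement statistics like those quoted above reduce to a few lines: the bias is the mean paired difference and the 95% limits of agreement are bias ± 1.96 SD. The end-diastolic volumes below are made-up illustration values, not study data.

```python
import numpy as np

def bland_altman(conventional, knowledge_based):
    """Bias and 95% limits of agreement between two measurement methods."""
    d = np.asarray(knowledge_based) - np.asarray(conventional)
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical paired EDV measurements (mL) from the two analysis methods
edv_conv = np.array([141.0, 150.0, 162.0, 138.0])
edv_kb = np.array([180.0, 195.0, 200.0, 178.0])
print(bland_altman(edv_conv, edv_kb))   # positive bias = knowledge-based overestimates
```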
Comparison of detrending methods for fluctuation analysis in hydrology
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Chen, Yongqin David
2011-03-01
Trends within a hydrologic time series can significantly influence the scaling results of fluctuation analysis, such as rescaled range (RS) analysis and (multifractal) detrended fluctuation analysis (MF-DFA). Therefore, removal of trends is important in the study of scaling properties of the time series. In this study, three detrending methods, including the adaptive detrending algorithm (ADA), the Fourier-based method, and the average removing technique, were evaluated by analyzing numerically generated series and observed streamflow series with an obvious, relatively regular periodic trend. Results indicated that: (1) the Fourier-based detrending method and ADA were similar in detrending practice, and given proper parameters these two methods can produce similarly satisfactory results; (2) series detrended by the Fourier-based method and ADA lose fluctuation information at larger time scales, and the location of crossover points is heavily impacted by the chosen parameters of these two methods; and (3) the average removing method has an advantage over the other two methods, i.e., the fluctuation information at larger time scales is kept well, an indication of relatively reliable performance in detrending. In addition, the average removing method performed reasonably well in detrending a time series with regular periods or trends. In this sense, the average removing method should be preferred in the study of scaling properties of hydrometeorological series with a relatively regular periodic trend using MF-DFA.
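For readers unfamiliar with the fluctuation analysis being detrended here, a minimal monofractal DFA sketch is below; MF-DFA generalises the same steps with q-th-order moments, and the window grid is an arbitrary choice.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Monofractal DFA: fluctuation function F(s) over window sizes `scales`.
    The slope of log F versus log s estimates the scaling exponent."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)  # local detrend
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

# Scaling exponent from a straight-line fit in log-log coordinates:
# alpha, _ = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)
```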
Effects of Problem-Based Learning on Attitude: A Meta-Analysis Study
ERIC Educational Resources Information Center
Demirel, Melek; Dagyar, Miray
2016-01-01
To date, researchers have frequently investigated students' attitudes toward courses supported by problem-based learning. There are several studies with different results in the literature. It is necessary to combine and interpret the findings of these studies through a meta-analysis method. This method aims to combine different results of similar…
Liang, Sai; Qu, Shen; Xu, Ming
2016-02-02
To develop industry-specific policies for mitigating environmental pressures, previous studies primarily focus on identifying sectors that directly generate large amounts of environmental pressures (a.k.a. production-based method) or indirectly drive large amounts of environmental pressures through supply chains (e.g., consumption-based method). In addition to those sectors as important environmental pressure producers or drivers, there exist sectors that are also important to environmental pressure mitigation as transmission centers. Economy-wide environmental pressure mitigation might be achieved by improving production efficiency of these key transmission sectors, that is, using less upstream inputs to produce unitary output. We develop a betweenness-based method to measure the importance of transmission sectors, borrowing the betweenness concept from network analysis. We quantify the betweenness of sectors by examining supply chain paths extracted from structural path analysis that pass through a particular sector. We take China as an example and find that those critical transmission sectors identified by betweenness-based method are not always identifiable by existing methods. This indicates that betweenness-based method can provide additional insights that cannot be obtained with existing methods on the roles individual sectors play in generating economy-wide environmental pressures. Betweenness-based method proposed here can therefore complement existing methods for guiding sector-level environmental pressure mitigation strategies.
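Reading the abstract, betweenness is accumulated over supply-chain paths from structural path analysis rather than over graph shortest paths. A brute-force sketch of that idea for a small input-output table follows; the path-value convention (direct intensity at the producing end, unit final demand) and the cutoffs are assumptions, and real SPA implementations prune the exponential path space far more cleverly.

```python
import numpy as np
from itertools import product

def spa_betweenness(A, f, max_order=3, tol=1e-6):
    """Accumulate, for each sector, the environmental pressure carried by
    supply-chain paths passing *through* it.
    A: (n, n) technical-coefficient matrix; f: direct pressure intensities.
    Brute-force enumeration: only suitable for toy-sized n."""
    n = A.shape[0]
    between = np.zeros(n)
    for order in range(2, max_order + 1):          # paths with >= 1 interior node
        for path in product(range(n), repeat=order + 1):
            value = f[path[0]]                     # emission at the producing end
            for a, b in zip(path, path[1:]):
                value *= A[a, b]
            if value > tol:
                for node in path[1:-1]:            # interior (transmission) nodes
                    between[node] += value
    return between
```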
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.; Jackson, Wade C.
2008-01-01
A simple analysis method has been developed for predicting the residual compressive strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compressive loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.
3-D surface profilometry based on modulation measurement by applying wavelet transform method
NASA Astrophysics Data System (ADS)
Zhong, Min; Chen, Feng; Xiao, Chao; Wei, Yongchao
2017-01-01
A new analysis method for 3-D surface profilometry, based on the modulation measurement technique and the wavelet transform, is proposed. As a tool excelling in multi-resolution and localization in the time and frequency domains, the wavelet transform, with its good localized time-frequency analysis ability and effective de-noising capacity, can extract the modulation distribution more accurately than the Fourier transform. Especially for the analysis of complex objects, more details of the measured object are well preserved. In this paper, the theoretical derivation of the wavelet transform method for obtaining the modulation values from a captured fringe pattern is given. Both computer simulation and a preliminary experiment are used to show the validity of the proposed method by comparison with the results of the Fourier transform method. The results show that the wavelet transform method performs better than the Fourier transform method in modulation retrieval.
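One plausible reading of the modulation-extraction step, sketched with the third-party PyWavelets package: take a continuous wavelet transform along each fringe row and read the amplitude on the ridge (the scale of maximum response). The complex Morlet wavelet and the scale grid are assumptions; the paper's own derivation may normalise differently.

```python
import numpy as np
import pywt

def modulation_by_cwt(fringe_row, scales):
    """Per-pixel modulation of one fringe-pattern row via the wavelet ridge."""
    coef, _ = pywt.cwt(fringe_row.astype(float), scales, "cmor1.5-1.0")
    mag = np.abs(coef)                               # (n_scales, n_pixels)
    ridge = np.argmax(mag, axis=0)                   # best scale per pixel
    return mag[ridge, np.arange(len(fringe_row))]    # amplitude along the ridge

# Example: one synthetic fringe row with spatially varying contrast
x = np.arange(512)
row = 1.0 + (0.5 + 0.3 * np.sin(x / 80.0)) * np.cos(2 * np.pi * x / 16.0)
modulation = modulation_by_cwt(row, scales=np.arange(4, 40))
```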
Hercegová, Andrea; Dömötörová, Milena; Kruzlicová, Dása; Matisová, Eva
2006-05-01
Four sample preparation techniques were compared for the ultratrace analysis of pesticide residues in baby food: (a) modified Schenck's method based on ACN extraction with SPE cleaning; (b) quick, easy, cheap, effective, rugged, and safe (QuEChERS) method based on ACN extraction and dispersive SPE; (c) modified QuEChERS method which utilizes column-based SPE instead of dispersive SPE; and (d) matrix solid phase dispersion (MSPD). The methods were combined with fast gas chromatographic-mass spectrometric analysis. The effectiveness of clean-up of the final extract was determined by comparison of the chromatograms obtained. Time consumption, laboriousness, demands on glassware and working place, and consumption of chemicals, especially solvents, increase in the following order QuEChERS < modified QuEChERS < MSPD < modified Schenck's method. All methods offer satisfactory analytical characteristics at the concentration levels of 5, 10, and 100 microg/kg in terms of recoveries and repeatability. Recoveries obtained for the modified QuEChERS method were lower than for the original QuEChERS. In general the best LOQs were obtained for the modified Schenck's method. Modified QuEChERS method provides 21-72% better LOQs than the original method.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
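The abstract's key point, analytic derivatives instead of finite differencing, is an OpenMDAO pattern that can be shown with a toy component. This is not PyCycle's actual API; the component and variable names are invented for illustration.

```python
import openmdao.api as om

class Nozzle(om.ExplicitComponent):
    """Toy cycle element with hand-coded analytic partials."""
    def setup(self):
        self.add_input("mdot", 1.0)        # mass flow (hypothetical units)
        self.add_input("v_exit", 300.0)    # exit velocity
        self.add_output("thrust", 0.0)
        self.declare_partials("thrust", ["mdot", "v_exit"])

    def compute(self, inputs, outputs):
        outputs["thrust"] = inputs["mdot"] * inputs["v_exit"]

    def compute_partials(self, inputs, J):
        J["thrust", "mdot"] = inputs["v_exit"]    # exact, no finite differences
        J["thrust", "v_exit"] = inputs["mdot"]

prob = om.Problem()
prob.model.add_subsystem("noz", Nozzle(), promotes=["*"])
prob.setup()
prob.run_model()
# Total derivatives assembled from the analytic partials:
print(prob.compute_totals(of=["thrust"], wrt=["mdot", "v_exit"]))
```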
ERIC Educational Resources Information Center
Davis, Eric J.; Pauls, Steve; Dick, Jonathan
2017-01-01
Presented is a project-based learning (PBL) laboratory approach for an upper-division environmental chemistry or quantitative analysis course. In this work, a combined laboratory class of 11 environmental chemistry students developed a method based on published EPA methods for the extraction of dichlorodiphenyltrichloroethane (DDT) and its…
Gimenez, Thais; Braga, Mariana Minatel; Raggio, Daniela Procida; Deery, Chris; Ricketts, David N; Mendes, Fausto Medeiros
2013-01-01
Fluorescence-based methods have been proposed to aid caries lesion detection. Summarizing and analysing the findings of studies of fluorescence-based methods could clarify their real benefits. We aimed to perform a comprehensive systematic review and meta-analysis to evaluate the accuracy of fluorescence-based methods in detecting caries lesions. Two independent reviewers searched PubMed, Embase and Scopus through June 2012 to identify published papers/articles. Other sources were checked to identify non-published literature. The eligibility criteria were studies that: (1) assessed the accuracy of fluorescence-based methods of detecting caries lesions on occlusal, approximal or smooth surfaces, in primary or permanent human teeth, in the laboratory or clinical setting; (2) used a reference standard; and (3) reported sufficient data relating to the sample size and the accuracy of the methods. A diagnostic 2×2 table was extracted from the included studies to calculate the pooled sensitivity, specificity and overall accuracy parameters (diagnostic odds ratio and summary receiver-operating curve). The analyses were performed separately for each method and for different characteristics of the studies. The quality of the studies and heterogeneity were also evaluated. Seventy-five of the 434 articles initially identified met the inclusion criteria. The search of the grey or non-published literature did not identify any further studies. In general, the analysis demonstrated that the fluorescence-based methods tend to have similar accuracy for all types of teeth, dental surfaces or settings. There was a trend of better performance of fluorescence methods in detecting more advanced caries lesions. We also observed moderate to high heterogeneity and evidence of publication bias. Fluorescence-based devices have similar overall performance; however, better accuracy in detecting more advanced caries lesions has been observed.
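The per-study inputs to such a meta-analysis come from the 2×2 tables. A small sketch of the standard computations (sensitivity, specificity, diagnostic odds ratio with the usual 0.5 continuity correction) follows; pooling across studies, for example with a bivariate random-effects model, is a separate step not shown here.

```python
import numpy as np

def diagnostic_accuracy(tp, fp, fn, tn):
    """Per-study accuracy measures from a diagnostic 2x2 table."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))  # continuity correction
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)                       # diagnostic odds ratio
    se_log_dor = np.sqrt(1/tp + 1/fp + 1/fn + 1/tn)   # SE of ln(DOR)
    return sens, spec, dor, se_log_dor

# Hypothetical study: 40 true positives, 10 false positives, 5 FN, 45 TN
print(diagnostic_accuracy(40, 10, 5, 45))
```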
On Bi-Grid Local Mode Analysis of Solution Techniques for 3-D Euler and Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Ibraheem, S. O.; Demuren, A. O.
1994-01-01
A procedure is presented for utilizing a bi-grid stability analysis as a practical tool for predicting multigrid performance in a range of numerical methods for solving Euler and Navier-Stokes equations. Model problems based on the convection, diffusion and Burger's equation are used to illustrate the superiority of the bi-grid analysis as a predictive tool for multigrid performance in comparison to the smoothing factor derived from conventional von Neumann analysis. For the Euler equations, bi-grid analysis is presented for three upwind difference based factorizations, namely Spatial, Eigenvalue and Combination splits, and two central difference based factorizations, namely LU and ADI methods. In the former, both the Steger-Warming and van Leer flux-vector splitting methods are considered. For the Navier-Stokes equations, only the Beam-Warming (ADI) central difference scheme is considered. In each case, estimates of multigrid convergence rates from the bi-grid analysis are compared to smoothing factors obtained from single-grid stability analysis. Effects of grid aspect ratio and flow skewness are examined. Both predictions are compared with practical multigrid convergence rates for 2-D Euler and Navier-Stokes solutions based on the Beam-Warming central scheme.
Design and performance analysis of gas and liquid radial turbines
NASA Astrophysics Data System (ADS)
Tan, Xu
In the first part of the research, pumps running in reverse as turbines are studied. This work uses experimental data from a wide range of pumps representing centrifugal pump configurations in terms of specific speed. Based on specific speed and specific diameter, an accurate correlation is developed to predict performance at the best efficiency point of a centrifugal pump in its turbine-mode operation. The proposed prediction method is compared to nine previous methods found in the literature; the comparison shows that the method proposed in this paper is the most accurate, and it can be further refined and supplemented by future tests to increase its accuracy. The proposed method is meaningful because it is based on both specific speed and specific diameter. The second part of the research focuses on the design and analysis of a radial gas turbine. The specification of the turbine is obtained from a solar-biogas hybrid system, which is theoretically analyzed and constructed based on the purchased compressor. The theoretical analysis results in a specification of 100 lb/min mass flow rate, 900 °C inlet total temperature and 1.575 atm inlet total pressure. The 1-D and 3-D geometry of the rotor is generated based on Aungier's method. A 1-D loss-model analysis and 3-D CFD simulations are performed to examine the performance of the rotor; the total-to-total efficiency of the rotor is more than 90%. With the help of CFD analysis, modifications to the preliminary design yielded optimized aerodynamic performance. Finally, a theoretical performance analysis of the hybrid system is performed with the designed turbine.
ERIC Educational Resources Information Center
Hale, Robert L.; Dougherty, Donna
1988-01-01
Compared the efficacy of two methods of cluster analysis, the unweighted pair-group method using arithmetic averages (UPGMA) and Ward's method, for students grouped on intelligence, achievement, and social adjustment by both clustering methods. Found UPGMA more efficacious based on output and on cophenetic correlation coefficients generated by each…
The Analysis of Seawater: A Laboratory-Centered Learning Project in General Chemistry.
ERIC Educational Resources Information Center
Selco, Jodye I.; Roberts, Julian L., Jr.; Wacks, Daniel B.
2003-01-01
Describes a sea-water analysis project that introduces qualitative and quantitative analysis methods and laboratory methods such as gravimetric analysis, potentiometric titration, ion-selective electrodes, and the use of calibration curves. Uses a problem-based cooperative teaching approach. (Contains 24 references.) (YDS)
Stability analysis for a multi-camera photogrammetric system.
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-08-18
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways used for coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
Initial assessment of facial nerve paralysis based on motion analysis using an optical flow method.
Samsudin, Wan Syahirah W; Sundaraj, Kenneth; Ahmad, Amirozi; Salleh, Hasriah
2016-01-01
An initial assessment method is proposed that can classify facial nerve paralysis and categorize its severity into one of six levels according to the House-Brackmann (HB) system, based on facial landmark motion measured with an Optical Flow (OF) algorithm. The desired landmarks were obtained from video recordings of 5 normal and 3 Bell's Palsy subjects and tracked using the Kanade-Lucas-Tomasi (KLT) method. A new scoring system based on motion analysis using area measurement is proposed; it uses the individual scores from the facial exercises and grades the paralysis according to the HB system. The proposed method has obtained promising results and may play a pivotal role in improved rehabilitation programs for patients.
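A minimal sketch of the tracking-and-area idea: KLT (pyramidal Lucas-Kanade) tracking of landmarks with OpenCV, followed by a shoelace-formula polygon area per frame. The video path, initial landmark coordinates, and any mapping of areas to HB grades are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def track_landmark_areas(video_path, roi_points):
    """Track landmarks across a video with the KLT method and return
    the polygon area spanned by the tracked points per frame.
    `roi_points` are initial landmark coordinates (N x 2, float32),
    e.g. around the mouth; both arguments are placeholders here."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = roi_points.reshape(-1, 1, 2).astype(np.float32)
    areas = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade optical flow (the KLT tracker).
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts = pts[status.ravel() == 1].reshape(-1, 1, 2)  # keep tracked points
        poly = pts.reshape(-1, 2)
        # Shoelace formula for the area spanned by the landmarks.
        x, y = poly[:, 0], poly[:, 1]
        areas.append(0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))))
        prev_gray = gray
    cap.release()
    return np.asarray(areas)

# usage (hypothetical file and coordinates):
# areas = track_landmark_areas("smile_exercise.avi", initial_landmarks)
```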
Pathway analysis with next-generation sequencing data.
Zhao, Jinying; Zhu, Yun; Boerwinkle, Eric; Xiong, Momiao
2015-04-01
Although pathway analysis methods have been developed and successfully applied to association studies of common variants, the statistical methods for pathway-based association analysis of rare variants have not been well developed. Many investigators have observed highly inflated false-positive rates and low power in pathway-based tests of association of rare variants. The inflated false-positive rates and low true-positive rates of the current methods are mainly due to their inability to account for gametic phase disequilibrium. To overcome these serious limitations, we develop a novel statistic based on smoothed functional principal component analysis (SFPCA) for pathway association tests with next-generation sequencing data. The developed statistic has the ability to capture position-level variant information and account for gametic phase disequilibrium. Through intensive simulations, we demonstrate that the SFPCA-based statistic for testing pathway association with rare, common, or both rare and common variants has the correct Type 1 error rates. The power of the SFPCA-based statistic and of 22 additional existing statistics is also evaluated; the SFPCA-based statistic has much higher power than the other existing statistics in all the scenarios considered. To further evaluate its performance, the SFPCA-based statistic is applied to pathway analysis of exome sequencing data in the early-onset myocardial infarction (EOMI) project. We identify three pathways significantly associated with EOMI after the Bonferroni correction. In addition, our preliminary results show that the SFPCA-based statistic yields much smaller P-values for identifying pathway associations than other existing methods.
Kishikawa, Naoya
2010-10-01
Quinones are compounds with diverse roles: they act as biological electron transporters, industrial products and harmful environmental pollutants. Therefore, an effective determination method for quinones is required in many fields. This review describes the development of sensitive and selective determination methods for quinones based on several detection principles and their application to analyses of environmental, pharmaceutical and biological samples. First, a fluorescence method was developed based on fluorogenic derivatization of quinones and applied to environmental analysis. Second, a luminol chemiluminescence method was developed based on the generation of reactive oxygen species through the redox cycle of quinones and applied to pharmaceutical analysis. Third, a photo-induced chemiluminescence method was developed based on the formation of reactive oxygen species and a fluorophore or chemiluminescence enhancer by the photoreaction of quinones, and applied to biological and environmental analyses.
Comparisons of non-Gaussian statistical models in DNA methylation analysis.
Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-06-16
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data and is therefore non-Gaussian. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method and are meaningful tools for DNA methylation analysis. Moreover, among the non-Gaussian methods, the one that captures the bounded nature of DNA methylation data shows the best clustering performance.
Using Willie's Acid-Base Box for Blood Gas Analysis
ERIC Educational Resources Information Center
Dietz, John R.
2011-01-01
In this article, the author describes a method developed by Dr. William T. Lipscomb for teaching blood gas analysis of acid-base status and provides three examples using Willie's acid-base box. Willie's acid-base box is constructed using three of the parameters of standard arterial blood gas analysis: (1) pH; (2) bicarbonate; and (3) CO[subscript…
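The three parameters listed above are tied together by the Henderson-Hasselbalch equation, pH = 6.1 + log10([HCO3-] / (0.03 × pCO2)). The sketch below applies it together with a simplified textbook rule set for naming the primary disturbance; it illustrates the arithmetic only and is not Willie's box itself, nor a clinical tool.

```python
import math

def acid_base_status(ph, hco3, pco2):
    """Classify the primary acid-base disturbance from the three
    standard blood-gas parameters (pH, bicarbonate in mEq/L, pCO2 in
    mmHg), using simplified textbook thresholds."""
    if ph < 7.35:
        return "respiratory acidosis" if pco2 > 45 else "metabolic acidosis"
    if ph > 7.45:
        return "respiratory alkalosis" if pco2 < 35 else "metabolic alkalosis"
    return "normal / compensated"

# Henderson-Hasselbalch: pH = 6.1 + log10([HCO3-] / (0.03 * pCO2))
ph = 6.1 + math.log10(24 / (0.03 * 40))
print(round(ph, 2), acid_base_status(ph, 24, 40))   # ~7.40, normal
```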
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
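A compact sketch of the pick-and-freeze idea behind variance-based sensitivity analysis for groups of inputs (first-order Sobol indices estimated by Monte Carlo). The toy model, sampler, and group names are placeholders; the authors' hierarchical multilayer framework and geostatistical reduction are not reproduced here.

```python
import numpy as np

def first_order_sobol(model, sampler, group_cols, n=100_000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol
    indices for *groups* of inputs. `sampler(n, rng)` draws an (n, d)
    input matrix; `group_cols` maps group name -> column indices."""
    rng = np.random.default_rng(seed)
    A, B = sampler(n, rng), sampler(n, rng)
    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]))
    indices = {}
    for name, cols in group_cols.items():
        AB = A.copy()
        AB[:, cols] = B[:, cols]      # freeze this group from B, rest from A
        indices[name] = np.mean(fB * (model(AB) - fA)) / var_y
    return indices

# Toy model: y = x0 + 2*x1 + 0.5*x2*x3 with independent N(0,1) inputs.
model = lambda X: X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2] * X[:, 3]
sampler = lambda n, rng: rng.standard_normal((n, 4))
groups = {"boundary": [0], "permeability": [1], "other": [2, 3]}
print(first_order_sobol(model, sampler, groups))
```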
NASA Astrophysics Data System (ADS)
Ahn, Jae-Jun; Akram, Kashif; Shahbaz, Hafiz Muhammad; Kwon, Joong-Ho
2014-12-01
Frozen fish fillets (walleye pollock and Japanese Spanish mackerel) were selected as samples for irradiation (0-10 kGy) detection trials using different hydrolysis methods. Photostimulated luminescence (PSL)-based screening analysis of the gamma-irradiated frozen fillets showed low sensitivity due to the limited silicate mineral content of the samples. The same limitation was found in thermoluminescence (TL) analysis of mineral samples isolated by the density separation method. However, acid (HCl) and alkali (KOH) hydrolysis methods were effective in recovering enough minerals to carry out TL analysis, which was reconfirmed through the normalization step by calculating the TL ratios (TL1/TL2). For improved electron spin resonance (ESR) analysis, alkali and enzyme (alcalase) hydrolysis methods were compared for separating minute bone fractions. The enzymatic method provided clearer radiation-specific hydroxyapatite radical signals than the alkaline method. Different hydrolysis methods could thus extend the application of TL and ESR techniques in identifying the irradiation history of frozen fish fillets.
Compensation of hospital-based physicians.
Steinwald, B
1983-01-01
This study is concerned with methods of compensating hospital-based physicians (HBPs) in five medical specialties: anesthesiology, pathology, radiology, cardiology, and emergency medicine. Data on 2232 nonfederal, short-term general hospitals came from a mail questionnaire survey conducted in Fall 1979. The data indicate that numerous compensation methods exist, but these methods, without much loss of precision, can be reduced to salary, percentage of department revenue, and fee-for-service. When HBPs are compensated by salary or percentage methods, most patient billing is conducted by the hospital. In contrast, most fee-for-service HBPs bill their patients directly. Determinants of HBP compensation methods are investigated via multinomial logit analysis. This analysis indicates that the choice of HBP compensation method is sensitive to a number of hospital characteristics and attributes of both the hospital and physicians' services markets. The empirical findings are discussed in light of past conceptual and empirical research on physician compensation, and current policy issues in the health services sector. PMID:6841112
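A small sketch of what fitting such a multinomial logit looks like in Python; the hospital features and compensation labels below are random placeholders invented for illustration, not the 1979 survey data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
# Hypothetical hospital features: bed count, teaching status, market competition.
X = np.column_stack([rng.normal(250, 80, n),
                     rng.integers(0, 2, n),
                     rng.uniform(0, 1, n)])
# Compensation method: 0 = salary, 1 = percentage of revenue, 2 = fee-for-service.
y = rng.integers(0, 3, n)           # placeholder labels, purely for illustration

# Multinomial logit: P(method = k | x) is a softmax over linear scores;
# with the default lbfgs solver, multiclass fits are multinomial.
X_std = StandardScaler().fit_transform(X)
mnl = LogisticRegression(max_iter=1000).fit(X_std, y)
print(mnl.coef_)                    # one coefficient vector per compensation method
print(mnl.predict_proba(X_std[:3]))
```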
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data, and it is shown that the method is more efficient than traditional methods.
Barlow, Paul M.; Cunningham, William L.; Zhai, Tong; Gray, Mark
2015-01-01
This report is a user guide for the streamflow-hydrograph analysis methods provided with version 1.0 of the U.S. Geological Survey (USGS) Groundwater Toolbox computer program. These include six hydrograph-separation methods to determine the groundwater-discharge (base-flow) and surface-runoff components of streamflow—the Base-Flow Index (BFI; Standard and Modified), HYSEP (Fixed Interval, Sliding Interval, and Local Minimum), and PART methods—and the RORA recession-curve displacement method and associated RECESS program to estimate groundwater recharge from streamflow data. The Groundwater Toolbox is a customized interface built on the nonproprietary, open source MapWindow geographic information system software. The program provides graphing, mapping, and analysis capabilities in a Microsoft Windows computing environment. In addition to these hydrograph-analysis methods, the Groundwater Toolbox allows for the retrieval of hydrologic time-series data (streamflow, groundwater levels, and precipitation) from the USGS National Water Information System, downloading of a suite of preprocessed geographic information system coverages and meteorological data from the National Oceanic and Atmospheric Administration National Climatic Data Center, and analysis of data with several preprocessing and postprocessing utilities. With its data retrieval and analysis tools, the Groundwater Toolbox provides methods to estimate many of the components of the water budget for a hydrologic basin, including precipitation; streamflow; base flow; runoff; groundwater recharge; and total, groundwater, and near-surface evapotranspiration.
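A toy sketch of one family of separation techniques mentioned above, a HYSEP-style local-minimum separation: connect local minima of the daily streamflow record and cap the result at observed flow. The window length is a placeholder; the USGS methods derive it from drainage area and differ in detail.

```python
import numpy as np

def local_minimum_baseflow(q, window=11):
    """Sketch of a local-minimum base-flow separation: mark points
    that are minima of a centered window, linearly interpolate between
    them, and cap base flow at the observed streamflow."""
    q = np.asarray(q, dtype=float)
    half = window // 2
    idx = [i for i in range(len(q))
           if q[i] == q[max(0, i - half):i + half + 1].min()]
    base = np.interp(np.arange(len(q)), idx, q[idx])
    return np.minimum(base, q)

# Synthetic daily streamflow: seasonal signal plus storm-like noise.
q = 10 + 5 * np.sin(np.linspace(0, 6, 120)) + np.random.default_rng(0).gamma(1, 2, 120)
bf = local_minimum_baseflow(q)
print(f"base-flow index = {bf.sum() / q.sum():.2f}")
```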
Computerized Spiral Analysis Using the iPad
Sisti, Jonathan A.; Christophe, Brandon; Seville, Audrey Rakovich; Garton, Andrew L.A.; Gupta, Vivek P.; Bandin, Alexander J.; Yu, Qiping; Pullman, Seth L.
2017-01-01
Background: Digital analysis of writing and drawing has become a valuable research and clinical tool for the study of upper limb motor dysfunction in patients with essential tremor, Parkinson’s disease, dystonia, and related disorders. We developed a validated method of computerized spiral analysis of hand-drawn Archimedean spirals that provides insight into movement dynamics beyond subjective visual assessment using a Wacom graphics tablet. While the Wacom tablet method provides robust data, more widely available mobile technology platforms exist. New Method: We introduce a novel adaptation of the Wacom-based method for the collection of hand-drawn kinematic data using an Apple iPad. This iPad-based system is stand-alone, easy to use, can capture drawing data with either a finger or a capacitive stylus, is precise, and potentially ubiquitous. Results: The iPad-based system acquires position and time data that are fully compatible with our original spiral analysis program. All of the important indices, including degree of severity, speed, presence of tremor, tremor amplitude, tremor frequency, variability of pressure, and tightness, are calculated from the digital spiral data, which the application is able to transmit. Comparison with Existing Method: While the iPad method is limited by current touch screen technology, it does collect data with acceptable congruence compared to the current Wacom-based method while providing the advantages of accessibility and ease of use. Conclusions: The iPad is capable of capturing precise digital spiral data for analysis of motor dysfunction while also providing a convenient, easy-to-use modality in clinics and potentially at home. PMID:27840146
NASA Astrophysics Data System (ADS)
Shen, Wei; Li, Dongsheng; Zhang, Shuaifang; Ou, Jinping
2017-07-01
This paper presents a hybrid method that combines the B-spline wavelet on the interval (BSWI) finite element method and spectral analysis based on fast Fourier transform (FFT) to study wave propagation in One-Dimensional (1D) structures. BSWI scaling functions are utilized to approximate the theoretical wave solution in the spatial domain and construct a high-accuracy dynamic stiffness matrix. Dynamic reduction on element level is applied to eliminate the interior degrees of freedom of BSWI elements and substantially reduce the size of the system matrix. The dynamic equations of the system are then transformed and solved in the frequency domain through FFT-based spectral analysis which is especially suitable for parallel computation. A comparative analysis of four different finite element methods is conducted to demonstrate the validity and efficiency of the proposed method when utilized in high-frequency wave problems. Other numerical examples are utilized to simulate the influence of crack and delamination on wave propagation in 1D rods and beams. Finally, the errors caused by FFT and their corresponding solutions are presented.
Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis
NASA Astrophysics Data System (ADS)
Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei
2018-01-01
In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, a hierarchy analysis method is utilized to construct an evaluation index model of the low-voltage distribution network. Based on principal component analysis and the logarithmic distribution characteristic of the index data, a logarithmic centering method is adopted to improve the principal component analysis algorithm. The algorithm decorrelates and reduces the dimensions of the evaluation model, and the resulting comprehensive score has a better degree of dispersion. Because the comprehensive scores of the courts are concentrated, a clustering method is adopted to analyse them, realizing a stratified evaluation of the courts. An example is given to verify the objectivity and scientificity of the evaluation method.
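A minimal sketch of the log-centering idea described above, assuming the index data are strictly positive and roughly log-normally distributed: take logs, center in log space, then run ordinary PCA via SVD. The data and dimensions are invented.

```python
import numpy as np

def log_centered_pca(X, n_components=2):
    """PCA on log-centered data: log-transform positive index values,
    center the log-values, and project onto the leading singular
    vectors. Returns scores and explained-variance ratios."""
    L = np.log(X)                       # indices assumed strictly positive
    Lc = L - L.mean(axis=0)             # centering in log space
    U, s, Vt = np.linalg.svd(Lc, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    scores = Lc @ Vt[:n_components].T
    return scores, explained[:n_components]

X = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.5, size=(50, 6))
scores, ratio = log_centered_pca(X)
print("explained variance ratio:", np.round(ratio, 3))
```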
Brodsky, Leonid; Leontovich, Andrei; Shtutman, Michael; Feinstein, Elena
2004-01-01
Mathematical methods of analysis of microarray hybridizations deal with gene expression profiles as elementary units. However, some of these profiles do not reflect a biologically relevant transcriptional response, but rather stem from technical artifacts. Here, we describe two technically independent but rationally interconnected methods for the identification of such artifactual profiles. Our diagnostics are based on the detection of deviations from uniformity, which is assumed to be the main underlying principle of microarray design. Method 1 is based on detecting non-uniformity in the microarray distribution of printed genes that are clustered based on the similarity of their expression profiles. Method 2 is based on evaluating the presence of gene-specific microarray spots within slide areas characterized by an abnormal concentration of low/high differential expression values, which we define as ‘patterns of differentials’. Applying two novel algorithms, for nested clustering (method 1) and for pattern detection (method 2), we can make a dual estimation of profile quality for almost every printed gene. Genes with artifactual profiles detected by method 1 may then be removed from further analysis. Suspicious differential expression values detected by method 2 may be either removed or weighted according to the probabilities of the patterns that cover them, thus diminishing their input into any further data analysis. PMID:14999086
Ozerov, Ivan V; Lezhnina, Ksenia V; Izumchenko, Evgeny; Artemov, Artem V; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N; Labat, Ivan; West, Michael D; Buzdin, Anton; Cantor, Charles R; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex
2016-11-16
Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy.
Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.
Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe
2014-01-01
The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. The extraction of the airway centerline is a precursor to identifying the airway's hierarchical structure, measuring geometrical parameters, and guiding visual detection. Traditional methods suffer from extra branches and circles caused by incomplete segmentation results, which induce false analysis in applications. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located based on the topological thinning method: border voxels are deleted symmetrically and iteratively to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then, inaccurate circles are removed with a distance-weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is determined after these phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-06-30
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.
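A toy residual CNN in PyTorch illustrating the open/closed-eye classification setup; the architecture, input size, and channel counts are assumptions and far smaller than the deep residual network used in the paper.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """3x3-3x3 residual block with identity shortcut."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)

class EyeStateNet(nn.Module):
    """Binary open/closed-eye classifier on grayscale eye crops."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            ResBlock(16), ResBlock(16),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))

    def forward(self, x):               # x: (batch, 1, H, W)
        return self.net(x)

logits = EyeStateNet()(torch.randn(4, 1, 64, 64))
print(logits.shape)                     # torch.Size([4, 2])
```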
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Sievers, Aaron; Bosiek, Katharina; Bisch, Marc; Dreessen, Chris; Riedel, Jascha; Froß, Patrick; Hausmann, Michael; Hildenbrand, Georg
2017-01-01
In genome analysis, k-mer-based comparison methods have become standard tools. However, even though they deliver reliable results, other algorithms seem to work better in some cases. To improve k-mer-based DNA sequence analysis and comparison, we checked whether adding positional resolution is beneficial for finding and/or comparing interesting organizational structures. A simple but efficient algorithm for extracting and saving local k-mer spectra (frequency distributions of k-mers) was developed and used. The results were analyzed by including positional information, based on visualizations as genomic maps and by applying basic vector correlation methods. This analysis concentrated on small word lengths (1 ≤ k ≤ 4) and on relatively small viral genomes of Papillomaviridae and Herpesviridae, while also checking its usability for larger sequences, namely human chromosome 2 and the homologous chromosomes (2A, 2B) of the chimpanzee. Using this alignment-free analysis, several regions with specific characteristics in Papillomaviridae and Herpesviridae formerly identified by independent, mostly alignment-based methods were confirmed. Correlations between the k-mer content and several genes in these genomes were found, showing similarities between classified and unclassified viruses, which may be potentially useful for further taxonomic research. Furthermore, previously unknown k-mer correlations in the genomes of Human Herpesviruses (HHVs), which are probably of major biological function, are found and described. Using the currently known chromosomes of human and chimpanzee, identities between the species were reproduced on every analyzed chromosome, demonstrating the feasibility of our approach for large data sets of complex genomes. Based on these results, we suggest k-mer analysis with positional resolution as a method for closing the gap between the effectiveness of alignment-based methods (like NCBI BLAST) and the high pace of standard k-mer analysis. PMID:28422050
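A small sketch of extracting local k-mer spectra with positional resolution using sliding windows; the k, window, and step values are arbitrary choices, and the repeated "ACGT" string is a stand-in for a real genome sequence.

```python
from collections import Counter
from itertools import product

def local_kmer_spectra(seq, k=2, window=1000, step=500):
    """Extract k-mer frequency spectra in sliding windows, keeping the
    window start position so spectra can be mapped back onto the
    genome (e.g. for genomic-map visualization or correlation)."""
    alphabet = [''.join(p) for p in product("ACGT", repeat=k)]
    spectra = []
    for start in range(0, len(seq) - window + 1, step):
        chunk = seq[start:start + window]
        counts = Counter(chunk[i:i + k] for i in range(len(chunk) - k + 1))
        total = sum(counts[a] for a in alphabet) or 1
        spectra.append((start, [counts[a] / total for a in alphabet]))
    return alphabet, spectra

alphabet, spectra = local_kmer_spectra("ACGT" * 5000, k=2)
print(alphabet[:4], spectra[0][1][:4])
```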
Dai, Sheng-Yun; Xu, Bing; Shi, Xin-Yuan; Xu, Xiang; Sun, Ying-Qiang; Qiao, Yan-Jiang
2017-03-01
This study aims to propose a continual improvement strategy based on quality by design (QbD). As an example, an ultra-high performance liquid chromatography (UPLC) method was developed to accomplish the method transfer from HPLC to UPLC for Panax notoginseng saponins (PNS) and to achieve the continual improvement of the PNS method based on QbD. A Plackett-Burman screening design and a Box-Behnken optimization design were employed to further understand the relationship between the critical method parameters (CMPs) and critical method attributes (CMAs), and the Bayesian design space was then built. The resolution of the critical peaks (ginsenoside Rg₁ and ginsenoside Re) was over 2.0 and the analysis time was less than 17 min for a method chosen from the design space, with 20% initial acetonitrile concentration, 10 min of isocratic time and a gradient slope of 6%•min⁻¹. Finally, the optimal method was validated by an accuracy profile. Based on the same analytical target profile (ATP), a comparison of HPLC and UPLC covering the chromatographic method, CMA identification, the CMP-CMA model and the system suitability test (SST) indicated that the UPLC method could shorten the analysis time, improve the critical separation and satisfy the requirements of the SST. In all, the HPLC method could be replaced by UPLC for the quantitative analysis of PNS. Copyright© by the Chinese Pharmaceutical Association.
Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.
ERIC Educational Resources Information Center
Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.
This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…
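For concreteness, the sketch below implements the classic (Horn's) form of parallel analysis, one of the method families compared above: retain factors whose observed correlation-matrix eigenvalues exceed a chosen quantile of eigenvalues from random data of the same shape. The simulated data set is invented.

```python
import numpy as np

def parallel_analysis(X, n_iter=200, quantile=95, seed=0):
    """Horn's parallel analysis: compare observed eigenvalues of the
    correlation matrix against the quantile of eigenvalues obtained
    from random normal data with the same n and p."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        R = rng.standard_normal((n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    threshold = np.percentile(rand, quantile, axis=0)
    return int(np.sum(obs > threshold))

# Data with one common factor loading on the first three variables.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 10))
X[:, :3] += rng.standard_normal((300, 1))
print("factors to retain:", parallel_analysis(X))
```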
Chen, Yikai; Wang, Kai; Xu, Chengcheng; Shi, Qin; He, Jie; Li, Peiqing; Shi, Ting
2018-05-19
To overcome the limitations of previous highway alignment safety evaluation methods, this article presents a highway alignment safety evaluation method based on fault tree analysis (FTA) and the characteristics of vehicle safety boundaries, within the framework of dynamic modeling of the driver-vehicle-road system. Approaches for categorizing the vehicle failure modes while driving on highways and the corresponding safety boundaries were comprehensively investigated based on vehicle system dynamics theory. Then, an overall crash probability model was formulated based on FTA considering the risks of 3 failure modes: losing steering capability, losing track-holding capability, and rear-end collision. The proposed method was implemented on a highway segment between Bengbu and Nanjing in China. A driver-vehicle-road multibody dynamics model was developed based on the 3D alignments of the Bengbu to Nanjing section of the Ning-Luo expressway using Carsim, and the dynamics indices, such as sideslip angle and yaw rate, were obtained. Then, the average crash probability of each road section was calculated with a fixed-length method. Finally, the average crash probability was validated against the crash frequency per kilometer to demonstrate the accuracy of the proposed method. The results of the regression analysis and correlation analysis indicated good consistency between the safety evaluation results and the crash data, and showed that the proposed method outperformed the safety evaluation methods used in previous studies. The proposed method has the potential to be used in practical engineering applications to identify crash-prone locations and alignment deficiencies on highways in the planning and design phases, as well as those in service.
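The FTA combination of the three failure modes can be illustrated with the standard OR-gate identity for independent basic events; the per-mode probabilities below are invented, and the paper's full model conditions these probabilities on the vehicle-dynamics simulation rather than treating them as constants.

```python
def or_gate(probabilities):
    """Top-event probability for an OR gate with independent basic
    events: P = 1 - prod(1 - p_i)."""
    p = 1.0
    for pi in probabilities:
        p *= (1.0 - pi)
    return 1.0 - p

# Hypothetical per-section probabilities of the three failure modes:
p_steering, p_track_holding, p_rear_end = 1e-4, 5e-5, 2e-4
print(f"overall crash probability = "
      f"{or_gate([p_steering, p_track_holding, p_rear_end]):.2e}")
```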
Pabon, Peter; Ternström, Sten; Lamarche, Anick
2011-06-01
To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. A morphologic modeling technique based on Fourier descriptors (FDs), which essentially involves resampling the curve of the contour, is applied to the VRP contour; the technique is assessed and compared to density-based VRP averaging methods that use the overlap count. VRP contours can be usefully described and compared using FDs. The method also permits the visualization of the local covariation along the contour average; for example, the FD-based analysis shows that the population variance for ensembles of VRP contours is usually smallest at the upper left part of the VRP. To illustrate the method's advantages and possible further application, graphs are given that compare the averaged contours from different authors and recording devices, for normal, trained, and untrained male and female voices as well as for child voices. The proposed technique allows any VRP shape to be brought to the same uniform base, on which VRP contours or contour elements from a variety of sources may be placed within the same graph for comparison and statistical analysis.
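A minimal sketch of the Fourier-descriptor step: resample a closed contour uniformly by arc length, treat the points as complex numbers, and keep the low-order FFT coefficients (normalizing by the first harmonic for scale invariance). The elliptical toy contour stands in for a real VRP contour.

```python
import numpy as np

def fourier_descriptors(contour, n_descriptors=20, n_resample=256):
    """Fourier descriptors of a closed 2-D contour: uniform arc-length
    resampling, complex representation x + iy, FFT, then low-order
    coefficients scaled by |c1| (drops the DC term for translation
    invariance)."""
    pts = np.asarray(contour, dtype=float)
    pts = np.vstack([pts, pts[:1]])            # close the contour
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n_resample, endpoint=False)
    x = np.interp(t, d, pts[:, 0])
    y = np.interp(t, d, pts[:, 1])
    c = np.fft.fft(x + 1j * y)
    return c[1:n_descriptors + 1] / abs(c[1])

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ellipse = np.column_stack([40 * np.cos(theta), 25 * np.sin(theta)])  # toy "VRP"
print(np.round(abs(fourier_descriptors(ellipse))[:5], 3))
```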
ERIC Educational Resources Information Center
Kim, Kyung Hi
2014-01-01
This research, based on a case study of vulnerable children in Korea, used a mixed methods transformative approach to explore strategies to support and help disadvantaged children. The methodological approach includes three phases: a mixed methods contextual analysis, a qualitative dominant analysis based on Sen's capability approach and critical…
Stepwise Analysis of Differential Item Functioning Based on Multiple-Group Partial Credit Model.
ERIC Educational Resources Information Center
Muraki, Eiji
1999-01-01
Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…
Acoustics based assessment of respiratory diseases using GMM classification.
Mayorga, P; Druzgalski, C; Morelos, R L; Gonzalez, O H; Vidales, J
2010-01-01
The focus of this paper is to present a method utilizing lung sounds for a quantitative assessment of patient health as it relates to respiratory disorders. To accomplish this, applicable traditional techniques from the speech processing domain were utilized to evaluate lung sounds obtained with a digital stethoscope. Traditional methods utilized in the evaluation of asthma involve auscultation and spirometry, but the utilization of more sensitive electronic stethoscopes, which are currently available, and the application of quantitative signal analysis methods offer opportunities for improved diagnosis. In particular, we propose an acoustic evaluation methodology based on Gaussian Mixture Models (GMMs), which should assist in broader analysis, identification, and diagnosis of asthma based on the frequency-domain analysis of wheezing and crackles.
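A compact sketch of the GMM classifier idea borrowed from speech processing: one mixture fitted per class on short-time feature vectors, with classification by total log-likelihood. The Gaussian stand-ins below replace real MFCC-like features extracted from lung sounds.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(features_by_class, n_components=4, seed=0):
    """Fit one GMM per class (e.g. 'normal' vs 'wheeze') on per-frame
    spectral feature vectors."""
    return {label: GaussianMixture(n_components, covariance_type="diag",
                                   random_state=seed).fit(feats)
            for label, feats in features_by_class.items()}

def classify(gmms, feats):
    """Assign the class whose GMM gives the highest average
    log-likelihood over the recording's frames."""
    return max(gmms, key=lambda label: gmms[label].score(feats))

rng = np.random.default_rng(0)
train = {"normal": rng.normal(0.0, 1, (500, 13)),
         "wheeze": rng.normal(1.5, 1, (500, 13))}   # stand-ins for MFCC frames
gmms = train_gmms(train)
print(classify(gmms, rng.normal(1.5, 1, (50, 13))))  # -> "wheeze"
```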
Tooth shape optimization of brushless permanent magnet motors for reducing torque ripples
NASA Astrophysics Data System (ADS)
Hsu, Liang-Yi; Tsai, Mi-Ching
2004-11-01
This paper presents a tooth shape optimization method based on a genetic algorithm to reduce the torque ripple of brushless permanent magnet motors under two different magnetization directions. The analysis of this design method mainly focuses on magnetic saturation and cogging torque, and the computation of the optimization process is based on an equivalent magnetic network circuit. The simulation results, obtained from finite element analysis, are used to confirm the accuracy and performance. Finite element analysis results for different tooth shapes are compared to show the effectiveness of the proposed method.
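A bare-bones sketch of the genetic-algorithm loop such a tooth-shape optimization could use (tournament selection, blend crossover, Gaussian mutation, one-elite survival); the fitness function below is a toy stand-in for the equivalent-magnetic-network evaluation of torque ripple, not the authors' objective.

```python
import numpy as np

def genetic_search(fitness, bounds, pop=40, gens=100, seed=0):
    """Real-coded genetic algorithm: tournament selection, blend
    crossover with a shuffled partner, Gaussian mutation, and elitism.
    `fitness` is any callable to minimize over box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    P = rng.uniform(lo, hi, (pop, len(lo)))
    for _ in range(gens):
        f = np.array([fitness(x) for x in P])
        elite = P[np.argmin(f)].copy()
        i, j = rng.integers(0, pop, (2, pop))        # tournament pairs
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
        mates = parents[rng.permutation(pop)]
        alpha = rng.uniform(0, 1, (pop, 1))
        P = np.clip(alpha * parents + (1 - alpha) * mates
                    + rng.normal(0, 0.05 * (hi - lo), parents.shape), lo, hi)
        P[0] = elite                                  # keep best-so-far
    f = np.array([fitness(x) for x in P])
    return P[np.argmin(f)], f.min()

# Toy stand-in for a torque-ripple objective over three shape parameters.
best, val = genetic_search(lambda x: float(np.sum((x - 0.3) ** 2)), [(0, 1)] * 3)
print(np.round(best, 3), round(val, 6))
```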
Reuse of imputed data in microarray analysis increases imputation efficiency
Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su
2004-01-01
Background: The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analysis require a complete data set. A few imputation methods for DNA microarray data have been introduced, but the efficiency of the methods was low and the validity of imputed values in these methods had not been fully checked. Results: We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. This imputes the missing values sequentially from the gene having the least missing values, and uses the imputed values for the later imputation. Although it uses the imputed values, the efficiency of this new method is greatly improved in its accuracy and computational complexity over the conventional KNN-based method and other methods based on maximum likelihood estimation. The performance of SKNN was in particular higher than other imputation methods for data with high missing rates and large numbers of experiments. Application of Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased computational time proportional to the number of iterations. The Multiple Imputation (MI) method, which is well known but not applied previously to microarray data, showed a similarly high accuracy as the SKNN method, with slightly higher dependency on the types of data sets. Conclusions: Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful to save the data of some microarray experiments which have high amounts of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
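A minimal sketch of the sequential KNN idea described above: rows (genes) with the fewest missing entries are imputed first and then reused as neighbors for later rows. The distance definition, k, and the demo matrix are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def sknn_impute(X, k=1):
    """Sequential KNN imputation: process rows from fewest to most
    missing values; fill each missing entry with the mean of the k
    nearest complete (or already imputed) rows, measured by Euclidean
    distance on the shared observed columns."""
    X = np.array(X, dtype=float)
    complete = [r for r in range(len(X)) if not np.isnan(X[r]).any()]
    order = sorted((r for r in range(len(X)) if np.isnan(X[r]).any()),
                   key=lambda r: np.isnan(X[r]).sum())
    for r in order:
        miss = np.isnan(X[r])
        obs = ~miss
        d = np.sqrt(((X[complete][:, obs] - X[r, obs]) ** 2).mean(axis=1))
        nearest = np.array(complete)[np.argsort(d)[:k]]
        X[r, miss] = X[nearest][:, miss].mean(axis=0)
        complete.append(r)           # reuse the imputed row, as in SKNN
    return X

X = [[1.0, 2.0, 3.0], [1.1, np.nan, 3.2], [0.9, 2.1, np.nan], [5.0, 6.0, 7.0]]
print(np.round(sknn_impute(X), 2))
```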
Dong, Xinran; Hao, Yun; Wang, Xiao; Tian, Weidong
2016-01-01
Pathway or gene set over-representation analysis (ORA) has become a routine task in functional genomics studies. However, currently widely used ORA tools employ statistical methods such as Fisher’s exact test that reduce a pathway into a list of genes, ignoring the constitutive functional non-equivalent roles of genes and the complex gene-gene interactions. Here, we develop a novel method named LEGO (functional Link Enrichment of Gene Ontology or gene sets) that takes into consideration these two types of information by incorporating network-based gene weights in ORA analysis. In three benchmarks, LEGO achieves better performance than Fisher and three other network-based methods. To further evaluate LEGO’s usefulness, we compare LEGO with five gene expression-based and three pathway topology-based methods using a benchmark of 34 disease gene expression datasets compiled by a recent publication, and show that LEGO is among the top-ranked methods in terms of both sensitivity and prioritization for detecting target KEGG pathways. In addition, we develop a cluster-and-filter approach to reduce the redundancy among the enriched gene sets, making the results more interpretable to biologists. Finally, we apply LEGO to two lists of autism genes, and identify relevant gene sets to autism that could not be found by Fisher. PMID:26750448
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
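A compact Python analogue of the proposed simulation (the actual package is bmem in R): for each Monte Carlo replication, simulate data from the mediation model, bootstrap the indirect effect a*b, and count the replications whose percentile interval excludes zero. Replication and bootstrap counts are kept small here for speed; the normal data below could be replaced by skewed draws to mimic the nonnormal case.

```python
import numpy as np

def mediation_power(n=100, a=0.3, b=0.3, reps=100, boots=300, seed=0):
    """Monte Carlo power for detecting the indirect effect a*b with a
    percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)
        y = b * m + rng.standard_normal(n)
        est = np.empty(boots)
        for i in range(boots):
            idx = rng.integers(0, n, n)
            xb, mb, yb = x[idx], m[idx], y[idx]
            a_hat = np.polyfit(xb, mb, 1)[0]          # a path: m ~ x
            Z = np.column_stack([mb, xb, np.ones(n)])  # b path: y ~ m + x
            b_hat = np.linalg.lstsq(Z, yb, rcond=None)[0][0]
            est[i] = a_hat * b_hat
        lo, hi = np.percentile(est, [2.5, 97.5])
        hits += (lo > 0) or (hi < 0)
    return hits / reps

print(mediation_power())   # power to detect a*b = 0.09 at n = 100
```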
NASA Technical Reports Server (NTRS)
Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.
1977-01-01
Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 database engines. Rotating components are estimated by a preliminary design procedure in which blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables. The development and justification of the selected method, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.
New method for designing serial resonant power converters
NASA Astrophysics Data System (ADS)
Hinov, Nikolay
2017-12-01
The current work presents a comprehensive method for the design of series resonant power converters. The method is based on a new simplified approach to the analysis of this kind of power electronic device: it assumes resonant operation when deriving the relation between input and output voltage, regardless of the actual operating mode (switching frequency below or above the resonant frequency). This approach is named the 'quasiresonant method of analysis', because it is based on assuming that all operational modes are 'sort of' resonant modes. The error introduced by this assumption is estimated and compared to the classical analysis. The quasiresonant method of analysis offers two main advantages, speed and ease of design of the presented power circuits, and is therefore very useful in practice and in teaching power electronics. Its applicability is proven with mathematical modelling and computer simulation.
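For the series resonant tank itself, the classical first-harmonic result below shows why assuming resonance is attractive: the gain is exactly 1 at the resonant frequency, and evaluating off-resonance quantifies the error such an assumption introduces. The component values are examples, and this is standard RLC divider analysis rather than the paper's full quasiresonant procedure.

```python
import numpy as np

# Voltage transfer magnitude of a series resonant tank loaded by R:
# |Vo/Vi| = 1 / sqrt(1 + Q^2 (w/w0 - w0/w)^2),  Q = sqrt(L/C) / R.
L, C, R = 100e-6, 100e-9, 10.0          # example tank values
w0 = 1 / np.sqrt(L * C)                 # resonant angular frequency
Q = np.sqrt(L / C) / R                  # loaded quality factor

def gain(w):
    return 1 / np.sqrt(1 + Q**2 * (w / w0 - w0 / w) ** 2)

for ratio in (0.8, 0.9, 1.0, 1.1, 1.2):
    print(f"f/f0 = {ratio:.1f}  ->  |Vo/Vi| = {gain(ratio * w0):.3f}")
```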
Probabilistic finite elements for transient analysis in nonlinear continua
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Mani, A.
1985-01-01
The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method are demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM-based computer programs.
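A small sketch of the eigenvalue orthogonalization step: decompose the covariance of the correlated variables and keep only the dominant modes, so the random field is expressed through a reduced set of uncorrelated unit-variance variables. The covariance matrix below is invented for illustration.

```python
import numpy as np

def decorrelate(cov, n_keep=None):
    """Eigenvalue orthogonalization used in second-moment analysis:
    correlated variables X (covariance `cov`) are written as
    X = mean + Phi @ Z, where Z are uncorrelated unit-variance
    variables and Phi scales the eigenvectors. Keeping only the
    largest-eigenvalue modes gives the reduced uncorrelated set."""
    lam, phi = np.linalg.eigh(cov)
    order = np.argsort(lam)[::-1]                # sort modes, largest first
    lam, phi = lam[order], phi[:, order]
    if n_keep:
        lam, phi = lam[:n_keep], phi[:, :n_keep]
    return phi * np.sqrt(lam)                    # transformation matrix Phi

cov = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.4],
                [0.2, 0.4, 1.0]])
Phi = decorrelate(cov, n_keep=2)                 # 2 uncorrelated modes of 3
print(np.round(Phi @ Phi.T, 2))                  # approximates the covariance
```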
NASA Astrophysics Data System (ADS)
Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan
In order to analyze the effect of engine vibration on the cab noise of construction machinery in multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. Firstly, the intrinsic mode functions (IMFs) of the vibration and noise signals were obtained by the EEMD method, and the IMFs having the same frequency bands were selected. Secondly, we calculated the spectral correlation coefficients between the selected IMFs, identifying the main frequency bands in which engine vibration has a significant impact on cab noise. Thirdly, the dominant frequencies were picked out and analyzed by the spectral analysis method. The results show that the main frequency bands and dominant frequencies in which engine vibration seriously affects cab noise can be identified effectively by the proposed method, which provides effective guidance for noise reduction in construction machinery.
Meshfree truncated hierarchical refinement for isogeometric analysis
NASA Astrophysics Data System (ADS)
Atri, H. R.; Shojaee, S.
2018-05-01
In this paper, the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can easily be defined, which provides an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Tracking B-Cell Repertoires and Clonal Histories in Normal and Malignant Lymphocytes.
Weston-Bell, Nicola J; Cowan, Graeme; Sahota, Surinder S
2017-01-01
Methods for tracking B-cell repertoires and clonal history in normal and malignant B-cells based on immunoglobulin variable region (IGV) gene analysis have developed rapidly with the advent of massively parallel next-generation sequencing (mpNGS) protocols. mpNGS permits a depth of analysis of IGV genes not hitherto feasible, and presents bioinformatics challenges that can be readily met by current pipelines. This strategy offers a potential resolution of B-cell usage at a depth that may fully capture the natural state in a given biological setting. Conventional methods based on RT-PCR amplification and Sanger sequencing are also available where mpNGS is not accessible. Each method offers distinct advantages. Conventional methods for IGV gene sequencing are readily adaptable to most laboratories and provide an ease of analysis to capture salient features of B-cell use. This chapter describes two methods in detail for analysis of IGV genes, mpNGS and conventional RT-PCR with Sanger sequencing.
The Researches on Damage Detection Method for Truss Structures
NASA Astrophysics Data System (ADS)
Wang, Meng Hong; Cao, Xiao Nan
2018-06-01
This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
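The damage index itself reduces to the angle between two FRF vectors; a minimal sketch, with placeholder FRFs standing in for measured data:

```python
# Angle between baseline and damaged FRF vectors as a damage index;
# the FRF vectors here are placeholders for measured data.
import numpy as np

def frf_angle(h_ref, h_dam):
    """Angle (radians) between two frequency response function vectors."""
    h_ref, h_dam = np.abs(h_ref).ravel(), np.abs(h_dam).ravel()
    c = np.dot(h_ref, h_dam) / (np.linalg.norm(h_ref) * np.linalg.norm(h_dam))
    return np.arccos(np.clip(c, -1.0, 1.0))

h_healthy = np.random.rand(256)            # placeholder baseline FRF
h_damaged = h_healthy + 0.2 * np.random.rand(256)
print(f"damage index = {frf_angle(h_healthy, h_damaged):.4f} rad")
```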
2011-01-01
Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectrums of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup. The ease of use and installation of the current method provides clinicians and researchers with a low-cost solution for monitoring the progression of PD and the response to treatment. In summary, the proposed method provides an alternative way to perform gait analysis for patients with PD. PMID:22074315
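The KPCA feature-extraction step can be sketched with scikit-learn; the flattened silhouette matrix, kernel choice, and component count here are illustrative assumptions, not the study's settings.

```python
# Kernel PCA feature extraction from silhouette-like data, assuming
# scikit-learn; the data matrix and parameters are illustrative.
import numpy as np
from sklearn.decomposition import KernelPCA

# Rows: binary silhouette frames flattened to vectors (placeholder data).
frames = np.random.randint(0, 2, size=(200, 64 * 48)).astype(float)

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-4)
features = kpca.fit_transform(frames)      # low-dimensional gait features
print(features.shape)                      # (200, 10)
```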
Model-based clustering for RNA-seq data.
Si, Yaqing; Liu, Peng; Li, Pinghua; Brutnell, Thomas P
2014-01-15
RNA-seq technology has been widely adopted as an attractive alternative to microarray-based methods to study global gene expression. However, robust statistical tools to analyze these complex datasets are still lacking. By grouping genes with similar expression profiles across treatments, cluster analysis provides insight into gene functions and networks, and hence is an important technique for RNA-seq data analysis. In this manuscript, we derive clustering algorithms based on appropriate probability models for RNA-seq data. An expectation-maximization algorithm and another two stochastic versions of expectation-maximization algorithms are described. In addition, a strategy for initialization based on likelihood is proposed to improve the clustering algorithms. Moreover, we present a model-based hybrid-hierarchical clustering method to generate a tree structure that allows visualization of relationships among clusters as well as flexibility of choosing the number of clusters. Results from both simulation studies and analysis of a maize RNA-seq dataset show that our proposed methods provide better clustering results than alternative methods such as the K-means algorithm and hierarchical clustering methods that are not based on probability models. An R package, MBCluster.Seq, has been developed to implement our proposed algorithms. This R package provides fast computation and is publicly available at http://www.r-project.org
NASA Astrophysics Data System (ADS)
Su, Ray Kai Leung; Lee, Chien-Liang
2013-06-01
This study presents a seismic fragility analysis and ultimate spectral displacement assessment of regular low-rise masonry infilled (MI) reinforced concrete (RC) buildings using a coefficient-based method. The coefficient-based method does not require a complicated finite element analysis; instead, it is a simplified procedure for assessing the spectral acceleration and displacement of buildings subjected to earthquakes. A regression analysis was first performed to obtain the best-fitting equations for the inter-story drift ratio (IDR) and period shift factor of low-rise MI RC buildings in response to the peak ground acceleration of earthquakes, using published results obtained from shaking table tests. Both spectral acceleration- and spectral displacement-based fragility curves under various damage states (in terms of IDR) were then constructed using the coefficient-based method. Finally, the spectral displacements of low-rise MI RC buildings at the ultimate (or near-collapse) state obtained from this paper and the literature were compared. The simulation results indicate that the fragility curves obtained from this study and other previous work correspond well. Furthermore, most of the spectral displacements of low-rise MI RC buildings at the ultimate state from the literature fall within the bounded spectral displacements predicted by the coefficient-based method.
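Fragility curves of this kind are commonly expressed as lognormal CDFs of the demand measure; a minimal sketch under that assumption, with illustrative median capacities and dispersions rather than the paper's fitted coefficients:

```python
# Lognormal fragility curves; median capacities and dispersions are
# illustrative, not the paper's fitted values.
import numpy as np
from scipy.stats import norm

def fragility(sd, median, beta):
    """P(damage state reached | spectral displacement sd)."""
    return norm.cdf(np.log(sd / median) / beta)

sd = np.linspace(0.1, 100.0, 200)          # spectral displacement (mm)
p_moderate = fragility(sd, median=20.0, beta=0.5)
p_ultimate = fragility(sd, median=60.0, beta=0.6)
print(p_moderate[50], p_ultimate[50])
```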
Selecting supplier combination based on fuzzy multicriteria analysis
NASA Astrophysics Data System (ADS)
Han, Zhi-Qiu; Luo, Xin-Xing; Chen, Xiao-Hong; Yang, Wu-E.
2015-07-01
Existing multicriteria analysis (MCA) methods are probably ineffective in selecting a supplier combination. Thus, an MCA-based fuzzy 0-1 programming method is introduced. The program relates to a simple MCA matrix that is used to select a single supplier. By solving the program, the most feasible combination of suppliers is selected. Importantly, this result differs from selecting suppliers one by one according to a single-selection order, which is used to rank individual suppliers in existing MCA methods. An example highlights this difference and illustrates the proposed method.
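For small problems, the 0-1 selection can even be sketched by brute-force enumeration; the scores, capacities, and demand below are hypothetical, and the feasibility rule is an illustrative stand-in for the paper's fuzzy constraints.

```python
# Brute-force 0-1 selection of a supplier combination over an MCA score
# matrix; scores, capacities, and demand are hypothetical.
from itertools import product
import numpy as np

scores = np.array([0.62, 0.75, 0.58, 0.70])    # aggregated MCA score per supplier
capacity = np.array([40, 30, 50, 20])          # units each supplier can deliver
demand = 80

best, best_score = None, -np.inf
for x in product([0, 1], repeat=len(scores)):  # every 0-1 selection vector
    x = np.array(x)
    if capacity @ x >= demand:                 # feasibility: demand covered
        s = scores @ x / max(x.sum(), 1)       # mean score of chosen suppliers
        if s > best_score:
            best, best_score = x, s
print(best, best_score)
```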
NASA Technical Reports Server (NTRS)
Ryan, Robert S.; Townsend, John S.
1993-01-01
The prospective improvement of probabilistic methods for space program analysis/design entails the further development of theories, codes, and tools which match specific areas of application, the drawing of lessons from previous uses of probability and statistics data bases, the enlargement of data bases (especially in the field of structural failures), and the education of engineers and managers on the advantages of these methods. An evaluation is presently made of the current limitations of probabilistic engineering methods. Recommendations are made for specific applications.
Jun, Goo; Flickinger, Matthew; Hetrick, Kurt N.; Romm, Jane M.; Doheny, Kimberly F.; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min
2012-01-01
DNA sample contamination is a serious problem in DNA sequencing studies and may result in systematic genotype misclassification and false positive associations. Although methods exist to detect and filter out cross-species contamination, few methods to detect within-species sample contamination are available. In this paper, we describe methods to identify within-species DNA sample contamination based on (1) a combination of sequencing reads and array-based genotype data, (2) sequence reads alone, and (3) array-based genotype data alone. Analysis of sequencing reads allows contamination detection after sequence data is generated but prior to variant calling; analysis of array-based genotype data allows contamination detection prior to generation of costly sequence data. Through a combination of analysis of in silico and experimentally contaminated samples, we show that our methods can reliably detect and estimate levels of contamination as low as 1%. We evaluate the impact of DNA contamination on genotype accuracy and propose effective strategies to screen for and prevent DNA contamination in sequencing studies. PMID:23103226
Komorowski, Dariusz; Pietraszek, Stanislaw
2016-01-01
This paper presents the analysis of multi-channel electrogastrographic (EGG) signals using the continuous wavelet transform based on the fast Fourier transform (CWTFT). The EGG analysis was based on the determination of several signal parameters such as dominant frequency (DF), dominant power (DP) and index of normogastria (NI). The use of the continuous wavelet transform (CWT) allows better localization of the frequency components in the analyzed signals than the commonly used short-time Fourier transform (STFT). Such an analysis is possible by means of a variable-width window, which corresponds to the time scale of observation (analysis). Wavelet analysis allows using long time windows when more precise low-frequency information is needed, and shorter windows for high-frequency information. Since the classic CWT requires considerable computing power and time, especially when applied to the analysis of long signals, the authors used a CWT analysis based on the fast Fourier transform (FFT). The CWT was obtained using properties of the circular convolution to improve the speed of calculation. This method makes it possible to obtain results for relatively long EGG records in a fairly short time, much faster than the classical methods based on running spectrum analysis (RSA). In this study the authors demonstrate the possibility of a parametric analysis of EGG signals using the continuous wavelet transform, which is a completely new solution. The results obtained with the described method are illustrated with an analysis of four-channel EGG recordings performed after a non-caloric meal.
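The FFT-based CWT idea can be sketched compactly: transform once, multiply by the conjugate spectrum of a scaled wavelet, and inverse-transform per scale. The Morlet wavelet, sampling rate, and test signal below are illustrative, not the authors' implementation.

```python
# CWT via FFT-based circular convolution with a Morlet wavelet; the
# EGG-like test signal, sampling rate, and scales are illustrative.
import numpy as np

def morlet(n, scale, fs, w0=6.0):
    t = (np.arange(n) - n // 2) / fs
    wav = np.exp(1j * w0 * t / scale) * np.exp(-0.5 * (t / scale) ** 2)
    return wav / np.sqrt(scale)

def cwt_fft(x, scales, fs):
    n = len(x)
    X = np.fft.fft(x)                          # transform the signal once
    out = np.empty((len(scales), n), dtype=complex)
    for k, s in enumerate(scales):
        W = np.fft.fft(np.fft.ifftshift(morlet(n, s, fs)))
        out[k] = np.fft.ifft(X * np.conj(W))   # circular convolution
    return out

fs = 4.0                                       # EGG sampling rate (Hz), assumed
t = np.arange(0, 300, 1 / fs)
egg = np.sin(2 * np.pi * 0.05 * t)             # ~3 cpm normogastric component
coeffs = cwt_fft(egg, scales=np.geomspace(1, 60, 30), fs=fs)
print(np.abs(coeffs).shape)
```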
A two-step FEM-SEM approach for wave propagation analysis in cable structures
NASA Astrophysics Data System (ADS)
Zhang, Songhan; Shen, Ruili; Wang, Tao; De Roeck, Guido; Lombaert, Geert
2018-02-01
Vibration-based methods are among the most widely studied in structural health monitoring (SHM). It is well known, however, that the low-order modes, characterizing the global dynamic behaviour of structures, are relatively insensitive to local damage. Such local damage may be easier to detect by methods based on wave propagation which involve local high frequency behaviour. The present work considers the numerical analysis of wave propagation in cables. A two-step approach is proposed which allows taking into account the cable sag and the distribution of the axial forces in the wave propagation analysis. In the first step, the static deformation and internal forces are obtained by the finite element method (FEM), taking into account geometric nonlinear effects. In the second step, the results from the static analysis are used to define the initial state of the dynamic analysis which is performed by means of the spectral element method (SEM). The use of the SEM in the second step of the analysis allows for a significant reduction in computational costs as compared to a FE analysis. This methodology is first verified by means of a full FE analysis for a single stretched cable. Next, simulations are made to study the effects of damage in a single stretched cable and a cable-supported truss. The results of the simulations show how damage significantly affects the high frequency response, confirming the potential of wave propagation based methods for SHM.
1975-12-01
is determined by combustion in an induction furnace with iron as a flux. The methods for moisture, loss in ignition, water-soluble matter, acid... determination of talc in nitrocellulose-base propellants. The first method (which is the method recommended for the usual nitrocellulose-base...In the present report an improved scheme is proposed for the analysis of talc. The silica and magnesium oxide are determined by fusion with sodium
NEAT: an efficient network enrichment analysis test.
Signorelli, Mirko; Vinciotti, Veronica; Wit, Ernst C
2016-09-05
Network enrichment analysis is a powerful method, which integrates gene enrichment analysis with the information on relationships between genes provided by gene networks. Existing tests for network enrichment analysis deal only with undirected networks, can be computationally slow, and are based on normality assumptions. We propose NEAT, a test for network enrichment analysis. The test is based on the hypergeometric distribution, which naturally arises as the null distribution in this context. NEAT can be applied not only to undirected, but to directed and partially directed networks as well. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as that of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by inspecting associations between functional sets themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based tests. The method is implemented in the R package neat, which can be freely downloaded from CRAN ( https://cran.r-project.org/package=neat ).
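The hypergeometric null at the heart of such tests is easy to illustrate (NEAT's network statistic is more elaborate than this plain overlap test); the counts below are made up.

```python
# Hypergeometric enrichment p-value for the overlap of two gene sets;
# all counts are illustrative.
from scipy.stats import hypergeom

N = 6000        # genes in the universe
K = 120         # genes in the functional set (e.g. a KEGG pathway)
n = 250         # genes in the query set (e.g. ESR targets)
k = 14          # observed overlap between the two sets

p = hypergeom.sf(k - 1, N, K, n)   # P(overlap >= k) under the null
print(f"enrichment p-value = {p:.3g}")
```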
2012-06-01
Military Operational Research, with special theme ‘The use of “soft” methods in OR’. OR52 (7-9 September 2010, Royal Holloway University of London...on human judgement. Judgement-based OA applies the methods of ‘Soft Operational Research’ developed in academia. It has appeared, however, that the...similarity between judgemental methods in operational research practice and a number of other modes of professional analytical practice. The closest
Method of Testing and Predicting Failures of Electronic Mechanical Systems
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, Frances A.
1996-01-01
A method employing a knowledge base of human expertise, comprising a reliability model analysis implemented for diagnostic routines, is disclosed. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting system operation. The reliability analysis contains a wealth of human expertise that is used to build automatic diagnostic routines and provides a knowledge base that can be used to solve other artificial intelligence problems.
Modeling Eye Gaze Patterns in Clinician-Patient Interaction with Lag Sequential Analysis
Montague, E; Xu, J; Asan, O; Chen, P; Chewning, B; Barrett, B
2011-01-01
Objective The aim of this study was to examine whether lag-sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multi-user health care settings where trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Background Nonverbal communication patterns are important aspects of clinician-patient interactions and may impact patient outcomes. Method Eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag-sequential method to identify significant behavior sequences. Lag-sequential analysis included both event-based lag and time-based lag. Results Results from event-based lag analysis showed that the patients’ gaze followed that of clinicians, while clinicians did not follow patients. Time-based sequential analysis showed that responses from the patient usually occurred within two seconds after the initial behavior of the clinician. Conclusion Our data suggest that the clinician’s gaze significantly affects the medical encounter but not the converse. Application Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs. PMID:22046723
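A minimal sketch of event-based lag-1 sequential analysis with a simplified Allison-Liker-style z-score; the coded gaze-event stream is hypothetical and the formula is a common textbook variant, not necessarily the authors' exact procedure.

```python
# Lag-1 transition tallies and simplified Allison-Liker-style z-scores;
# the coded event stream is hypothetical.
import numpy as np

events = list("ABABBABAABAB")              # e.g. A=clinician gaze, B=patient gaze
codes = sorted(set(events))
idx = {c: i for i, c in enumerate(codes)}

counts = np.zeros((len(codes), len(codes)))
for a, b in zip(events[:-1], events[1:]):  # tally lag-1 transitions
    counts[idx[a], idx[b]] += 1

n = counts.sum()
pg = counts.sum(axis=1) / n                # P(given behavior)
pt = counts.sum(axis=0) / n                # P(target behavior)
for g in codes:
    for t in codes:
        n_g = counts[idx[g]].sum()
        obs = counts[idx[g], idx[t]] / max(n_g, 1)
        exp = pt[idx[t]]
        se = np.sqrt(exp * (1 - exp) * (1 - pg[idx[g]]) / max(n_g, 1))
        z = (obs - exp) / se if se > 0 else float("nan")
        print(f"{g} -> {t}: z = {z:.2f}")
```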
Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi
2013-06-01
Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of their method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices improved significantly (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than the spherical-harmonics-based distortion correction method in reducing the root-mean-square errors (p < 0.001 and p = 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The method was evaluated on phantom images in terms of two geometrical values and on human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter improved significantly after distortion correction using the authors' method.
A Government/Industry Summary of the Design Analysis Methods for Vibrations (DAMVIBS) Program
NASA Technical Reports Server (NTRS)
Kvaternik, Raymond G. (Compiler)
1993-01-01
The NASA Langley Research Center in 1984 initiated a rotorcraft structural dynamics program, designated DAMVIBS (Design Analysis Methods for VIBrationS), with the objective of establishing the technology base needed by the rotorcraft industry for developing an advanced finite-element-based dynamics design analysis capability for vibrations. An assessment of the program showed that the DAMVIBS Program has resulted in notable technical achievements and major changes in industrial design practice, all of which have significantly advanced the industry's capability to use and rely on finite-element-based dynamics analyses during the design process.
ERIC Educational Resources Information Center
Larkin, Wallace; Hawkins, Renee O.; Collins, Tai
2016-01-01
Functional behavior assessments and function-based interventions are effective methods for addressing the challenging behaviors of children; however, traditional functional analysis has limitations that impact usability in applied settings. Trial-based functional analysis addresses concerns relating to the length of time, level of expertise…
NASA Astrophysics Data System (ADS)
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
Fasihi, Yasser; Fooladi, Saba; Mohammadi, Mohammad Ali; Emaneini, Mohammad; Kalantar-Neyestanaki, Davood
2017-09-06
Molecular typing is an important tool for the control and prevention of infection. A suitable molecular typing method for epidemiological investigation must be easy to perform, highly reproducible, inexpensive, rapid and easy to interpret. In this study, two molecular typing methods, the conventional PCR-sequencing method and high resolution melting (HRM) analysis, were used for staphylococcal protein A (spa) typing of 30 methicillin-resistant Staphylococcus aureus (MRSA) isolates recovered from clinical samples. Based on the PCR-sequencing results, 16 different spa types were identified among the 30 MRSA isolates. Of these 16 spa types, 14 were separated by the HRM method; two spa types, t4718 and t2894, were not separated from each other. According to our results, spa typing based on HRM analysis is very rapid, easy to perform and cost-effective, but the method must be standardized for different regions, spa types, and real-time PCR instruments.
NASA Astrophysics Data System (ADS)
Cai, Jiaxin; Chen, Tingting; Li, Yan; Zhu, Nenghui; Qiu, Xuan
2018-03-01
In order to analyze the fibrosis stage and inflammatory activity grade of chronic hepatitis C, a novel classification method based on collaborative representation (CR) with a smoothly clipped absolute deviation (SCAD) penalty term, called the CR-SCAD classifier, is proposed for pattern recognition. An auto-grading system based on the CR-SCAD classifier is then introduced for the prediction of the fibrosis stage and inflammatory activity grade of chronic hepatitis C. The proposed method has been tested on 123 clinical cases of chronic hepatitis C based on serological indexes. Experimental results show that the proposed method outperforms state-of-the-art baselines for the classification of fibrosis stage and inflammatory activity grade of chronic hepatitis C.
A simple method for processing data with least square method
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning
2017-08-01
The least squares method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and serves as a criterion tool for statistical inference. In measurement data analysis, fitting under the least squares principle is usually carried out in matrix form to solve for the final estimate and to improve its accuracy. In this paper, a new method of solution is presented which is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is demonstrated with a concrete example.
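A minimal sketch of the conventional matrix formulation the paper takes as its starting point, with numpy's solver as a cross-check; the data points are illustrative.

```python
# Linear least squares via the normal equations, cross-checked against
# numpy's lstsq; the data points are illustrative.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

A = np.column_stack([x, np.ones_like(x)])     # model y = a*x + b
coef = np.linalg.solve(A.T @ A, A.T @ y)      # normal equations
print("a, b =", coef)
print("lstsq:", np.linalg.lstsq(A, y, rcond=None)[0])  # same result
```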
A Kalman Filter for SINS Self-Alignment Based on Vector Observation.
Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu
2017-01-29
In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method based on integrating gravitational apparent motion to form apparent velocity is designed, which can reduce the random noises of the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into an attitude estimation using the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transformation between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for apparent gravitation is devised, which can reduce the influence of the random noises of the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method can acquire sound alignment results with lower standard variances, and can obtain higher alignment accuracy and a faster convergence rate.
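The q-method underlying the alignment can be sketched as Davenport's eigenvalue problem; the reference/body vectors and weights below are made up for illustration, and the quaternion convention (scalar last) is an assumption.

```python
# Davenport's q-method: optimal attitude quaternion from weighted vector
# observations; input vectors and weights are illustrative.
import numpy as np

def q_method(body, ref, w):
    """Optimal quaternion (x, y, z, w) from weighted vector pairs."""
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, body, ref))
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, np.argmax(vals)]        # eigenvector of largest eigenvalue

ref = [np.array([0, 0, 1.0]), np.array([1.0, 0, 0])]   # e.g. gravity, east
body = [np.array([0.1, 0, 0.99]), np.array([0.99, 0.1, 0])]
print(q_method(body, ref, w=[0.5, 0.5]))
```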
SSAW: A new sequence similarity analysis method based on the stationary discrete wavelet transform.
Lin, Jie; Wei, Jing; Adjeroh, Donald; Jiang, Bing-Hua; Jiang, Yue
2018-05-02
Alignment-free sequence similarity analysis methods often lead to significant savings in computational time over alignment-based counterparts. A new alignment-free sequence similarity analysis method, called SSAW, is proposed. SSAW stands for Sequence Similarity Analysis using the Stationary Discrete Wavelet Transform (SDWT). It extracts k-mers from a sequence, then maps each k-mer to a complex number field. The series of complex numbers so formed is then transformed into feature vectors using the stationary discrete wavelet transform. After these steps, the original sequence is turned into a feature vector with numeric values, which can then be used for clustering and/or classification. Using two different types of applications, namely clustering and classification, we compared SSAW against state-of-the-art alignment-free sequence analysis methods. SSAW demonstrates competitive or superior performance in terms of standard indicators, such as accuracy, F-score, precision, and recall. The running time was significantly better in most cases. These results make SSAW a suitable method for sequence analysis, especially given the rapidly increasing volumes of sequence data required by most modern applications.
Emergy analysis of an industrial park: the case of Dalian, China.
Geng, Yong; Zhang, Pan; Ulgiati, Sergio; Sarkis, Joseph
2010-10-15
With the rapid development of eco-industrial park projects in China, evaluating their overall eco-efficiency is becoming an important need and a big academic challenge. Developing ecologically conscious industrial park management requires analysis of both industrial and ecological systems. Traditional evaluation methods based on neoclassical economics and embodied energy and exergy analyses have certain limitations because environmental issues are considered secondary to the maximization of economic and technical objectives. Such methods focus primarily on the environmental impact of emissions and their economic consequences. These approaches ignore the contribution of ecological products and services as well as the load placed on environmental systems and related problems of the carrying capacity of economic and industrial development. This paper presents a new method, based upon emergy analysis and synthesis. Such a method links economic and ecological systems together, highlighting the internal relations among the different subsystems and components. The emergy-based method provides insight into the environmental performance and sustainability of an industrial park. This paper describes the methodology of emergy analysis at the industrial park level and provides a series of emergy-based indices. A case study is investigated and discussed in order to show the emergy method's practical potential. Results from the DEDZ (Dalian Economic Development Zone) case show the potential of the emergy synthesis method at the industrial park level for environmental policy making. Its advantages and limitations are also discussed, with avenues for future research identified. Copyright © 2010 Elsevier B.V. All rights reserved.
Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori
2017-10-01
We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.
Xiao, Haopeng; Chen, Weixuan; Smeekens, Johanna M; Wu, Ronghu
2018-04-27
Protein glycosylation is ubiquitous in biological systems and essential for cell survival. However, the heterogeneity of glycans and the low abundance of many glycoproteins complicate their global analysis. Chemical methods based on reversible covalent interactions between boronic acid and glycans have great potential to enrich glycopeptides, but the binding affinity is typically not strong enough to capture low-abundance species. Here, we develop a strategy using dendrimer-conjugated benzoboroxole to enhance the glycopeptide enrichment. We test the performance of several boronic acid derivatives, showing that benzoboroxole markedly increases glycopeptide coverage from human cell lysates. The enrichment is further improved by conjugating benzoboroxole to a dendrimer, which enables synergistic benzoboroxole-glycan interactions. This robust and simple method is highly effective for sensitive glycoproteomics analysis, especially capturing low-abundance glycopeptides. Importantly, the enriched glycopeptides remain intact, making the current method compatible with mass-spectrometry-based approaches to identify glycosylation sites and glycan structures.
Detection of dopamine in dopaminergic cell using nanoparticles-based barcode DNA analysis.
An, Jeung Hee; Kim, Tae-Hyung; Oh, Byung-Keun; Choi, Jeong Woo
2012-01-01
Nanotechnology-based bio-barcode-amplification analysis may be an innovative approach to dopamine detection. In this study, we evaluated the efficacy of this bio-barcode DNA method in detecting dopamine from dopaminergic cells. Herein, a combination DNA barcode and bead-based immunoassay for neurotransmitter detection with PCR-like sensitivity is described. This method relies on magnetic nanoparticles with antibodies and nanoparticles that are encoded with DNA, and antibodies that can sandwich the target protein captured by the nanoparticle-bound antibodies. The aggregate sandwich structures are magnetically separated from solution, and treated in order to remove the conjugated barcode DNA. The DNA barcodes were then identified via PCR analysis. The dopamine concentration in dopaminergic cells can be readily and rapidly detected via the bio-barcode assay method. The bio-barcode assay method is, therefore, a rapid and high-throughput screening tool for the detection of neurotransmitters such as dopamine.
NASA Astrophysics Data System (ADS)
Wang, Qi; Dong, Xufeng; Li, Luyu; Ou, Jinping
2018-06-01
As constitutive models are too complicated and existing mechanical models lack universality, these models are unsatisfactory for magnetorheological elastomer (MRE) devices. In this article, a novel universal method is proposed to build concise mechanical models. Constitutive modeling and electromagnetic analysis were applied in this method to ensure universality, while a series of derivations and simplifications were carried out to obtain a concise formulation. To illustrate the proposed modeling method, a conical MRE isolator was introduced. Its basic mechanical equations were built based on equilibrium, deformation compatibility, constitutive equations and electromagnetic analysis. An iteration model and a highly efficient differential-equation-editor-based model were then derived to solve the basic mechanical equations. The final simplified mechanical equations were obtained by re-fitting the simulations with a novel optimization algorithm. In the end, a verification test of the isolator proved the accuracy of the derived mechanical model and the modeling method.
Meta-analysis of pathway enrichment: combining independent and dependent omics data sets.
Kaever, Alexander; Landesfeind, Manuel; Feussner, Kirstin; Morgenstern, Burkhard; Feussner, Ivo; Meinicke, Peter
2014-01-01
A major challenge in current systems biology is the combination and integrative analysis of large data sets obtained from different high-throughput omics platforms, such as mass spectrometry based Metabolomics and Proteomics or DNA microarray or RNA-seq-based Transcriptomics. Especially in the case of non-targeted Metabolomics experiments, where it is often impossible to unambiguously map ion features from mass spectrometry analysis to metabolites, the integration of more reliable omics technologies is highly desirable. A popular method for the knowledge-based interpretation of single data sets is the (Gene) Set Enrichment Analysis. In order to combine the results from different analyses, we introduce a methodical framework for the meta-analysis of p-values obtained from Pathway Enrichment Analysis (Set Enrichment Analysis based on pathways) of multiple dependent or independent data sets from different omics platforms. For dependent data sets, e.g. obtained from the same biological samples, the framework utilizes a covariance estimation procedure based on the nonsignificant pathways in single data set enrichment analysis. The framework is evaluated and applied in the joint analysis of Metabolomics mass spectrometry and Transcriptomics DNA microarray data in the context of plant wounding. In extensive studies of simulated data set dependence, the introduced correlation could be fully reconstructed by means of the covariance estimation based on pathway enrichment. By restricting the range of p-values of pathways considered in the estimation, the overestimation of correlation, which is introduced by the significant pathways, could be reduced. When applying the proposed methods to the real data sets, the meta-analysis was shown not only to be a powerful tool to investigate the correlation between different data sets and summarize the results of multiple analyses but also to distinguish experiment-specific key pathways.
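For the independent-data-set case, the combination step can be sketched with Fisher's method (the paper's dependent case additionally estimates a covariance term); the p-values are illustrative.

```python
# Fisher's method for combining pathway p-values across independent
# omics data sets; the p-values are illustrative.
from scipy.stats import combine_pvalues

p_metabolomics = 0.031     # pathway p-value from the metabolomics analysis
p_transcriptomics = 0.008  # same pathway from the transcriptomics analysis

stat, p_meta = combine_pvalues([p_metabolomics, p_transcriptomics],
                               method="fisher")
print(f"combined p = {p_meta:.4f}")
```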
NASA Astrophysics Data System (ADS)
Bhushan, A.; Sharker, M. H.; Karimi, H. A.
2015-07-01
In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data due to various reasons such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such type of spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with the existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
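A minimal sketch of IPCA-based outlier flagging, assuming scikit-learn's IncrementalPCA; the streaming data, batch size, and three-sigma threshold are illustrative choices, not the paper's exact methods.

```python
# Incremental PCA fitted batch-by-batch, with outliers flagged by
# reconstruction error; data and threshold rule are illustrative.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(1)
stream = rng.normal(size=(1000, 8))        # sensor feature vectors
stream[::97] += 6.0                        # inject occasional outliers

ipca = IncrementalPCA(n_components=3)
for batch in np.array_split(stream, 10):   # fit incrementally, batch by batch
    ipca.partial_fit(batch)

# Flag points whose reconstruction error is unusually large.
recon = ipca.inverse_transform(ipca.transform(stream))
err = np.linalg.norm(stream - recon, axis=1)
outliers = np.where(err > err.mean() + 3 * err.std())[0]
print(len(outliers), "suspected outliers")
```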
Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong
2017-12-01
Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial one. The raw spectrum consists of the signal from the sample together with scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
NASA Astrophysics Data System (ADS)
Wang, Hong; Lu, Kaiyu; Pu, Ruiliang
2016-10-01
The Robinia pseudoacacia forest in the Yellow River delta of China has been planted since the 1970s, and a large area of dieback of the forest has occurred since the 1990s. To assess the condition of the R. pseudoacacia forest in three forest areas (i.e., Gudao, Machang, and Abandoned Yellow River) in the delta, we combined an estimation of scale parameters tool and geometry/topology assessment criteria to determine the optimal scale parameters, selected optimal predictive variables determined by stepwise discriminant analysis, and compared object-based image analysis (OBIA) and pixel-based approaches using IKONOS data. The experimental results showed that the optimal segmentation scale is 5 for both the Gudao and Machang forest areas, and 12 for the Abandoned Yellow River forest area. The results produced by the OBIA method were much better than those created by the pixel-based method. The overall accuracy of the OBIA method was 93.7% (versus 85.4% by the pixel-based) for Gudao, 89.0% (versus 72.7%) for Abandoned Yellow River, and 91.7% (versus 84.4%) for Machang. Our analysis results demonstrated that the OBIA method was an effective tool for rapidly mapping and assessing the health levels of forest.
Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polly, B.
2011-09-01
This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.
Applications of modern statistical methods to analysis of data in physical science
NASA Astrophysics Data System (ADS)
Wicker, James Eric
Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970's, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960's, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model and problems with overfitting data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information-based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960's and 1970's respectively, formed the basis of multivariate cluster analysis methodology for many years. However, several shortcomings of these methods include strong dependence on initial seed values and inaccurate results when the data seriously depart from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcome the strong dependence on initial seed values. In addition, we propose a generalization of the Genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic-algorithm-based Expectation-Maximization process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We showcase how these algorithms can be used to process multivariate data from astronomical observations.
Using a bead-based method for multiplexed analysis of community DNA, the dynamics of aquatic microbial communities can be assessed. Capture probes, specific for a genus or species of bacteria, are attached to the surface of uniquely labeled, microscopic polystyrene beads. Primers...
Comparison of Methods for Evaluating Urban Transportation Alternatives
DOT National Transportation Integrated Search
1975-02-01
The objective of the report was to compare five alternative methods for evaluating urban transportation improvement options: unaided judgmental evaluation, cost-benefit analysis, cost-effectiveness analysis based on a single measure of effectiveness, ...
Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo
2016-12-13
In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised component reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small leaks in pipelines. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performance than support vector machine (SVM) and back propagation neural network (BP) methods.
Ren, Jingzheng; Manzardo, Alessandro; Mazzi, Anna; Fedele, Andrea; Scipioni, Antonio
2013-01-01
Biodiesel, as a promising alternative energy resource, has become a hot spot in chemical engineering, but there is also debate about its sustainability. In order to analyze the sustainability of biodiesel production systems and select the most sustainable scenario, various kinds of crop-based biodiesel, including soybean-, rapeseed-, sunflower-, jatropha- and palm-based biodiesel production options, are studied by emergy analysis; the soybean-based scenario is recognized as the most sustainable scenario that should be chosen for further study in China. The DEA method is used to evaluate the sustainability efficiencies of these options, and the biodiesel production systems based on soybean, sunflower, and palm are considered DEA efficient, whereas the rapeseed-based and jatropha-based scenarios need to be improved, and the improvement methods have also been specified. PMID:23766723
Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.
Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing
2017-01-01
Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method of network protocols, we propose a novel approach that combines network traffic analysis with the binary reverse engineering method. For network traffic analysis, the block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering method employs the genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.
NASA Astrophysics Data System (ADS)
Wang, Yan-Jun; Liu, Qun
1999-03-01
Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often have stochastic variations and measurement errors, which usually result in a biased regression analysis. This paper presents a robust regression method, least median of squared orthogonal distance (LMD), which is insensitive to abnormal values in the dependent and independent variables in a regression analysis. Outliers that have significantly different variance from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with the identified outliers down-weighted. The application of LMD and the LMD-based Reweighted Least Squares (RLS) method to simulated and real fisheries SR data is explored.
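The least-median-of-squares idea can be sketched by random subset search; for brevity this uses vertical residuals rather than the paper's orthogonal distances, and the SR-like data are simulated.

```python
# Least-median-of-squares line fitting by random two-point subset search;
# vertical residuals are used for brevity, and the data are simulated.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 60)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 60)
y[:6] += 15.0                              # a few gross outliers

best, best_med = None, np.inf
for _ in range(500):                       # random two-point candidate lines
    i, j = rng.choice(60, size=2, replace=False)
    if x[i] == x[j]:
        continue
    a = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - a * x[i]
    med = np.median((y - a * x - b) ** 2)  # median of squared residuals
    if med < best_med:
        best, best_med = (a, b), med
print("LMS fit a, b =", best)
```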
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor in the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. The sensitivity of the calibration to each variable was computed by the Sobol method, which is based on the analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
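A minimal sketch of Sobol index computation with the SALib package; the calibration parameters, bounds, and toy response are placeholders for the real camera-LiDAR pipeline (newer SALib versions also offer a sobol sampler in place of saltelli).

```python
# First-order Sobol indices with SALib; parameter names, bounds, and the
# toy response function are placeholders, not the paper's calibration.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["focal_length", "lidar_offset", "rotation_err"],
    "bounds": [[0.9, 1.1], [-0.05, 0.05], [-0.01, 0.01]],
}

X = saltelli.sample(problem, 1024)         # quasi-random input samples
Y = X[:, 0] ** 2 + 10 * X[:, 1] + 0.1 * np.sin(X[:, 2])  # toy response

Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
```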
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coolens, Catherine, E-mail: catherine.coolens@rmp.uhn.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario
2015-01-01
Objectives: Development of perfusion imaging as a biomarker requires more robust methodologies for quantification of tumor physiology that allow assessment of volumetric tumor heterogeneity over time. This study proposes a parametric method for automatically analyzing perfused tissue from volumetric dynamic contrast-enhanced (DCE) computed tomography (CT) scans and assesses whether this 4-dimensional (4D) DCE approach is more robust and accurate than conventional, region-of-interest (ROI)-based CT methods in quantifying tumor perfusion with preliminary evaluation in metastatic brain cancer. Methods and Materials: Functional parameter reproducibility and analysis of sensitivity to imaging resolution and arterial input function were evaluated in image sets acquired from a 320-slice CT with a controlled flow phantom and patients with brain metastases, whose treatments were planned for stereotactic radiation surgery and who consented to a research ethics board-approved prospective imaging biomarker study. A voxel-based temporal dynamic analysis (TDA) methodology was used at baseline, at day 7, and at day 20 after treatment. The ability to detect changes in kinetic parameter maps in clinical data sets was investigated for both 4D TDA and conventional 2D ROI-based analysis methods. Results: A total of 7 brain metastases in 3 patients were evaluated over the 3 time points. The 4D TDA method showed improved spatial efficacy and accuracy of perfusion parameters compared to ROI-based DCE analysis (P<.005), with a reproducibility error of less than 2% when tested with DCE phantom data. Clinically, changes in the transfer constant from the blood plasma into the extracellular extravascular space (Ktrans) were seen when using TDA, with substantially smaller errors than the 2D method on both day 7 post radiation surgery (±13%; P<.05) and by day 20 (±12%; P<.04). Standard methods showed a decrease in Ktrans but with large uncertainty (111.6 ± 150.5)%. Conclusions: Parametric voxel-based analysis of 4D DCE CT data resulted in greater accuracy and reliability in measuring changes in perfusion CT-based kinetic metrics, which have the potential to be used as biomarkers in patients with metastatic brain cancer.
A new image segmentation method based on multifractal detrended moving average analysis
NASA Astrophysics Data System (ADS)
Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le
2015-08-01
In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and taken as the local feature of a surface. A multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is then defined following the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and the more recent MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method in the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature of the nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.
Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method
NASA Astrophysics Data System (ADS)
Sun, C. J.; Zhou, J. H.; Wu, W.
2017-10-01
During its lifetime, a ship may encounter collision or grounding and sustain permanent damage after these types of accidents. Crashworthiness analysis has relied on two main kinds of methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate the method. The results show that the simplified plastic analysis results are in good agreement with the finite element simulation, which reveals that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.
Sayyah, Mehdi; Shirbandi, Kiarash; Saki-Malehi, Amal; Rahim, Fakher
2017-01-01
Objectives The aim of this systematic review and meta-analysis was to evaluate the problem-based learning (PBL) method as an alternative to conventional educational methods in Iranian undergraduate medical courses. Materials and methods We systematically searched international databases, including PubMed, Scopus, and Embase, and Iranian resources, including Magiran, IranMedex, IranDoc, and the Scientific Information Database (SID), using appropriate search terms, such as “PBL”, “problem-based learning”, “based on problems”, “active learning”, and “learner centered”, to identify PBL studies, and these were combined with other key terms such as “medical”, “undergraduate”, “Iranian”, “Islamic Republic of Iran”, “I.R. of Iran”, and “Iran”. The search covered the period from 1980 to 2016 with no language limits. Results Overall, a total of 1,057 relevant studies were initially found, of which 21 were included in the systematic review and meta-analysis. Of the 21 studies, 12 (57.14%) had high methodological quality. Considering the pooled effect size data, there was a significant difference in the scores (standardized mean difference [SMD]=0.80, 95% CI [0.52, 1.08], P<0.000) in favor of PBL, compared with the lecture-based method. Subgroup analysis revealed that using PBL alone is more favorable than using a mixed model with other learning methods such as lecture-based learning (LBL). Conclusion The results of this systematic review showed that using PBL may have a positive effect on academic achievement in undergraduate medical courses. The results suggest that teachers and medical education decision makers pay more attention to using this method for effective and proper training. PMID:29042827
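As a rough illustration of how a pooled SMD and its confidence interval are computed, here is a DerSimonian-Laird random-effects sketch in Python; the per-study effect sizes and variances are made-up placeholders, not the review's data.

```python
import numpy as np

# Hypothetical per-study standardized mean differences and variances,
# pooled with the DerSimonian-Laird random-effects model.
y = np.array([0.9, 1.3, 0.6, 1.1, 0.8])       # study SMDs (assumed)
v = np.array([0.05, 0.08, 0.04, 0.10, 0.06])  # their variances (assumed)

w = 1.0 / v
fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)       # between-study variance

w_re = 1.0 / (v + tau2)
pooled = np.sum(w_re * y) / np.sum(w_re)
se = 1.0 / np.sqrt(np.sum(w_re))
print(f"SMD = {pooled:.2f}, "
      f"95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```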
A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network
NASA Astrophysics Data System (ADS)
Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.
2018-02-01
Oil pipeline networks are among the most important facilities for energy transportation, but accidents in such networks may result in serious disasters. Analysis models for these accidents have been established mainly based on three methods: event trees, accident simulation, and Bayesian networks. Among these, the Bayesian network is suitable for probabilistic analysis, but not all the important influencing factors have been considered and no deployment rule for the factors has been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including the key environmental conditions and the emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.
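To show the kind of inference such a model supports, here is a minimal sketch of a pipeline-accident Bayesian network evaluated by full enumeration; the structure and all probabilities are illustrative assumptions, not the paper's model.

```python
# Minimal Bayesian-network sketch for a pipeline accident.
# Structure: Corrosion -> Leak <- ThirdPartyDamage; Leak + Response -> Severe.
# Every probability below is an illustrative assumption, not field data.
P_corr = {True: 0.1, False: 0.9}
P_dmg = {True: 0.05, False: 0.95}
P_leak = {(True, True): 0.9, (True, False): 0.4,
          (False, True): 0.5, (False, False): 0.01}
P_resp = {True: 0.8, False: 0.2}   # emergency response available
# Severity given (leak, response): response mitigates a leak.
P_severe = {(True, True): 0.2, (True, False): 0.7,
            (False, True): 0.001, (False, False): 0.01}

# Marginal P(severe accident) by full enumeration over all parents.
p_severe = 0.0
for c in (True, False):
    for d in (True, False):
        for l in (True, False):
            p_l = P_leak[(c, d)] if l else 1 - P_leak[(c, d)]
            for r in (True, False):
                p_severe += (P_corr[c] * P_dmg[d] * p_l * P_resp[r]
                             * P_severe[(l, r)])
print(f"P(severe accident) = {p_severe:.4f}")
```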
Pistón, Mariela; Knochen, Moisés
2012-01-01
Two flow methods, based, respectively, on flow-injection analysis (FIA) and on multicommutated flow analysis (MCFA), were compared with regard to their use for the determination of total selenium in infant formulas by hydride-generation atomic absorption spectrometry. The method based on multicommutation provided lower detection and quantification limits (0.08 and 0.27 μg L−1 compared to 0.59 and 1.95 μg L−1, respectively), higher sampling frequency (160 versus 70 samples per hour), and reduced reagent consumption. Linearity, precision, and accuracy were similar for the two methods. It was concluded that, while both methods proved to be appropriate for the purpose, the MCFA-based method exhibited better performance. PMID:22505923
Liu, Xia; Xu, Yongdong; Li, Zhi; Jiang, Shengwei; Yao, Shuo; Wu, Rina; An, Yingfeng
2018-04-21
A silica sand-based method has been developed to isolate high-quality genomic DNA from cells of animals, plants and microorganisms, such as Hemisalanx prognathus, Spinacia oleracea, Pichia pastoris, Bacillus licheniformis and Escherichia coli. To the best of our knowledge, no previously reported DNA isolation method has had such wide applicability. In addition, this method and a commercially available kit were compared in an analysis of microbial communities using high-throughput 16S rDNA sequencing. The silica sand-based method was found to be even more efficient than the kit in isolating genomic DNA from gram-positive bacteria, indicating that it is a valuable option for faithfully reflecting the composition of microbial communities.
Ma, Ruijie; Lin, Xianming
2015-12-01
Problem-based teaching (PBT) has become a main approach to training in universities around the world. Combined with the team-oriented learning method, PBT can become a method well suited to education in medical universities. In this paper, based on common questions arising in teaching Jingluo Shuxue Xue (Science of Meridian and Acupoint), the concepts and characteristics of PBT and the team-oriented learning method are analyzed. The implementation steps of PBT were set up with reference to the team-oriented learning method. By quoting the original text of Beiji Qianjin Yaofang (Essential recipes for emergent use worth a thousand gold), a case analysis of "the thirteen devil points" was established with PBT.
Assessment of current AASHTO LRFD methods for static pile capacity analysis in Rhode Island soils.
DOT National Transportation Integrated Search
2013-07-01
This report presents an assessment of current AASHTO LRFD methods for static pile capacity analysis in Rhode Island soils. Current static capacity methods and associated resistance factors are based on pile load test data in sands and clays. Some...
Target Detection and Classification Using Seismic and PIR Sensors
2012-06-01
...time series analysis via wavelet-based partitioning," Signal Process... In this regard, this paper presents a wavelet-based method for target detection and classification. The proposed method has been validated on data sets of... The work reported in this paper makes use of a wavelet-based feature extraction method, called Symbolic Dynamic Filtering (SDF) [12]–[14].
Dunn, Heather; Quinn, Laurie; Corbridge, Susan J; Eldeirawi, Kamal; Kapella, Mary; Collins, Eileen G
2017-05-01
The use of cluster analysis in the nursing literature is limited to the creation of classifications of homogeneous groups and the discovery of new relationships. As such, it is important to provide clarity regarding its use and potential. The purpose of this article is to provide an introduction to distance-based, partitioning-based, and model-based cluster analysis methods commonly utilized in the nursing literature, provide a brief historical overview on the use of cluster analysis in nursing literature, and provide suggestions for future research. An electronic search included three bibliographic databases, PubMed, CINAHL and Web of Science. Key terms were cluster analysis and nursing. The use of cluster analysis in the nursing literature is increasing and expanding. The increased use of cluster analysis in the nursing literature is positioning this statistical method to result in insights that have the potential to change clinical practice.
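As a hedged, partitioning-based example of the cluster analysis this review surveys, the following sketch runs k-means on a synthetic patient-score matrix; the data are invented stand-ins for real nursing data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy patient-symptom matrix (rows: patients, columns: scaled scores);
# values are synthetic and stand in for real nursing data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (30, 4)),
               rng.normal(2, 0.5, (30, 4)),
               rng.normal(4, 0.5, (30, 4))])

# Partitioning-based clustering into three homogeneous groups.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
print("cluster centers:\n", km.cluster_centers_.round(2))
```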
NASA Astrophysics Data System (ADS)
Hu, Zhan; Zheng, Gangtie
2016-08-01
A combined analysis method is developed in the present paper for studying the dynamic properties of a type of geometrically nonlinear vibration isolator composed of push-pull configuration rings. This method combines the geometrically nonlinear theory of curved beams with the Harmonic Balance Method to overcome the difficulty of calculating the vibration and vibration transmissibility under large deformations of the ring structure. Using the proposed method, nonlinear dynamic behaviors of this isolator, such as locking due to Coulomb damping and the jump phenomenon resulting from the nonlinear stiffness, can be investigated. Numerical solutions based on the primary harmonic balance are first verified by direct integration results. Then, the whole procedure of this combined analysis method is demonstrated and validated by slow sinusoidal sweeping experiments with different amplitudes of the base excitation. Both numerical and experimental results indicate that this type of isolator behaves as a hardening spring with increasing amplitude of the base excitation, which makes it suitable for isolating both steady-state vibrations and transient shocks.
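As a minimal sketch of the single-harmonic balance step (applied here to a generic Duffing-type hardening oscillator, not the paper's ring model; all parameter values are assumptions), the amplitude-frequency relation reduces to a cubic in the squared amplitude:

```python
import numpy as np

# Single-harmonic balance for m x'' + c x' + k x + a x^3 = F cos(w t),
# assuming x = A cos(w t + phi). Parameters are illustrative only.
m, c, k, a, F = 1.0, 0.05, 1.0, 0.5, 0.1

for w in np.linspace(0.5, 2.0, 7):
    p = k - m * w ** 2
    b = 0.75 * a
    # Balancing cos/sin terms gives ((p + b z)^2 + (c w)^2) z = F^2
    # with z = A^2, i.e. a cubic in z; multiple positive roots near
    # resonance correspond to the jump phenomenon.
    roots = np.roots([b ** 2, 2 * b * p, p ** 2 + (c * w) ** 2, -F ** 2])
    amps = [np.sqrt(z.real) for z in roots
            if abs(z.imag) < 1e-9 and z.real > 0]
    print(f"w = {w:.2f}: amplitudes {sorted(round(A, 3) for A in amps)}")
```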
Muro-de-la-Herran, Alvaro; Garcia-Zapirain, Begonya; Mendez-Zorrilla, Amaia
2014-01-01
This article presents a review of the methods used in recognition and analysis of the human gait from three different approaches: image processing, floor sensors and sensors placed on the body. Progress in new technologies has led to the development of a series of devices and techniques which allow for objective evaluation, making measurements more efficient and effective and providing specialists with reliable information. Firstly, an introduction to the key gait parameters and semi-subjective methods is presented. Secondly, technologies and studies on the different objective methods are reviewed. Finally, based on the latest research, the characteristics of each method are discussed. 40% of the reviewed articles published in late 2012 and 2013 were related to non-wearable systems, 37.5% presented inertial sensor-based systems, and the remaining 22.5% corresponded to other wearable systems. An increasing number of research works demonstrate that various parameters such as precision, conformability, usability or transportability indicate that portable systems based on body sensors are promising methods for gait analysis. PMID:24556672
Distefano, Gaetano; Caruso, Marco; La Malfa, Stefano; Gentile, Alessandra; Wu, Shu-Biao
2012-01-01
High resolution melting curve analysis (HRM) has been used as an efficient, accurate and cost-effective tool to detect single nucleotide polymorphisms (SNPs) and insertions or deletions (INDELs). However, its efficiency, accuracy and applicability for discriminating microsatellite polymorphisms have not been extensively assessed. The traditional protocols used for SSR genotyping include PCR amplification of the DNA fragment and separation of the fragments on an electrophoresis-based platform. However, post-PCR handling processes are laborious and costly. Furthermore, SNPs present in the sequences flanking the repeat motif cannot be detected by polyacrylamide gel electrophoresis-based methods. In the present study, we compared the discriminating power of HRM with traditional electrophoresis-based methods and provide a panel of primers for HRM genotyping in Citrus. The results showed that sixteen SSR markers produced distinct polymorphic melting curves among the Citrus spp. investigated through HRM analysis. Among those, 10 showed more genotypes by HRM analysis than by capillary electrophoresis, owing to the presence of SNPs in the amplicons. For the SSR markers without SNPs in the flanking region, HRM also gave distinct melting curves that detected the same genotypes as capillary electrophoresis (CE) analysis. Moreover, HRM analysis allowed the discrimination of most of the 15 citrus genotypes, and the resulting genetic distance analysis clustered them into three main branches. In conclusion, it has been shown that HRM is not only an efficient and cost-effective alternative to electrophoresis-based methods for SSR markers, but also a method to uncover additional polymorphisms contributed by SNPs present within SSRs. It is therefore suggested that this panel of SSR markers could be used in a variety of applications in citrus biodiversity and breeding programs using HRM analysis. Furthermore, we speculate that HRM analysis can be employed to analyze SSR markers in a wide range of applications in other species.
Analysis of Electrowetting Dynamics with Level Set Method
NASA Astrophysics Data System (ADS)
Park, Jun Kwon; Hong, Jiwoo; Kang, Kwan Hyoung
2009-11-01
Electrowetting is a versatile tool for handling tiny droplets and forms the backbone of digital microfluidics. Numerical analysis is necessary to fully understand the dynamics of electrowetting, especially in designing electrowetting-based liquid lenses and reflective displays. We developed a numerical method to analyze general contact-line problems, incorporating dynamic contact angle models. The method was applied to the analysis of the spreading process of a sessile droplet under step input voltages in electrowetting. The result was compared with experimental data and with an analytical result based on the spectral method. It is shown that contact line friction significantly affects the contact line motion and the oscillation amplitude. The pinning process of the contact line was well represented by including the hysteresis effect in the contact angle models.
Yong, Alan; Hough, Susan E.; Cox, Brady R.; Rathje, Ellen M.; Bachhuber, Jeff; Dulberg, Ranon; Hulslander, David; Christiansen, Lisa; and Abrams, Michael J.
2011-01-01
We report on a preliminary study to evaluate the use of semi-automated imaging analysis of a remotely sensed DEM and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, VS30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods to the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available VS30 values. A comparison of results from the imagery-based methods with results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of remotely sensed data provides reliable first-order site characterization results in the absence of local data and can be useful for refining detailed site maps with sparse local data.
Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi
2017-01-01
Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated with dementia against cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; the 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and the ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was smaller than the other two templates, and the SPM results obtained with the ASL-DARTEL template method were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.
Method of Real-Time Principal-Component Analysis
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu
2005-01-01
Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
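DOGEDYN itself is not spelled out in the abstract; as a hedged illustration of the gradient-descent sequential-PCA family it improves upon, here is Oja's classic learning rule extracting the first principal component, compared against an eigendecomposition.

```python
import numpy as np

# Oja's learning rule, a classic gradient-based sequential PCA scheme;
# DOGEDYN adds dominant-element selection and a dynamic learning rate,
# which this sketch does not reproduce.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3)) @ np.diag([3.0, 1.0, 0.3])

w = rng.normal(size=3)
w /= np.linalg.norm(w)
eta = 1e-3
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Oja update keeps ||w|| near 1

# Compare with the leading eigenvector of the sample covariance.
evec = np.linalg.eigh(np.cov(X.T))[1][:, -1]
print("alignment |cos| =", abs(w @ evec) / np.linalg.norm(w))
```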
Systems and methods for detection of blowout precursors in combustors
Lieuwen, Tim C.; Nair, Suraj
2006-08-15
The present invention comprises systems and methods for detecting flame blowout precursors in combustors. The blowout precursor detection system comprises a combustor, a pressure measuring device, and blowout precursor detection unit. A combustion controller may also be used to control combustor parameters. The methods of the present invention comprise receiving pressure data measured by an acoustic pressure measuring device, performing one or a combination of spectral analysis, statistical analysis, and wavelet analysis on received pressure data, and determining the existence of a blowout precursor based on such analyses. The spectral analysis, statistical analysis, and wavelet analysis further comprise their respective sub-methods to determine the existence of blowout precursors.
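As a rough illustration of combining spectral and statistical analysis of pressure data to flag a precursor, here is a Python sketch; the simulated signal, band limits, and thresholds are all assumptions, not the patent's values.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis

# Illustrative precursor check on simulated combustor pressure data:
# spectral band power plus a statistical moment, each thresholded.
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
# Near-blowout operation often shows low-frequency bursting of the
# dominant acoustic mode; modeled here as a modulated tone plus noise.
p = 0.1 * np.sin(2 * np.pi * 210 * t) * (1 + 0.5 * np.sin(2 * np.pi * 8 * t))
p += rng.normal(0, 0.05, t.size)

f, psd = welch(p, fs=fs, nperseg=1024)
band = (f > 150) & (f < 300)
band_power = np.sum(psd[band]) * (f[1] - f[0])
k = kurtosis(p)

print(f"band power = {band_power:.4e}, kurtosis = {k:.2f}")
if band_power > 1e-3 or abs(k) > 1.0:      # assumed alarm thresholds
    print("possible blowout precursor flagged")
```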
Bismuth-based electrochemical stripping analysis
Wang, Joseph
2004-01-27
Method and apparatus for trace metal detection and analysis using bismuth-coated electrodes and electrochemical stripping analysis. Both anodic stripping voltammetry and adsorptive stripping analysis may be employed.
Rathi, Monika; Ahrenkiel, S P; Carapella, J J; Wanlass, M W
2013-02-01
Given an unknown multicomponent alloy, and a set of standard compounds or alloys of known composition, can one improve upon popular standards-based methods for energy dispersive X-ray (EDX) spectrometry to quantify the elemental composition of the unknown specimen? A method is presented here for determining elemental composition of alloys using transmission electron microscopy-based EDX with appropriate standards. The method begins with a discrete set of related reference standards of known composition, applies multivariate statistical analysis to those spectra, and evaluates the compositions with a linear matrix algebra method to relate the spectra to elemental composition. By using associated standards, only limited assumptions about the physical origins of the EDX spectra are needed. Spectral absorption corrections can be performed by providing an estimate of the foil thickness of one or more reference standards. The technique was applied to III-V multicomponent alloy thin films: composition and foil thickness were determined for various III-V alloys. The results were then validated by comparing with X-ray diffraction and photoluminescence analysis, demonstrating accuracy of approximately 1% in atomic fraction.
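To sketch the linear-algebra step of relating standard spectra to composition, here is a least-squares toy in Python; the spectra, compositions, and channel count are synthetic placeholders, and the paper's multivariate statistical analysis and absorption correction are not reproduced.

```python
import numpy as np

# Standards-based idea: learn a linear map from EDX spectra to
# composition using reference standards, then apply it to an unknown.
rng = np.random.default_rng(0)
n_std, n_chan, n_elem = 6, 50, 3

true_map = rng.random((n_chan, n_elem))          # hidden linear relation
comps = rng.dirichlet(np.ones(n_elem), n_std)    # known standard compositions
spectra = comps @ true_map.T + rng.normal(0, 0.01, (n_std, n_chan))

# With few standards the system is underdetermined; lstsq returns the
# minimum-norm solution, which suffices for this illustration.
B, *_ = np.linalg.lstsq(spectra, comps, rcond=None)

unknown = comps[0] @ true_map.T                  # spectrum of an "unknown"
est = unknown @ B
print("estimated atomic fractions:", (est / est.sum()).round(3))
```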
An advanced probabilistic structural analysis method for implicit performance functions
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
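The AMV method builds on the mean value linearization; the following sketch shows only that first-order step (mean and variance from a gradient at the mean), checked against Monte Carlo, with an invented response function standing in for an implicit finite element response.

```python
import numpy as np

# First-order mean value step: linearize the response g(X) at the
# input means and propagate the input variances.
def g(x):
    return x[0] ** 2 / x[1] + 3.0 * x[2]

mu = np.array([2.0, 1.0, 0.5])
sigma = np.array([0.1, 0.05, 0.1])

# Numerical gradient at the mean point (finite differences), as one
# would do when g is only available implicitly.
eps = 1e-6
grad = np.array([(g(mu + eps * np.eye(3)[i]) - g(mu)) / eps
                 for i in range(3)])

mean_g = g(mu)
std_g = np.sqrt(np.sum((grad * sigma) ** 2))
print(f"first-order: mean ~ {mean_g:.3f}, std ~ {std_g:.3f}")

# Monte Carlo check of the first-order estimate.
rng = np.random.default_rng(0)
s = rng.normal(mu, sigma, size=(100_000, 3))
vals = s[:, 0] ** 2 / s[:, 1] + 3.0 * s[:, 2]
print(f"Monte Carlo: mean {vals.mean():.3f}, std {vals.std():.3f}")
```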
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
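To make the attractor-identification task concrete on a scale where brute force works, here is a sketch that enumerates the state space of a tiny synchronous Boolean network; the three update rules are made up, and ADAM's algebraic (polynomial-system) approach is precisely what avoids this exponential enumeration.

```python
from itertools import product

# Brute-force attractor search for a 3-node synchronous Boolean network.
def step(state):
    x, y, z = state
    return (y and z, not x, x or y)   # illustrative update rules

attractors = set()
for s in product((False, True), repeat=3):
    seen = []
    while s not in seen:              # follow the trajectory until it loops
        seen.append(s)
        s = step(s)
    cycle = seen[seen.index(s):]      # the loop is an attractor
    i = cycle.index(min(cycle))       # canonical rotation, count each once
    attractors.add(tuple(cycle[i:] + cycle[:i]))

for a in attractors:
    print("attractor:", a)
```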
Decision tree and PCA-based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Sun, Weixiang; Chen, Jin; Li, Jiaqing
2007-04-01
After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce the features after data collection, preprocessing and feature extraction. Then, C4.5 is trained on the samples to generate a decision tree model with diagnosis knowledge. Finally, the tree model is used to perform the diagnosis. To validate the proposed method, six kinds of running states (normal or without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to test the C4.5 and PCA-based method against a back-propagation neural network (BPNN). The results show that the C4.5 and PCA-based diagnosis method has higher accuracy and needs less training time than the BPNN.
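A minimal sketch of the same two-stage idea follows, on synthetic "vibration feature" data; scikit-learn provides CART rather than C4.5, so the tree here is a close stand-in, and all data are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# PCA for feature reduction, then a decision tree for diagnosis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, (60, 12)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 60)   # e.g. normal, unbalance, radial rub

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(PCA(n_components=4),
                    DecisionTreeClassifier(random_state=0))
clf.fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```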
Chromý, Vratislav; Vinklárková, Bára; Šprongl, Luděk; Bittová, Miroslava
2015-01-01
We found previously that albumin-calibrated total protein in certified reference materials causes unacceptable positive bias in analysis of human sera. The simplest way to cure this defect is the use of human-based serum/plasma standards calibrated by the Kjeldahl method. Such standards, commutative with serum samples, will compensate for bias caused by lipids and bilirubin in most human sera. To find a suitable primary reference procedure for total protein in reference materials, we reviewed Kjeldahl methods adopted by laboratory medicine. We found two methods recommended for total protein in human samples: an indirect analysis based on total Kjeldahl nitrogen corrected for its nonprotein nitrogen and a direct analysis made on isolated protein precipitates. The methods found will be assessed in a subsequent article.
Alimohammadi, Nasrollah; Taleghani, Fariba
2015-01-01
Introduction: Health and the healthy human being, as core concepts of nursing, have attracted considerable attention in the Western literature but have received less attention in the context of Eastern philosophy. Methods: This study was based on philosophical inquiry, which can be carried out through different approaches, such as philosophical analysis by means of concept analysis. There are different methods for concept analysis; Morse's method was employed to analyze the concepts of health and the healthy human being, which we sought to clarify according to ideas deriving from Islamic thought. To achieve the research objective, Islamic texts were studied and analyzed based on the criteria of concept analysis (definition, attributes/characteristics, and boundaries). Results: Our analysis revealed that in Islamic thought the human being is an integrated entity; health therefore does not reside in any single dimension but, together with the health of society, gains meaning as a balanced and coordinated whole. Conclusion: Based on the results of this study, there are a series of similarities and differences between the perspectives on health in Islamic thought and the holism paradigm available in nursing. PMID:27462615
An Evaluation of a Computer-Based Training on the Visual Analysis of Single-Subject Data
ERIC Educational Resources Information Center
Snyder, Katie
2013-01-01
Visual analysis is the primary method of analyzing data in single-subject methodology, which is the predominant research method used in the fields of applied behavior analysis and special education. Previous research on the reliability of visual analysis suggests that judges often disagree about what constitutes an intervention effect. Considering…
Kahlert, Maria; Fink, Patrick
2017-01-01
An increasing number of studies use next generation sequencing (NGS) to analyze complex communities, but is the method sensitive enough when it comes to identification and quantification of species? We compared NGS with morphology-based identification methods in an analysis of microalgal (periphyton) communities. We conducted a mesocosm experiment in which we allowed two benthic grazer species to feed upon benthic biofilms, which resulted in altered periphyton communities. Morphology-based identification and 454 (Roche) pyrosequencing of the V4 region in the small ribosomal unit (18S) rDNA gene were used to investigate the community change caused by grazing. Both the NGS-based data and the morphology-based method detected a marked shift in the biofilm composition, though the two methods varied strongly in their abilities to detect and quantify specific taxa, and neither method was able to detect all species in the biofilms. For quantitative analysis, we therefore recommend using both metabarcoding and microscopic identification when assessing the community composition of eukaryotic microorganisms. PMID:28234997
Web-based multimedia information retrieval for clinical application research
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Hoo, Kent S., Jr.; Zhang, Hong; Ching, Wan; Zhang, Ming; Wong, Stephen T. C.
2001-08-01
We described a web-based data warehousing method for retrieving and analyzing neurological multimedia information. The web-based method supports convenient access, effective search and retrieval of clinical textual and image data, and on-line analysis. To improve the flexibility and efficiency of multimedia information query and analysis, a three-tier, multimedia data warehouse for epilepsy research has been built. The data warehouse integrates clinical multimedia data related to epilepsy from disparate sources and archives them into a well-defined data model.
Chapter A5. Section 6.1.F. Wastewater, Pharmaceutical, and Antibiotic Compounds
Lewis, Michael Edward; Zaugg, Steven D.
2003-01-01
The USGS differentiates between samples collected for analysis of wastewater compounds and those collected for analysis of pharmaceutical and antibiotic compounds, based on the analytical schedule for the laboratory method. Currently, only the wastewater laboratory method for field-filtered samples (SH1433) is an approved, routine (production) method. (The unfiltered wastewater method LC 8033 also is available but requires a proposal for custom analysis.) At this time, analysis of samples for pharmaceutical and antibiotic compounds is confined to research studies and is available only on a custom basis.
2012-01-01
Background This study illustrates an evidence-based method for the segmentation analysis of patients that could greatly improve the approach to population-based medicine, by filling a gap in the empirical analysis of this topic. Segmentation facilitates individual patient care in the context of the culture, health status, and health needs of the entire population to which that patient belongs. Because many health systems are engaged in developing better chronic care management initiatives, patient profiles are critical to understanding whether some patients can move toward effective self-management and can play a central role in determining their own care, which fosters a sense of responsibility for their own health. A review of the literature on patient segmentation provided the background for this research. Method First, we conducted a literature review on patient satisfaction and segmentation to build a survey. Then, we administered 3,461 surveys to users of outpatient services. The key structures on which the subjects’ perception of outpatient services was based were extrapolated using principal component factor analysis with varimax rotation. After the factor analysis, segmentation was performed through cluster analysis to better analyze the influence of individual attitudes on the results. Results Four segments were identified through factor and cluster analysis: the “unpretentious,” the “informed and supported,” the “experts” and the “advanced” patients. Their policy and managerial implications are outlined. Conclusions With this research, we provide the following: – a method for profiling patients based on common patient satisfaction surveys that is easily replicable in all health systems and contexts; – a proposal for segments based on the results of a broad-based analysis conducted in the Italian National Health System (INHS). Segments represent profiles of patients requiring different strategies for delivering health services. Knowledge and analysis of these segments might support an effort to build an effective population-based medicine approach. PMID:23256543
Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.
Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang
2018-02-24
This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during a force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that scale combination VI is suitable for estimating force from the extensors and combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experimental results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force estimates derived by the proposed method are better than those obtained by the former methods.
A Bootstrap Generalization of Modified Parallel Analysis for IRT Dimensionality Assessment
ERIC Educational Resources Information Center
Finch, Holmes; Monahan, Patrick
2008-01-01
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
A noninvasive, direct real-time PCR method for sex determination in multiple avian species
Brubaker, Jessica L.; Karouna-Renier, Natalie K.; Chen, Yu; Jenko, Kathryn; Sprague, Daniel T.; Henry, Paula F.P.
2011-01-01
Polymerase chain reaction (PCR)-based methods to determine the sex of birds are well established and have seen few modifications since they were first introduced in the 1990s. Although these methods allowed for sex determination in species that were previously difficult to analyse, they were not conducive to high-throughput analysis because of the laboriousness of DNA extraction and gel electrophoresis. We developed a high-throughput real-time PCR-based method for analysis of sex in birds, which uses noninvasive sample collection and avoids DNA extraction and gel electrophoresis.
Sha, Zhichao; Liu, Zhengmeng; Huang, Zhitao; Zhou, Yiyu
2013-08-29
This paper addresses the problem of direction-of-arrival (DOA) estimation of multiple wideband coherent chirp signals, and a new method is proposed. The new method is based on signal component analysis of the array output covariance, instead of the complicated time-frequency analysis used in the previous literature, and thus is more compact and effectively avoids possible signal energy loss during processing. Moreover, a priori information on the number of signals is no longer a necessity for DOA estimation in the new method. Simulation results demonstrate the performance superiority of the new method over previous ones.
NASA Astrophysics Data System (ADS)
Zhang, Rui; Xin, Binjie
2016-08-01
Yarn density is always considered the fundamental structural parameter used for the quality evaluation of woven fabrics. The conventional yarn density measurement method is based on one-side analysis. In this paper, a novel density measurement method is developed for yarn-dyed woven fabrics based on a dual-side fusion technique. Firstly, a laboratory dual-side imaging system is established to acquire both face-side and back-side images of the woven fabric, and an affine transform is used for the alignment and fusion of the dual-side images. Then, the color images of the woven fabrics are transferred from the RGB to the CIE-Lab color space, and the intensity information extracted from the L component is used for texture fusion and analysis. Subsequently, three image fusion methods are developed and utilized to merge the dual-side images: the weighted average method, the wavelet transform method and the Laplacian pyramid blending method. The fusion efficacy of each method is evaluated by three evaluation indicators, and the best-performing method is selected to reconstruct the complete fabric texture. Finally, the yarn density of the fused image is measured based on the fast Fourier transform, and the yarn alignment image can be reconstructed using the inverse fast Fourier transform. Our experimental results show that the density measurement obtained with the proposed method reaches 99.44% agreement with the traditional method, and the robustness of the new method is better than that of conventional analysis methods.
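The FFT-based density step can be sketched in a few lines: yarns appear as a quasi-periodic brightness pattern whose dominant spatial frequency gives yarns per unit length. The profile below is synthetic, and the pixel pitch and yarn count are assumed values.

```python
import numpy as np

# FFT-based yarn counting on a synthetic 1D intensity profile.
px_per_cm = 200
width_cm = 4.0
x = np.arange(int(px_per_cm * width_cm))
yarns_per_cm = 24.0
profile = np.sin(2 * np.pi * yarns_per_cm / px_per_cm * x)
profile += 0.3 * np.random.default_rng(0).normal(size=x.size)

# Dominant spatial frequency (cycles per cm) of the mean-removed profile.
spec = np.abs(np.fft.rfft(profile - profile.mean()))
freqs = np.fft.rfftfreq(x.size, d=1.0 / px_per_cm)
est = freqs[np.argmax(spec)]
print(f"estimated density: {est:.1f} yarns/cm")
```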
Clustering "N" Objects into "K" Groups under Optimal Scaling of Variables.
ERIC Educational Resources Information Center
van Buuren, Stef; Heiser, Willem J.
1989-01-01
A method based on homogeneity analysis (multiple correspondence analysis or multiple scaling) is proposed to reduce many categorical variables to one variable with "k" categories. The method is a generalization of the sum of squared distances cluster analysis problem to the case of mixed measurement level variables. (SLD)
Design of an image encryption scheme based on a multiple chaotic map
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun
2013-07-01
To address the problems that chaotic dynamics degenerate under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by the Devaney definition. In order to produce a large key space, a Cat map named the block Cat map is also designed for the permutation process, based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by a different chaotic map. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to key and plaintext are carried out to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and offers higher speed and higher security.
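For reference, here is the basic Arnold cat map permutation that the block Cat map generalizes; the sketch scrambles a small test array, and the block construction and substitution stage of the paper are not reproduced.

```python
import numpy as np

# Arnold cat map on an N x N image: (x, y) -> (x + y, x + 2y) mod N.
# The map matrix [[1, 1], [1, 2]] has determinant 1, so it is a
# bijection on the pixel grid (a pure permutation).
def cat_map(img, iterations=1):
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(cat_map(img, iterations=3))
# The map is periodic: iterating long enough returns the original image.
```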
Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis.
Bonham-Carter, Oliver; Steele, Joe; Bastola, Dhundy
2014-11-01
Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base-base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel-Ziv techniques from data compression. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
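The D2 statistic mentioned above is, in its simplest form, just the inner product of k-word count vectors; a minimal sketch follows, with short made-up sequences.

```python
from collections import Counter

# Alignment-free similarity via word (k-mer) counts: the D2 statistic
# is the inner product of the two k-word count vectors.
def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq_a, seq_b, k=3):
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

a = "ATGGCGTACGTTAGCATGCGTACG"
b = "ATGGCGTTCGTTAGCATGCGAACG"
c = "CCCCCCCCCCCCCCCCCCCCCCCC"
print("D2(a, b) =", d2(a, b))   # related sequences share many words
print("D2(a, c) =", d2(a, c))   # unrelated: few or no shared words
```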
Effectiveness of problem-based learning in Chinese pharmacy education: a meta-analysis.
Zhou, Jiyin; Zhou, Shiwen; Huang, Chunji; Xu, Rufu; Zhang, Zuo; Zeng, Shengya; Qian, Guisheng
2016-01-19
This review provides a critical overview of problem-based learning (PBL) practices in Chinese pharmacy education. PBL has yet to be widely applied in pharmaceutical education in China. The results of those studies that have been conducted are published in Chinese and thus may not be easily accessible to international researchers. Therefore, this meta-analysis was carried out to review the effectiveness of PBL. Databases were searched for studies in accordance with the inclusion criteria. Two reviewers independently performed the study identification and data extraction. A meta-analysis was conducted using Revman 5.3 software. Sixteen randomized controlled trials were included. The meta-analysis revealed that PBL had a positive association with higher theoretical scores (SMD = 1.17, 95% CI [0.77, 11.57], P < 0.00001). The questionnaire results show that PBL methods are superior to conventional teaching methods in improving students' learning interest, independent analysis skills, scope of knowledge, self-study, team spirit, and oral expression. This meta-analysis indicates that PBL pedagogy is superior to traditional lecture-based teaching in Chinese pharmacy education. PBL methods could be an optional, supplementary method of pharmaceutical teaching in China. However, Chinese pharmacy colleges and universities should revise PBL curricula according to their own needs, which would maximize the effectiveness of PBL.
Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis
NASA Astrophysics Data System (ADS)
Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal
Missing values in time series data is a well-known and important problem which many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA can provide better imputation in comparison to other methods.
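For orientation, here is the standard (L2) SSA pipeline that L1-SSA makes robust: embed the series in a trajectory matrix, truncate its SVD, and diagonal-average back to a series. The series and window choices are invented, and the L1-norm decomposition itself is not reproduced.

```python
import numpy as np

# Basic (L2) SSA reconstruction: embed, SVD, truncate, diagonal-average.
def ssa_reconstruct(x, L, r):
    n = len(x)
    k = n - L + 1
    X = np.column_stack([x[i:i + L] for i in range(k)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r truncation
    # Diagonal averaging (Hankelization) back to a series.
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(L):
        for j in range(k):
            rec[i + j] += Xr[i, j]
            cnt[i + j] += 1
    return rec / cnt

t = np.arange(200)
x = np.sin(2 * np.pi * t / 25) + 0.3 * np.random.default_rng(0).normal(size=200)
smooth = ssa_reconstruct(x, L=40, r=2)
print("residual std:", np.std(x - smooth).round(3))
```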
Bobst, Cedric E.; Kaltashov, Igor A.
2012-01-01
Mass spectrometry has already become an indispensable tool in the analytical armamentarium of the biopharmaceutical industry, although its current uses are limited to characterization of the covalent structure of recombinant protein drugs. However, the scope of applications of mass spectrometry-based methods is beginning to expand to include characterization of the higher order structure and dynamics of biopharmaceutical products, a development catalyzed by recent progress in mass spectrometry-based methods for studying higher order protein structure. The two particularly promising methods that are likely to have the most significant and lasting impact in many areas of biopharmaceutical analysis, direct ESI MS and hydrogen/deuterium exchange, are the focus of this article. PMID:21542797
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
ERIC Educational Resources Information Center
Hostager, Todd J.; Voiovich, Jason; Hughes, Raymond K.
2013-01-01
The authors apply a software-based content analysis method to uncover differences in responses by expert entrepreneurs and undergraduate entrepreneur majors to a new venture investment proposal. Data analyzed via the Leximancer software package yielded conceptual maps highlighting key differences in the nature of these responses. Study methods and…
NASA Astrophysics Data System (ADS)
Hano, Mitsuo; Hotta, Masashi
A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An algorithm for eliminating the constant term using a null space of the coefficient matrix is also described. In three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.
A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-01-01
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods. PMID:28665361
An Android malware detection system based on machine learning
NASA Astrophysics Data System (ADS)
Wen, Long; Yu, Haiyang
2017-08-01
The Android smartphone, with its open source character and excellent performance, has attracted many users. However, the convenience of the Android platform has also motivated the development of malware. The traditional method, which detects malware based on signatures, is unable to detect unknown applications. This article proposes a machine learning-based lightweight system that is capable of identifying malware on Android devices. In this system we extract features based on static analysis and dynamic analysis; a new feature selection approach based on principal component analysis (PCA) and Relief is then presented to decrease the dimensionality of the features. After that, a model is constructed with a support vector machine (SVM) for classification. Experimental results show that our system provides an effective method for Android malware detection.
The power-proportion method for intracranial volume correction in volumetric imaging analysis.
Liu, Dawei; Johnson, Hans J; Long, Jeffrey D; Magnotta, Vincent A; Paulsen, Jane S
2014-01-01
In volumetric brain imaging analysis, volumes of brain structures are typically assumed to be proportional or linearly related to intracranial volume (ICV). However, evidence abounds that many brain structures have power-law relationships with ICV. To take this relationship into account in volumetric imaging analysis, we propose a power-law-based method, the power-proportion method, for ICV correction. The performance of the new method is demonstrated using data from the PREDICT-HD study.
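The core idea can be sketched in a few lines: estimate the exponent b in volume ~ ICV^b by log-log regression, then divide by ICV^b instead of ICV. The data below are synthetic, and the true exponent and noise model are assumptions.

```python
import numpy as np

# Power-proportion idea in miniature: fit volume ~ ICV**b on a log-log
# scale, then correct by ICV**b rather than by ICV itself.
rng = np.random.default_rng(0)
icv = rng.normal(1500, 120, 200)                        # mL, synthetic
vol = 0.004 * icv ** 0.8 * rng.lognormal(0, 0.05, 200)  # true exponent 0.8

b, log_a = np.polyfit(np.log(icv), np.log(vol), 1)
print(f"estimated exponent b = {b:.2f}")

corrected = vol / icv ** b
# After correction the structure volume should no longer scale with ICV.
print("corr(ICV, corrected) =", np.corrcoef(icv, corrected)[0, 1].round(3))
```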
Spectral and correlation analysis with applications to middle-atmosphere radars
NASA Technical Reports Server (NTRS)
Rastogi, Prabhat K.
1989-01-01
The correlation and spectral analysis methods for uniformly sampled stationary random signals, estimation of their spectral moments, and problems arising due to nonstationarity are reviewed. Some of these methods are already in routine use in atmospheric radar experiments. Other methods based on the maximum entropy principle and time series models have been used in analyzing data, but are just beginning to receive attention in the analysis of radar signals. These methods are also briefly discussed.
Research on the raw data processing method of the hydropower construction project
NASA Astrophysics Data System (ADS)
Tian, Zhichao
2018-01-01
In this paper, based on the characteristics of fixed quota data, various mathematical-statistical analysis methods are compared and the improved Grubbs criterion is chosen to screen the data. Through analysis of the data processing results, it is shown that unsuitable data can be identified and removed, and that the method can be applied to the processing of fixed raw data. This paper provides a reference for reasonably determining effective quota analysis data.
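As a minimal sketch of repeated (two-tailed) Grubbs screening, assuming this is the kind of "improved" criterion meant, the following Python function removes the most extreme value while it exceeds the Grubbs critical value; the sample data are made up.

```python
import numpy as np
from scipy import stats

# Repeated two-tailed Grubbs screening: drop the most extreme value
# while it is flagged, then retest on the remaining data.
def grubbs_filter(x, alpha=0.05):
    x = np.asarray(x, dtype=float)
    while x.size > 2:
        n = x.size
        i = np.argmax(np.abs(x - x.mean()))
        g = np.abs(x[i] - x.mean()) / x.std(ddof=1)
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t ** 2 / (n - 2 + t ** 2))
        if g <= g_crit:
            break
        x = np.delete(x, i)   # drop the flagged outlier and retest
    return x

data = [10.2, 9.8, 10.1, 10.4, 9.9, 15.3, 10.0]
print("kept:", grubbs_filter(data))   # 15.3 is screened out
```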
NASA Astrophysics Data System (ADS)
Fan, Qingju; Wu, Yonghong
2015-08-01
In this paper, we develop a new method for the multifractal characterization of two-dimensional nonstationary signals, based on detrended fluctuation analysis (DFA). By applying it to two artificially generated signals, a two-component ARFIMA process and a binomial multifractal model, we show that the new method can reliably determine the multifractal scaling behavior of two-dimensional signals. We also illustrate applications of this method in finance and physiology. The results show that the two-dimensional signals under investigation exhibit power-law correlations; the electricity-market signal, consisting of electricity price and trading volume, is multifractal, while the two-dimensional EEG signal recorded during sleep for a single patient is only weakly multifractal. The new method based on detrended fluctuation analysis may add diagnostic power to existing statistical methods.
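For reference, here is the standard one-dimensional DFA that the paper's two-dimensional multifractal extension builds upon; the test signal is white noise, for which the scaling exponent should be near 0.5.

```python
import numpy as np

# Standard 1D DFA: integrate, detrend in windows, fit F(s) ~ s**alpha.
def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))          # profile (integrated series)
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.array(fluct)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                  # white noise: expect alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA exponent alpha = {alpha:.2f}")
```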
Mutual Coupling Analysis for Conformal Microstrip Antennas.
1984-12-01
...0.001/k0, and the infinite integral is terminated at k = 150 k0... C. MUTUAL COUPLING ANALYSIS: In this section, the moment method... it does provide an attractive alternative to the Green's function method on which the analysis in later sections is based. In the present... by the moment method, the chosen set of expansion dipole modes plays a very important role. The efficiency as well as the accuracy of the analysis depend...
Chan, Leo Li-Ying; Kuksin, Dmitry; Laverty, Daniel J; Saldi, Stephanie; Qiu, Jean
2015-05-01
The ability to accurately determine cell viability is essential to performing a well-controlled biological experiment. Typical experiments range from standard cell culturing to advanced cell-based assays that may require cell viability measurement for downstream experiments. The traditional cell viability measurement method has been the trypan blue (TB) exclusion assay. However, since the introduction of fluorescence-based dyes for cell viability measurement using flow or image-based cytometry systems, there have been numerous publications comparing the two detection methods. Although previous studies have shown discrepancies between TB exclusion and fluorescence-based viability measurements, image-based morphological analysis was not performed in order to examine the viability discrepancies. In this work, we compared TB exclusion and fluorescence-based viability detection methods using image cytometry to observe morphological changes due to the effect of TB on dead cells. Imaging results showed that as the viability of a naturally-dying Jurkat cell sample decreased below 70 %, many TB-stained cells began to exhibit non-uniform morphological characteristics. Dead cells with these characteristics may be difficult to count under light microscopy, thus generating an artificially higher viability measurement compared to fluorescence-based method. These morphological observations can potentially explain the differences in viability measurement between the two methods.
NASA Technical Reports Server (NTRS)
Dorris, William J.; Hairr, John W.; Huang, Jui-Tien; Ingram, J. Edward; Shah, Bharat M.
1992-01-01
Non-linear analysis methods were adapted and incorporated in a finite element based DIAL code. These methods are necessary to evaluate the global response of a stiffened structure under combined in-plane and out-of-plane loading, and include the Arc Length method and a target point analysis procedure. A new interface material model was implemented that can model the elastic-plastic behavior of the bond adhesive; a direct application of this model is in skin/stiffener interface failure assessment. Addition of the AML (angle minus longitudinal, or load) failure procedure and Hashin's failure criteria provides added capability in failure prediction. Interactive Stiffened Panel Analysis modules were developed as interactive pre- and post-processors. Each module provides the means of performing self-initiated finite element based analysis of primary structures such as a flat or curved stiffened panel, a corrugated flat sandwich panel, and a curved geodesic fuselage panel. This module brings finite element analysis into the design of composite structures without requiring the user to know much about the techniques and procedures needed to perform a finite element analysis from scratch. An interactive finite element code was developed to predict bolted joint strength considering material and geometrical non-linearity. The developed method conducts an ultimate strength failure analysis using a set of material degradation models.
NASA Astrophysics Data System (ADS)
Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming
2018-06-01
With the development of large rotary machines for faster and more integrated performance, their condition monitoring and fault diagnosis are becoming more challenging. Since the time-frequency (TF) pattern of the vibration signal from a rotary machine often contains condition information and fault features, methods based on TF analysis have been widely used to address these two problems in industry. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT). The GPTFT is achieved by inserting a rotation operator and a shift operator into the short-time Fourier transform, and can produce a highly concentrated TF pattern with a general kernel. A multi-component instantaneous frequency (IF) extraction method is proposed on this basis: the IF of each component is estimated by defining a spectrum concentration index (SCI), and the estimation process is iterated until all components are extracted. Tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of the method.
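For orientation, here is a minimal sketch of ridge-based IF extraction on a plain short-time Fourier transform, the transform the GPTFT generalizes; the paper's rotation/shift kernel operators and SCI-driven iteration are not reproduced, and the chirp signal and the concentration measure are illustrative assumptions:

```python
import numpy as np
from scipy.signal import stft

# Illustrative only: ridge extraction on a plain STFT (the transform the
# GPTFT generalizes); the chirp and the concentration measure are assumptions.
fs = 1024.0
t = np.arange(0, 4.0, 1.0 / fs)
x = np.cos(2 * np.pi * (50 * t + 10 * t ** 2))  # chirp with IF(t) = 50 + 20 t

f, frames, Z = stft(x, fs=fs, nperseg=256, noverlap=224)
ridge = f[np.argmax(np.abs(Z), axis=0)]  # max-energy frequency per frame ~ IF

# A concentration measure in the spirit of the paper's SCI: fraction of
# spectrogram energy captured by the ridge bins.
P = np.abs(Z) ** 2
sci = P.max(axis=0).sum() / P.sum()
print(ridge[10:15], round(sci, 3))
```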
NASA Astrophysics Data System (ADS)
Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-05-01
A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Owing to the database, the presented method is also capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting the spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military applications, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) the image fusion method itself and ii) quality analysis of the fusion results. First, a new contourlet-based image fusion method is presented as an improvement over wavelet-based fusion, and is applied to a case study to demonstrate its fusion performance. The fusion framework and scheme used in the study are discussed in detail. Second, quality analysis of the fusion results is discussed; we employed various quality metrics in order to analyze the results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than conventional wavelet-based fusion methods.
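For context, a minimal sketch of the conventional wavelet-based fusion that the paper improves upon (the contourlet transform itself is not available in PyWavelets); the fusion rule shown, keeping the multispectral approximation and injecting panchromatic details, is one common choice, and the random images are stand-ins:

```python
import numpy as np
import pywt

# Conventional wavelet fusion for reference (contourlets are not in
# PyWavelets); random arrays stand in for a pan/multispectral pair.
pan = np.random.rand(256, 256)  # high spatial resolution band
ms = np.random.rand(256, 256)   # one resampled multispectral band

cp = pywt.wavedec2(pan, 'db2', level=3)
cm = pywt.wavedec2(ms, 'db2', level=3)

# One common rule: keep the multispectral approximation (spectral content),
# inject the panchromatic detail coefficients (spatial content).
fused = [cm[0]] + cp[1:]
out = pywt.waverec2(fused, 'db2')
print(out.shape)
```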
Influence analysis in quantitative trait loci detection.
Dou, Xiaoling; Kuriki, Satoshi; Maeno, Akiteru; Takada, Toyoyuki; Shiroishi, Toshihiko
2014-07-01
This paper presents systematic methods for the detection of influential individuals that affect the log odds (LOD) score curve. We derive general formulas of influence functions for profile likelihoods and introduce them into two standard quantitative trait locus detection methods: the interval mapping method and single marker analysis. Besides influence analysis on specific LOD scores, we also develop influence analysis methods on the shape of the LOD score curves. A simulation-based method is proposed to assess the significance of the influence of individuals. These methods are shown to be useful in the influence analysis of a real dataset of an experimental population from an F2 mouse cross. By receiver operating characteristic analysis, we confirm that the proposed methods show better performance than existing diagnostics. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Feng, Ximeng; Li, Gang; Yu, Haixia; Wang, Shaohui; Yi, Xiaoqing; Lin, Ling
2018-03-01
Noninvasive blood component analysis by spectroscopy has been a hotspot in biomedical engineering in recent years. Dynamic spectrum provides an excellent basis for noninvasive blood component measurement, but studies have been limited to the application of broadband light sources and high-resolution spectroscopy instruments. In order to remove redundant information, a more effective wavelength selection method is presented in this paper. In contrast to many common wavelength selection methods, this method is grounded in the sensing mechanism, which gives it a clear interpretation and allows it to effectively avoid noise from the acquisition system. The spectral difference coefficient was theoretically proven to have guiding significance for wavelength selection. After theoretical analysis, a multi-band spectral difference coefficient wavelength selection method combined with the dynamic spectrum was proposed. An experimental analysis based on clinical trial data from 200 volunteers was conducted to illustrate the effectiveness of this method. An extreme learning machine was used to develop the calibration models between the dynamic spectrum data and hemoglobin concentration. The experimental results show that the prediction precision of hemoglobin concentration is higher with the multi-band spectral difference coefficient wavelength selection method than with other methods.
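A minimal extreme learning machine sketch of the kind used for the calibration model, under stated assumptions (sigmoid hidden layer, ridge-regularized output weights); X stands in for dynamic-spectrum features at selected wavelengths and y for hemoglobin concentrations:

```python
import numpy as np

# Minimal ELM sketch (assumed details: sigmoid hidden layer, ridge-regularized
# output weights). X stands in for dynamic-spectrum features at the selected
# wavelengths, y for hemoglobin concentrations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                    # 200 subjects, 12 wavelengths
y = X @ rng.normal(size=12) + 0.1 * rng.normal(size=200)

n_hidden = 50
W = rng.normal(size=(12, n_hidden))               # random input weights (fixed)
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # hidden activations

lam = 1e-3                                        # ridge parameter
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
print("train RMSE:", rmse)
```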
Liang, Xianrui; Ma, Meiling; Su, Weike
2013-01-01
Background: A method for chemical fingerprint analysis of Hibiscus mutabilis L. leaves was developed based on ultra performance liquid chromatography with photodiode array detection (UPLC-PAD) combined with similarity analysis (SA) and hierarchical clustering analysis (HCA). Materials and Methods: 10 batches of Hibiscus mutabilis L. leaf samples were collected from different regions of China, and UPLC-PAD was employed to collect their chemical fingerprints. Results: The relative standard deviations (RSDs) of the relative retention times (RRT) and relative peak areas (RPA) of 10 characteristic peaks (one of them identified as rutin) in precision, repeatability and stability tests were less than 3%, and the fingerprint analysis method was validated to be suitable for Hibiscus mutabilis L. leaves. Conclusions: The chromatographic fingerprints qualitatively showed abundant diversity of chemical constituents in the 10 batches of samples from different locations by similarity analysis, on the basis of the correlation coefficients calculated between each pair of fingerprints. Moreover, the HCA method clustered the samples into four classes, and the HCA dendrogram showed the close or distant relations among the 10 samples, which was consistent with the SA result to some extent. PMID:23930008
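A small sketch of the two chemometric steps named above, similarity analysis via correlation coefficients between fingerprints followed by hierarchical clustering; the fingerprint matrix is a random stand-in for the 10 batches of aligned peak areas:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Similarity analysis + HCA sketch; random stand-in for 10 batches x 40 peaks.
fp = np.random.rand(10, 40)
sim = np.corrcoef(fp)                         # correlation between fingerprints
dist = 1.0 - sim[np.triu_indices(10, k=1)]    # condensed distance vector
Z = linkage(dist, method='average')
classes = fcluster(Z, t=4, criterion='maxclust')  # cut dendrogram into 4 classes
print(classes)
```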
Use of CellNetAnalyzer in biotechnology and metabolic engineering.
von Kamp, Axel; Thiele, Sven; Hädicke, Oliver; Klamt, Steffen
2017-11-10
Mathematical models of the cellular metabolism have become an essential tool for the optimization of biotechnological processes. They help to obtain a systemic understanding of the metabolic processes in the used microorganisms and to find suitable genetic modifications maximizing the production performance. In particular, methods of stoichiometric and constraint-based modeling are frequently used in the context of metabolic and bioprocess engineering. Since metabolic networks can be complex and comprise hundreds or even thousands of metabolites and reactions, dedicated software tools are required for an efficient analysis. One such software suite is CellNetAnalyzer, a MATLAB package providing, among others, various methods for analyzing stoichiometric and constraint-based metabolic models. CellNetAnalyzer can be used via command-line based operations or via a graphical user interface with embedded network visualizations. Herein we will present key functionalities of CellNetAnalyzer for applications in biotechnology and metabolic engineering and thereby review constraint-based modeling techniques such as metabolic flux analysis, flux balance analysis, flux variability analysis, metabolic pathway analysis (elementary flux modes) and methods for computational strain design. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
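As an illustration of the constraint-based core of flux balance analysis (using scipy rather than CellNetAnalyzer's MATLAB API), the following toy network maximizes a biomass flux subject to steady state S·v = 0 and flux bounds:

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximize biomass flux v3 subject to S v = 0 and bounds.
# Reactions: v1 uptake -> A, v2 A -> B, v3 B -> biomass, v4 B -> byproduct.
S = np.array([[1, -1,  0,  0],    # metabolite A balance
              [0,  1, -1, -1]])   # metabolite B balance
c = np.array([0, 0, -1, 0])       # linprog minimizes, so negate biomass flux
res = linprog(c, A_eq=S, b_eq=[0, 0], bounds=[(0, 10)] * 4)
print("biomass flux:", -res.fun, "fluxes:", res.x)  # v1=v2=v3=10, v4=0
```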
Determination of Diethyl Phthalate and Polyhexamethylene Guanidine in Surrogate Alcohol from Russia
Monakhova, Yulia B.; Kuballa, Thomas; Leitz, Jenny; Lachenmeier, Dirk W.
2011-01-01
Analytical methods based on spectroscopic techniques were developed and validated for the determination of diethyl phthalate (DEP) and polyhexamethylene guanidine (PHMG), which may occur in unrecorded alcohol. Analysis for PHMG was based on UV-VIS spectrophotometry after derivatization with Eosin Y and 1H NMR spectroscopy of the DMSO extract. Analysis of DEP was performed with direct UV-VIS and 1H NMR methods. Multivariate curve resolution and spectra computation methods were used to confirm the presence of PHMG and DEP in the investigated beverages. Of 22 analysed alcohol samples, two contained DEP or PHMG. 1H NMR analysis also revealed the presence of signals of hawthorn extract in three medicinal alcohols used as surrogate alcohol. The simple and cheap UV-VIS methods can be used for rapid screening of surrogate alcohol samples for impurities, while 1H NMR is recommended for specific confirmatory analysis if required. PMID:21647285
Seismic Hazard Analysis — Quo vadis?
NASA Astrophysics Data System (ADS)
Klügel, Jens-Uwe
2008-05-01
The paper is dedicated to a review of the methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for applications such as the design of critical and general (non-critical) civil infrastructures and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical application. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as has been revealed in some recent large-scale studies. They result in an inability to treat dependencies between physical parameters correctly and, finally, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems by systematic use of expert elicitation has, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. The assessment of the technical and financial risks associated with potential earthquake damage requires a risk analysis, and the current method is based on a probabilistic approach with its unsolved deficiencies. Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and in general robust design basis for applications such as the design of critical infrastructures, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in safety factors derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially if applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic and seismological investigations or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events; such an extension provides a better-informed risk model suitable for risk-informed decision making.
Coelho, Lúcia H G; Gutz, Ivano G R
2006-03-15
A chemometric method for the analysis of conductometric titration data was introduced to extend its applicability to lower concentrations and more complex acid-base systems. Auxiliary pH measurements were made during the titration to assist the calculation of the distribution of protonatable species on the basis of known or guessed equilibrium constants. Conductivity values of each ionized or ionizable species possibly present in the sample were introduced into a general equation in which the only unknown parameters were the total concentrations of (conjugated) bases and of strong electrolytes not involved in acid-base equilibria. All these concentrations were adjusted by a multiparametric nonlinear regression (NLR) method based on the Levenberg-Marquardt algorithm. This first conductometric titration method with NLR analysis (CT-NLR) was successfully applied to simulated conductometric titration data and to synthetic samples with multiple components at concentrations as low as those found in rainwater (approximately 10 micromol L(-1)). It was possible to resolve and quantify mixtures containing a strong acid, formic acid, acetic acid, ammonium ion, bicarbonate and inert electrolyte with an accuracy of 5% or better.
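A hedged sketch of the CT-NLR idea: adjust total concentrations so that a computed conductivity curve matches the titration data via nonlinear least squares (scipy's Levenberg-Marquardt). The single-acid species model and molar conductivities below are simplified stand-ins, not the paper's full equation:

```python
import numpy as np
from scipy.optimize import least_squares

# Simplified stand-in model (one monoprotic acid + strong electrolyte),
# not the paper's full species-distribution equation.
Ka, Kw = 1.8e-5, 1e-14
lam = {'H': 349.8, 'OH': 198.0, 'A': 40.9, 'Na': 50.1}  # molar conductivities

def kappa(c_acid, c_salt, pH):
    h = 10.0 ** (-pH)
    frac_A = Ka / (Ka + h)              # fraction of the acid present as A-
    return (lam['H'] * h + lam['OH'] * Kw / h
            + lam['A'] * frac_A * c_acid + lam['Na'] * c_salt)

pH_meas = np.array([4.2, 4.6, 5.0, 5.4, 5.8])     # auxiliary pH readings
kappa_meas = kappa(2e-5, 1e-5, pH_meas)           # synthetic "measured" curve

fit = least_squares(lambda p: kappa(p[0], p[1], pH_meas) - kappa_meas,
                    x0=[1e-5, 1e-5], method='lm')  # Levenberg-Marquardt
print(fit.x)  # recovers ~[2e-5, 1e-5]
```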
Wang, Longfei; Lee, Sungyoung; Gim, Jungsoo; Qiao, Dandi; Cho, Michael; Elston, Robert C; Silverman, Edwin K; Won, Sungho
2016-09-01
Family-based designs have been repeatedly shown to be powerful in detecting the significant rare variants associated with human diseases. Furthermore, human diseases are often defined by the outcomes of multiple phenotypes, and thus we expect multivariate family-based analyses may be very efficient in detecting associations with rare variants. However, few statistical methods implementing this strategy have been developed for family-based designs. In this report, we describe one such implementation: the multivariate family-based rare variant association tool (mFARVAT). mFARVAT is a quasi-likelihood-based score test for rare variant association analysis with multiple phenotypes, and tests both homogeneous and heterogeneous effects of each variant on multiple phenotypes. Simulation results show that the proposed method is generally robust and efficient for various disease models, and we identify some promising candidate genes associated with chronic obstructive pulmonary disease. The software of mFARVAT is freely available at http://healthstat.snu.ac.kr/software/mfarvat/, implemented in C++ and supported on Linux and MS Windows. © 2016 WILEY PERIODICALS, INC.
NASA Astrophysics Data System (ADS)
Zabolotna, Natalia I.; Radchenko, Kostiantyn O.; Karas, Oleksandr V.
2018-01-01
A method for diagnosing breast fibroadenoma was proposed, based on statistical analysis (determination and analysis of the 1st-4th order statistical moments) of polarization images of the imaginary Jones-matrix elements of optically thin (attenuation coefficient τ ≤ 0.1) blood plasma films, with further intelligent differentiation based on "fuzzy" logic and discriminant analysis. The accuracy of differentiating blood plasma samples into the "norm" and breast "fibroadenoma" classes was 82.7% with linear discriminant analysis and 95.3% with the "fuzzy" logic method. The obtained results confirm the potentially high reliability of differentiation by "fuzzy" analysis.
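The statistical feature step is easy to make concrete; a minimal sketch computing the 1st-4th order moments of one polarization-image element (a random stand-in array):

```python
import numpy as np
from scipy.stats import skew, kurtosis

# 1st-4th order statistical moments of one polarization-image element
# (random stand-in for an imaginary Jones-matrix element map).
img = np.random.rand(128, 128).ravel()
features = [img.mean(), img.std(), skew(img), kurtosis(img)]
print(np.round(features, 3))
```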
NASA Astrophysics Data System (ADS)
Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong
2015-08-01
Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ~33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits/min. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
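A compact FBCCA-style sketch under stated assumptions (three sub-bands in the spirit of design M1, invented band edges, and a weighting of the form w(k) = k^-1.25 + 0.25): each band-passed sub-band is correlated via CCA with sine-cosine references at each candidate frequency, and the weighted squared correlations are summed:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

fs, T = 250, 2.0
t = np.arange(0, T, 1 / fs)
# Stand-in EEG: a 10 Hz SSVEP plus noise on 8 channels.
eeg = np.sin(2 * np.pi * 10 * t)[:, None] + 0.5 * np.random.randn(len(t), 8)

def cca_corr(X, Y):
    a, b = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(a[:, 0], b[:, 0])[0, 1]

bands = [(6, 40), (14, 40), (22, 40)]               # assumed sub-band edges
weights = [(k + 1) ** -1.25 + 0.25 for k in range(3)]
freqs = np.arange(8, 16, 0.2)
scores = []
for f0 in freqs:
    # Sine-cosine references at the fundamental and two harmonics.
    Y = np.column_stack([g(2 * np.pi * h * f0 * t)
                         for h in (1, 2, 3) for g in (np.sin, np.cos)])
    rho = 0.0
    for (lo, hi), w in zip(bands, weights):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='bandpass')
        rho += w * cca_corr(filtfilt(b, a, eeg, axis=0), Y) ** 2
    scores.append(rho)
print("detected:", round(freqs[int(np.argmax(scores))], 1), "Hz")  # ~10.0
```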
[Value-based medicine in ophthalmology].
Hirneiss, C; Neubauer, A S; Tribus, C; Kampik, A
2006-06-01
Value-based medicine (VBM) unifies costs and patient-perceived value (improvement in quality of life, length of life, or both) of an intervention. Value-based ophthalmology is of increasing importance for decisions in eye care. The methods of VBM are explained and definitions of the specific terminology in this field are given. The cost-utility analysis as part of health care economic analyses is explained. VBM exceeds evidence-based medicine by incorporating parameters of cost and benefits from an ophthalmological intervention. The benefit of the intervention is defined as an increase or maintenance of visual quality of life and can be determined by utility analysis. The time trade-off method is valid and reliable for utility analysis. The resources expended for the value gained in VBM are measured with cost-utility analysis in terms of cost per quality-adjusted life year gained (euros/QALY). Numerous cost-utility analyses of different ophthalmological interventions have been published. The fundamental instrument of VBM is cost-utility analysis. The results in cost per QALY allow estimation of cost effectiveness of an ophthalmological intervention. Using the time trade-off method for utility analysis allows the comparison of ophthalmological cost-utility analyses with those of other medical interventions. VBM is important for individual medical decision making and for general health care.
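A toy cost-utility calculation of the kind described (all numbers invented): time trade-off utilities before and after an intervention are converted to QALYs gained and then to euros/QALY:

```python
# All numbers invented: time trade-off utilities before/after an intervention.
utility_before, utility_after = 0.75, 0.85
years, cost = 10.0, 4000.0                 # remaining life years, cost in euros
qalys_gained = (utility_after - utility_before) * years   # 1.0 QALY
print(cost / qalys_gained, "euros/QALY")                  # 4000.0 euros/QALY
```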
Penning, Holger; Elsner, Martin
2007-11-01
Compound-specific isotope analysis can potentially provide unique information on the source and fate of pesticides in natural systems. Yet for isotope analysis, LC-based methods that rely on organic solvents often cannot be used, and GC-based analysis is frequently impossible due to thermolability of the analyte. A typical example of a compound with such properties is isoproturon (3-(4-isopropylphenyl)-1,1-dimethylurea), which belongs to the worldwide extensively used phenylurea herbicides. To make isoproturon accessible to carbon and nitrogen isotope analysis, we developed a GC-based method during which isoproturon was quantitatively fragmented to dimethylamine and 4-isopropylphenylisocyanate. Fragmentation occurred only partially in the injector and was mainly achieved on a heated capillary column. The fragments were then chromatographically separated and individually measured by isotope ratio mass spectrometry. The reliability of the method was tested in hydrolysis experiments with three isotopically different batches of isoproturon. For all three batches, the same isotope fractionation factors were observed during conversion, and the difference in isotope composition between the batches was preserved. This study demonstrates that fragmentation of phenylurea herbicides not only makes them accessible to isotope analysis but even enables determination of intramolecular isotope fractionation.
Rubel, Oliver; Bowen, Benjamin P
2018-01-01
Mass spectrometry imaging (MSI) is a transformative imaging method that supports the untargeted, quantitative measurement of the chemical composition and spatial heterogeneity of complex samples, with broad applications in life sciences, bioenergy, and health. While MSI data can be routinely collected, its broad application is currently limited by the lack of easily accessible analysis methods that can process data of the size, volume, diversity, and complexity generated by MSI experiments. The development and application of cutting-edge analytical methods are a core driver in MSI research for new scientific discoveries, medical diagnostics, and commercial innovation. However, the lack of means to share, apply, and reproduce analyses hinders the broad application, validation, and use of novel MSI analysis methods. To address this central challenge, we introduce the Berkeley Analysis and Storage Toolkit (BASTet), a novel framework for shareable and reproducible data analysis that supports standardized data and analysis interfaces, integrated data storage, data provenance, workflow management, and a broad set of integrated tools. Based on BASTet, we describe the extension of the OpenMSI mass spectrometry imaging science gateway to enable web-based sharing, reuse, analysis, and visualization of data analyses and derived data products. We demonstrate the application of BASTet and OpenMSI in practice to identify and compare characteristic substructures in the mouse brain based on their chemical composition measured via MSI.
Computer Graphics-aided systems analysis: application to well completion design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detamore, J.E.; Sarma, M.P.
1985-03-01
The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The development of the method is based on integrating the concepts of "Systems Analysis" with the techniques of "Computer Graphics". The concepts behind the method are very general in nature. This paper, however, illustrates the application of the method in solving gas well completion design problems. The use of the method will save time and improve the efficiency of such design and analysis problems. The method can be extended to other design and analysis aspects of oil and gas wells.
A study of microstructural characteristics and differential thermal analysis of Ni-based superalloys
NASA Technical Reports Server (NTRS)
Aggarwal, M. D.; Lal, R. B.; Oyekenu, Samuel A.; Parr, Richard; Gentz, Stephen
1989-01-01
The objective of this work is to correlate the mechanical properties of the Ni-based superalloy MAR M246(Hf), used in the Space Shuttle Main Engine, with its structural characteristics through systematic study of optical photomicrographs and differential thermal analysis. The authors developed a method of predicting the liquidus and solidus temperatures of various nickel-based superalloys (MAR-M247, Waspaloy, Udimet-41, and polycrystalline and single crystals of CMSX-2 and CMSX-3) and compared the predictions with experimental differential thermal analysis (DTA) curves obtained using a Perkin-Elmer DTA 1700. The method of predicting these temperatures is based on the additive effect of the components dissolved in nickel. The results were compared with the experimental values.
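A hedged sketch of an additive estimate of the kind described, T = T_Ni + Σ k_i·w_i over alloying elements with weight fractions w_i; the coefficients and composition below are invented placeholders, not the authors' values:

```python
# Invented placeholder coefficients, not the authors' values.
T_NI = 1455.0                                     # melting point of Ni, deg C
k = {'Cr': -4.0, 'Al': -6.0, 'Co': -1.0, 'W': -2.5}   # deg C per wt%, assumed
wt = {'Cr': 8.5, 'Al': 5.5, 'Co': 10.0, 'W': 10.0}    # composition, wt%, assumed
T_liquidus = T_NI + sum(k[e] * wt[e] for e in wt)
print(T_liquidus)  # 1353.0 deg C for this made-up composition
```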
Monakhova, Yulia B; Mushtakova, Svetlana P
2017-05-01
A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution results from spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors within 10%. The results demonstrate that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in cases of spectral overlap and the absence or inaccessibility of reference materials.
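A small sketch of the ICA-based quantification idea (sklearn's FastICA standing in for the authors' ICA variant): resolve component scores from mixture spectra, then learn a linear map from scores to known calibration concentrations; all spectra are synthetic:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
pure = np.abs(rng.normal(size=(2, 300)))        # two pure "spectra" (synthetic)
C = rng.uniform(0.1, 1.0, size=(15, 2))         # calibration concentrations
mix = C @ pure + 0.01 * rng.normal(size=(15, 300))

ica = FastICA(n_components=2, random_state=0)
scores = ica.fit_transform(mix)                 # per-sample component scores
# Affine map from scores to concentrations, learned on the calibration set:
design = np.column_stack([scores, np.ones(len(scores))])
coef, *_ = np.linalg.lstsq(design, C, rcond=None)
pred = np.column_stack([ica.transform(mix[:3]), np.ones(3)]) @ coef
print(np.round(pred - C[:3], 3))                # residuals near zero
```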
NASA Astrophysics Data System (ADS)
Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu
2015-12-01
Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we proposed straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture of Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture of Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
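A minimal per-pixel histogram-mode background estimator in the spirit of the method (bin count, frame count, and threshold are arbitrary demo choices):

```python
import numpy as np

frames = np.random.randint(0, 256, size=(120, 48, 64))  # stand-in video stack
bins = 32
edges = np.linspace(0, 256, bins + 1)
idx = np.digitize(frames, edges) - 1            # histogram bin per pixel/frame

background = np.zeros(frames.shape[1:])
for i in range(frames.shape[1]):
    for j in range(frames.shape[2]):
        counts = np.bincount(idx[:, i, j], minlength=bins)
        m = int(np.argmax(counts))              # modal bin over time
        background[i, j] = 0.5 * (edges[m] + edges[m + 1])  # bin center

# Player extraction would then threshold the frame-background difference:
mask = np.abs(frames[0] - background) > 40
print(mask.mean())
```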
Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng
2016-01-01
In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing LC/MS based metabolomics data. However, the performance and the sample-size dependence of those methods have not yet been exhaustively compared, and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison of these methods was conducted. As a result, the 16 methods were categorized into three groups based on their normalization performance across various sample sizes. The VSN, the Log Transformation and the PQN were identified as the methods with the best normalization performance, while the Contrast method consistently underperformed across all sub-datasets of the different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of the 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as useful guidance for the selection of suitable normalization methods in analyzing LC/MS based metabolomics data. PMID:27958387
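For illustration, one of the best-performing methods named above, probabilistic quotient normalization (PQN), on a stand-in samples-by-features matrix:

```python
import numpy as np

X = np.abs(np.random.randn(30, 500)) + 0.1    # stand-in samples x features
ref = np.median(X, axis=0)                    # reference "spectrum"
factors = np.median(X / ref, axis=1)          # one dilution factor per sample
X_pqn = X / factors[:, None]
print(X_pqn.shape)
```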
A Cognitive Approach to Teaching a Graduate-Level GEOBIA Course
NASA Astrophysics Data System (ADS)
Bianchetti, Raechel A.
2016-06-01
Remote sensing image analysis training occurs both in the classroom and in the research lab. Classroom education for traditional pixel-based image analysis has been standardized across college curricula. However, with the increasing interest in Geographic Object-Based Image Analysis (GEOBIA), there is a need to develop classroom instruction for this method of image analysis. While traditional remote sensing courses emphasize the expansion of skills and knowledge related to the use of computer-based analysis, GEOBIA courses should also examine the cognitive factors underlying visual interpretation. This paper provides an initial analysis of the development, implementation, and outcomes of a GEOBIA course that considers not only the computational methods of GEOBIA, but also the cognitive factors of expertise that such software attempts to replicate. Finally, a reflection on the first instantiation of the course is presented, together with plans for the development of an open-source repository of course materials.
Object-Based Image Analysis Beyond Remote Sensing - the Human Perspective
NASA Astrophysics Data System (ADS)
Blaschke, T.; Lang, S.; Tiede, D.; Papadakis, M.; Györi, A.
2016-06-01
We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place, incorporating spatial analysis and mapping techniques drawn from fields such as environmental psychology, geography, and computer science. The methodological lynchpin for this to happen, when aiming to delineate place in terms of objects, is object-based image analysis (OBIA).
Daxini, S D; Prajapati, J M
2014-01-01
Meshfree methods are viewed as next-generation computational techniques. With the evident limitations of conventional grid-based methods, like FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work reviews recent developments and some earlier applications of well-known meshfree methods like EFG and MLPG to various structural mechanics and fracture mechanics applications, including bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis, as well as some typical applications like vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of domain integrals, meshless methods are computationally expensive compared to conventional mesh-based methods. Improved versions of the original meshfree methods and other techniques suggested by researchers to improve their computational efficiency are also reviewed here.
Method of center localization for objects containing concentric arcs
NASA Astrophysics Data System (ADS)
Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.
2015-02-01
This paper proposes a method for the automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and a voting scheme optimized with the Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in a video-based system for automatic vehicle classification and (ii) tree growth ring analysis on tree cross-cut images.
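A sketch of the structure-tensor step that supplies gradient orientations for the voting stage (the Fast Hough Transform accumulation itself is omitted); the synthetic ring image makes the gradients point at the true center:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

y, x = np.mgrid[-64:64, -64:64]
img = np.exp(-((np.hypot(x, y) - 40.0) ** 2) / 20.0)  # synthetic concentric arc

Ix, Iy = sobel(img, axis=1), sobel(img, axis=0)
Jxx = gaussian_filter(Ix * Ix, 2.0)           # smoothed tensor components
Jxy = gaussian_filter(Ix * Iy, 2.0)
Jyy = gaussian_filter(Iy * Iy, 2.0)
theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)  # dominant orientation per pixel
strength = np.hypot(Ix, Iy)                   # gradient magnitude for weighting
# The voting stage would accumulate lines along theta from strong pixels;
# for concentric arcs those lines intersect at the common center.
print(theta.shape, float(strength.max()))
```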
An optimized rapid bisulfite conversion method with high recovery of cell-free DNA.
Yi, Shaohua; Long, Fei; Cheng, Juanbo; Huang, Daixin
2017-12-19
Methylation analysis of cell-free DNA is an encouraging tool for tumor diagnosis, monitoring and prognosis. The sensitivity of methylation analysis is critical because of the tiny amounts of cell-free DNA available in plasma. Most current methods of DNA methylation analysis are based on the difference in bisulfite-mediated deamination between cytosine and 5-methylcytosine. However, the recovery of bisulfite-converted DNA with current methods is very poor for the methylation analysis of cell-free DNA. We optimized a rapid method for the crucial steps of bisulfite conversion with high recovery of cell-free DNA. A rapid deamination step and alkaline desulfonation were combined with the purification of DNA on a silica column. The conversion efficiency and recovery of bisulfite-treated DNA were investigated by droplet digital PCR. The optimized reaction achieves complete cytosine conversion in 30 min at 70 °C and about 65% recovery of bisulfite-treated cell-free DNA, which is higher than that of current methods. The method allows high recovery from low levels of bisulfite-treated cell-free DNA, enhancing the sensitivity of methylation detection from cell-free DNA.
Syntactic methods of shape feature description and its application in analysis of medical images
NASA Astrophysics Data System (ADS)
Ogiela, Marek R.; Tadeusiewicz, Ryszard
2000-02-01
The paper presents specialist algorithms for morphological analysis of the shapes of selected abdominal organs, proposed in order to diagnose disease symptoms occurring in the main pancreatic ducts and the upper segments of the ureters. Analysis of the morphology of these structures has been conducted with the use of syntactic methods of pattern recognition. Its main objective is computer-aided support for the early diagnosis of neoplastic lesions and pancreatitis, based on images taken in the course of examination with the endoscopic retrograde cholangiopancreatography (ERCP) method, and the diagnosis of morphological lesions in the ureter based on kidney radiogram analysis. In the analysis of ERCP images, the main objective is to recognize morphological lesions in the pancreatic ducts characteristic of carcinoma and chronic pancreatitis; in the case of kidney radiograms, the aim is to diagnose local irregularities of the ureter lumen. Diagnosis of the above-mentioned lesions has been conducted with the use of syntactic methods of pattern recognition, in particular languages of shape feature description and context-free attributed grammars. These methods allow very efficient recognition and description of the aforementioned lesions on images obtained as a result of initial image processing into diagrams of the widths of the examined structures.
Ishii, Kazuei; Furuichi, Toru; Nagao, Yukari
2013-02-01
Land use at contaminated sites, following remediation, is often needed for regional redevelopment. However, few methods exist for developing economically and socially feasible land-use plans based on regional needs, because of the wide variety of land-use requirements. This study proposes a new needs analysis method for the conceptual land-use planning of contaminated sites and illustrates it with a case study of an illegal dumping site for hazardous waste. In this method, planning factors consisting of land-use attributes and related facilities are extracted from the potential needs of residents through a preliminary questionnaire. Using the extracted attributes and facilities, land-use cases are designed for selection-based conjoint analysis. A second questionnaire, administered to respondents of the first who indicated an interest in participating further, is used for the conjoint analysis to determine the utility function and the marginal cost of each attribute, in order to prioritize the planning factors and develop a quantitative, economically and socially feasible land-use plan. Based on the results, site-specific land-use alternatives are developed and evaluated using the utility function obtained from the conjoint analysis. In this case study of an illegal dumping site for hazardous waste, the uses preferred as part of a conceptual land-use plan following remediation were (1) agricultural land with a biogas plant designed to recover energy from biomass or (2) a park with a welfare facility and an athletic field. Our needs analysis method with conjoint analysis is applicable to the development of conceptual land-use plans for similar sites following remediation, particularly when added value is considered. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi
2016-01-01
Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method has not been assessed nor its implementation as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
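The geometric core of PAEA, principal angles between subspaces, is readily illustrated (scipy's subspace_angles standing in; PAEA's enrichment statistics are not reproduced, and the matrices are random stand-ins):

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
expr_dirs = rng.normal(size=(1000, 3))        # genes x expression directions
gene_set = np.zeros((1000, 1))                # indicator of a 50-gene set
gene_set[rng.choice(1000, size=50, replace=False)] = 1.0
angles = subspace_angles(expr_dirs, gene_set) # principal angle(s), radians
print(np.degrees(angles))                     # smaller angle => stronger enrichment
```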
ERIC Educational Resources Information Center
Yamaguchi, Yusuke; Sakamoto, Wataru; Goto, Masashi; Staessen, Jan A.; Wang, Jiguang; Gueyffier, Francois; Riley, Richard D.
2014-01-01
When some trials provide individual patient data (IPD) and the others provide only aggregate data (AD), meta-analysis methods for combining IPD and AD are required. We propose a method that reconstructs the missing IPD for AD trials by a Bayesian sampling procedure and then applies an IPD meta-analysis model to the mixture of simulated IPD and…
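A hedged sketch of the reconstruction idea: for an AD-only trial reporting a per-arm mean and SD, draw pseudo individual outcomes consistent with those aggregates and pool them with real IPD; a simple normal draw stands in for the paper's Bayesian sampling procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
ad = {'n': 120, 'mean': 5.2, 'sd': 1.4}       # published aggregates (invented)
pseudo = rng.normal(ad['mean'], ad['sd'], ad['n'])
# Rescale so the pseudo sample reproduces the reported aggregates exactly:
pseudo = (pseudo - pseudo.mean()) / pseudo.std() * ad['sd'] + ad['mean']

ipd = rng.normal(4.8, 1.5, 90)                # stand-in trial with real IPD
pooled = np.concatenate([ipd, pseudo])        # mixture passed to an IPD model
print(round(pooled.mean(), 3))
```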
The Shock and Vibration Digest, Volume 17, Number 8
1985-08-01
...ate, transmit, and radiate audible sound. ...dures are based on acoustic power flow, statistical energy analysis (SEA), and modal methods [22-28]. ... modified partition area ... features of the acoustic field. ... 85-1642 Statistical Energy Analysis, Structural Resonances, and Beam Networks. L.J. Lee, Heriot-Watt Univ., Chambers St., Edinburgh EH1 1HX, Scotland. Key words: Buildings, Statistical energy methods, Structural resonance. The statistical energy analysis method is ...
ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.
2011-04-20
While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
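A sketch of the summarization step described, PCA over an ensemble of plausible effective-area curves, keeping a few components that can then perturb the nominal calibration inside a fit; the curves are synthetic stand-ins for Chandra effective areas:

```python
import numpy as np

rng = np.random.default_rng(0)
nominal = 100 * np.exp(-np.linspace(0, 2, 300))         # nominal effective area
wiggle = 0.02 * np.sin(np.linspace(0, 6, 300))
samples = nominal * (1 + 0.03 * rng.normal(size=(500, 1))
                     + wiggle * rng.normal(size=(500, 1)))  # plausible curves

dev = samples - nominal
U, s, Vt = np.linalg.svd(dev - dev.mean(axis=0), full_matrices=False)
pcs = Vt[:2] * s[:2, None] / np.sqrt(len(samples))      # keep 2 components

# One plausible calibration realization inside a sampler/fit:
realization = nominal + dev.mean(axis=0) + rng.normal(size=2) @ pcs
print(realization.shape)
```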
Han, Junwei; Li, Chunquan; Yang, Haixiu; Xu, Yanjun; Zhang, Chunlong; Ma, Jiquan; Shi, Xinrui; Liu, Wei; Shang, Desi; Yao, Qianlan; Zhang, Yunpeng; Su, Fei; Feng, Li; Li, Xia
2015-01-01
Identifying dysregulated pathways from high-throughput experimental data in order to infer underlying biological insights is an important task. Current pathway-identification methods focus on single pathways in isolation; however, consideration of crosstalk between pathways could improve our understanding of alterations in biological states. We propose a novel method of pathway analysis based on global influence (PAGI) to identify dysregulated pathways, considering both within-pathway effects and crosstalk between pathways. We constructed a global gene-gene network based on the relationships among genes extracted from a pathway database. We then evaluated the extent of differential expression for each gene, and mapped the genes to the global network. The random walk with restart algorithm was used to calculate the extent to which genes are affected by global influence. Finally, we used cumulative distribution functions to determine the significance values of the dysregulated pathways. We applied the PAGI method to five cancer microarray datasets, and compared our results with gene set enrichment analysis and five other methods. Based on these analyses, we demonstrated that PAGI can effectively identify dysregulated pathways associated with cancer, with strong reproducibility and robustness. We implemented PAGI using freely available R-based and Web-based tools (http://bioinfo.hrbmu.edu.cn/PAGI). PMID:25551156
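The propagation step PAGI uses, random walk with restart on the global gene-gene network, takes only a few lines (toy adjacency matrix; the restart probability is an arbitrary choice):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy gene-gene adjacency
W = A / A.sum(axis=0)                       # column-stochastic transitions
p0 = np.array([1.0, 0.0, 0.0, 0.0])         # restart on the seed gene(s)
r = 0.3                                     # restart probability (arbitrary)

p = p0.copy()
for _ in range(200):
    p_new = (1 - r) * W @ p + r * p0
    if np.abs(p_new - p).sum() < 1e-12:
        break
    p = p_new
print(np.round(p, 4))                       # steady-state influence scores
```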
Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw
2017-01-01
Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes. PMID:29250096
Lakbub, Jude C; Shipman, Joshua T; Desaire, Heather
2018-04-01
Disulfide bonds are important structural moieties of proteins: they ensure proper folding, provide stability, and ensure proper function. With the increasing use of proteins for biotherapeutics, particularly monoclonal antibodies, which are highly disulfide bonded, it is now important to confirm the correct disulfide bond connectivity and to verify the presence, or absence, of disulfide bond variants in the protein therapeutics. These studies help to ensure safety and efficacy. Hence, disulfide bonds are among the critical quality attributes of proteins that have to be monitored closely during the development of biotherapeutics. However, disulfide bond analysis is challenging because of the complexity of the biomolecules. Mass spectrometry (MS) has been the go-to analytical tool for the characterization of such complex biomolecules, and several methods have been reported to meet the challenging task of mapping disulfide bonds in proteins. In this review, we describe the relevant, recent MS-based techniques and provide important considerations needed for efficient disulfide bond analysis in proteins. The review focuses on methods for proper sample preparation, fragmentation techniques for disulfide bond analysis, recent disulfide bond mapping methods based on the fragmentation techniques, and automated algorithms designed for rapid analysis of disulfide bonds from liquid chromatography-MS/MS data. Researchers involved in method development for protein characterization can use the information herein to facilitate development of new MS-based methods for protein disulfide bond analysis. In addition, individuals characterizing biotherapeutics, especially by disulfide bond mapping in antibodies, can use this review to choose the best strategies for disulfide bond assignment of their biologic products. Graphical Abstract: This review, describing characterization methods for disulfide bonds in proteins, focuses on three critical components: sample preparation, mass spectrometry data, and software tools.
Low-Resolution Raman-Spectroscopy Combustion Thermometry
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Kojima, Jun
2008-01-01
A method of optical thermometry, now undergoing development, involves low-resolution measurement of the spectrum of spontaneous Raman scattering (SRS) from N2 and O2 molecules. The method is especially suitable for measuring temperatures in high pressure combustion environments that contain N2, O2, or N2/O2 mixtures (including air). Methods based on SRS (in which scattered light is shifted in wavelength by amounts that depend on vibrational and rotational energy levels of laser-illuminated molecules) have been popular means of probing flames because they are almost the only methods that provide spatially and temporally resolved concentrations and temperatures of multiple molecular species in turbulent combustion. The present SRS-based method differs from prior SRS-based methods that have various drawbacks, a description of which would exceed the scope of this article. Two main differences between this and prior SRS-based methods are that it involves analysis in the frequency (equivalently, wavelength) domain, in contradistinction to analysis in the intensity domain in prior methods; and it involves low-resolution measurement of what amounts to predominantly the rotational Raman spectra of N2 and O2, in contradistinction to higher-resolution measurement of the vibrational Raman spectrum of N2 only in prior methods.
A COMBINED SPECTROSCOPIC AND PHOTOMETRIC STELLAR ACTIVITY STUDY OF EPSILON ERIDANI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giguere, Matthew J.; Fischer, Debra A.; Zhang, Cyril X. Y.
2016-06-20
We present simultaneous ground-based radial velocity (RV) measurements and space-based photometric measurements of the young and active K dwarf Epsilon Eridani. These measurements provide a data set for exploring methods of identifying and ultimately distinguishing stellar photospheric velocities from Keplerian motion. We compare three methods we have used in exploring this data set: Dalmatian, an MCMC spot modeling code that fits photometric and RV measurements simultaneously; the FF′ method, which uses photometric measurements to predict the stellar activity signal in simultaneous RV measurements; and H α analysis. We show that our H α measurements are strongly correlated with the Microvariability and Oscillations of STars (MOST) telescope photometry, which led to a promising new method based solely on the spectroscopic observations. This new method, which we refer to as the HH′ method, uses H α measurements as input into the FF′ model. While the Dalmatian spot modeling analysis and the FF′ method with MOST space-based photometry are currently more robust, the HH′ method only makes use of one of the thousands of stellar lines in the visible spectrum. By leveraging additional spectral activity indicators, we believe the HH′ method may prove quite useful in disentangling stellar signals.
Decerns: A framework for multi-criteria decision analysis
Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; ...
2015-02-27
A new framework, Decerns, for multicriteria decision analysis (MCDA) of a wide range of practical risk-management problems is introduced. The Decerns framework contains a library of modules that form the basis for two scalable systems: DecernsMCDA for analysis of multicriteria problems, and DecernsSDSS for multicriteria analysis of spatial options. DecernsMCDA includes well-known MCDA methods as well as original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on the analysis of a multicriteria location problem.
In-service teachers' perceptions of project-based learning.
Habók, Anita; Nagy, Judit
2016-01-01
The study analyses teachers' perceptions of methods, teacher roles, success and evaluation in PBL and traditional classroom instruction. The analysis is based on empirical data collected in primary schools and vocational secondary schools. An analysis of 109 questionnaires revealed numerous differences based on degree of experience and type of school. In general, project-based methods were preferred among teachers, who mostly perceived themselves as facilitators and considered motivation and transmission of values central to their work. Teachers appeared not to capitalize on the use of ICT tools or emotions. Students actively participated in the evaluation process via oral evaluation.
Paule‐Mandel estimators for network meta‐analysis with random inconsistency effects
Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose
2017-01-01
Network meta‐analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta‐analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between‐study heterogeneity. Models for network meta‐analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta‐analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta‐analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously and we also examine a challenging new dataset that is highly heterogeneous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood‐based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
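For readers unfamiliar with the univariate Paule and Mandel estimator that this paper generalizes, a minimal sketch is given below: it solves Q(τ²) = k − 1 by bisection, exploiting the fact that Q is decreasing in τ². This illustrates only the classical univariate estimator, not the network extension proposed in the paper.

```python
import numpy as np

def paule_mandel(y, v, tol=1e-10, max_iter=200):
    """Univariate Paule-Mandel estimate of the between-study variance tau^2.

    y: study effect estimates; v: within-study variances. Solves
    Q(tau2) = k - 1, where Q(tau2) = sum_i w_i * (y_i - mu_hat)^2 and
    w_i = 1 / (v_i + tau2); Q is decreasing in tau2, so bisection works.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def Q(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)

    if Q(0.0) <= k - 1:                 # no excess heterogeneity
        return 0.0
    hi = max(10.0 * np.var(y), 1.0)     # bracket the root from above
    while Q(hi) > k - 1:
        hi *= 2.0
    lo = 0.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(mid) > k - 1 else (lo, mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# example with three studies:
# paule_mandel([0.2, 0.5, 0.9], [0.04, 0.05, 0.06])
```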
Tertiary structure-based analysis of microRNA–target interactions
Gan, Hin Hark; Gunsalus, Kristin C.
2013-01-01
Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root mean square distance of their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009
GOMA: functional enrichment analysis tool based on GO modules
Huang, Qiang; Wu, Ling-Yun; Wang, Yong; Zhang, Xiang-Sun
2013-01-01
Analyzing the function of gene sets is a critical step in interpreting the results of high-throughput experiments in systems biology. A variety of enrichment analysis tools have been developed in recent years, but most output a long list of significantly enriched terms that are often redundant, making it difficult to extract the most meaningful functions. In this paper, we present GOMA, a novel enrichment analysis method based on the new concept of enriched functional Gene Ontology (GO) modules. With this method, we systematically revealed functional GO modules, i.e., groups of functionally similar GO terms, via an optimization model and then ranked them by enrichment scores. Our new method simplifies enrichment analysis results by reducing redundancy, thereby preventing inconsistent enrichment results among functionally similar terms and providing more biologically meaningful results. PMID:23237213
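As background, the enrichment of an individual GO term is conventionally scored with a one-sided hypergeometric test; the sketch below shows that building block only (GOMA's module-finding optimization model is not reproduced here, and the function name is illustrative).

```python
from scipy.stats import hypergeom

def term_enrichment_p(hits, study_size, term_size, background_size):
    """One-sided hypergeometric p-value that a study set of `study_size`
    genes contains `hits` genes annotated to a GO term covering
    `term_size` genes, out of `background_size` genes in total."""
    return hypergeom.sf(hits - 1, background_size, term_size, study_size)

# e.g. 12 of 200 study genes hit a 150-gene term in a 20000-gene background:
# p = term_enrichment_p(12, 200, 150, 20000)
```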
Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Xiaojia; Mao Qirong; Zhan Yongzhao
There are many emotion features. If all of these features are employed to recognize emotions, redundant features may be included; furthermore, the recognition result is unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features by using the contribution analysis algorithm of the NN. Cluster analysis is applied to assess the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
NASA Technical Reports Server (NTRS)
Heldenfels, Richard R
1951-01-01
A numerical method is presented for the stress analysis of stiffened-shell structures of arbitrary cross section under nonuniform temperature distributions. The method is based on a previously published procedure that is extended to include temperature effects and multicell construction. The application of the method to practical problems is discussed and an illustrative analysis is presented of a two-cell box beam under the combined action of vertical loads and a nonuniform temperature distribution.
Analyzing Visibility Configurations.
Dachsbacher, C
2011-04-01
Many algorithms, such as level of detail rendering and occlusion culling methods, make decisions based on the degree of visibility of an object, but do not analyze the distribution, or structure, of the visible and occluded regions across surfaces. We present an efficient method to classify different visibility configurations and show how this can be used on top of existing methods based on visibility determination. We adapt co-occurrence matrices for visibility analysis and generalize them to operate on clusters of triangular surfaces instead of pixels. We employ machine learning techniques to reliably classify the thus extracted feature vectors. Our method allows perceptually motivated level of detail methods for real-time rendering applications by detecting configurations with expected visual masking. We exemplify the versatility of our method with an analysis of area light visibility configurations in ray tracing and an area-to-area visibility analysis suitable for hierarchical radiosity refinement. Initial results demonstrate the robustness, simplicity, and performance of our method in synthetic scenes, as well as real applications.
NASA Technical Reports Server (NTRS)
Morduchow, Morris
1955-01-01
A survey of integral methods in laminar-boundary-layer analysis is first given. A simple and sufficiently accurate method for practical purposes of calculating the properties (including stability) of the laminar compressible boundary layer in an axial pressure gradient with heat transfer at the wall is presented. For flow over a flat plate, the method is applicable for an arbitrarily prescribed distribution of temperature along the surface and for any given constant Prandtl number close to unity. For flow in a pressure gradient, the method is based on a Prandtl number of unity and a uniform wall temperature. A simple and accurate method of determining the separation point in a compressible flow with an adverse pressure gradient over a surface at a given uniform wall temperature is developed. The analysis is based on an extension of the Karman-Pohlhausen method to the momentum and the thermal energy equations in conjunction with fourth- and especially higher degree velocity and stagnation-enthalpy profiles.
An analysis of general chain systems
NASA Technical Reports Server (NTRS)
Passerello, C. E.; Huston, R. L.
1972-01-01
A general analysis of dynamic systems consisting of connected rigid bodies is presented. The number of bodies and their manner of connection is arbitrary so long as no closed loops are formed. The analysis represents a dynamic finite element method, which is computer-oriented and designed so that nonworking internal constraint forces are automatically eliminated. The method is based upon Lagrange's form of d'Alembert's principle. Shifter matrix transformations are used with the geometrical aspects of the analysis. The method is illustrated with a space manipulator.
Face recognition using slow feature analysis and contourlet transform
NASA Astrophysics Data System (ADS)
Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan
2018-04-01
In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. The method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then exploits slow feature analysis for facial feature extraction. We name this new method, which combines slow feature analysis and the contourlet transform, CT-SFA. Experimental results on standard international face databases demonstrate that the new face recognition method is effective and competitive.
This method provides a procedure for the determination of low-level orthophosphate concentrations normally found in estuarine and/or coastal waters. It is based upon the method of Murphy and Riley [1] adapted for automated segmented flow analysis [2] in which the two reagent solutions ...
A Comparison of Imputation Methods for Bayesian Factor Analysis Models
ERIC Educational Resources Information Center
Merkle, Edgar C.
2011-01-01
Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amiable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…
The Precision Efficacy Analysis for Regression Sample Size Method.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…
ERIC Educational Resources Information Center
Duxbury, Mark
2004-01-01
An enzymatic laboratory experiment based on the analysis of serum is described that is suitable for students of clinical chemistry. The experiment incorporates an introduction to mathematical method-comparison techniques in which three different clinical glucose analysis methods are compared using linear regression and Bland-Altman difference…
Terrien, Jérémy; Marque, Catherine; Germain, Guy
2008-05-01
Time-frequency representations (TFRs) of signals are increasingly being used in biomedical research. Analysis of such representations is sometimes difficult, however, and is often reduced to the extraction of ridges, or local energy maxima. In this paper, we describe a new ridge extraction method based on the image processing technique of active contours, or snakes. We have tested our method on several synthetic signals and for the analysis of the uterine electromyogram, or electrohysterogram (EHG), recorded during gestation in monkeys. We have also evaluated a postprocessing algorithm that is especially suited to EHG analysis. Parameters are evaluated on real EHG signals from different gestational periods. The presented method gives good results when applied to synthetic as well as EHG signals. We obtained smaller ridge extraction errors than with two other methods specially developed for EHG. The gradient vector flow (GVF) snake method thus appears to be a good ridge extraction tool that could be used on TFRs of mono- or multicomponent signals with good results.
Evaluation of Contamination Inspection and Analysis Methods through Modeling System Performance
NASA Technical Reports Server (NTRS)
Seasly, Elaine; Dever, Jason; Stuban, Steven M. F.
2016-01-01
Contamination is usually identified as a risk on the risk register for sensitive space systems hardware. Despite detailed, time-consuming, and costly contamination control efforts during assembly, integration, and test of space systems, contaminants are still found during visual inspections of hardware. Improved methods are needed to gather information during systems integration to catch potential contamination issues earlier and manage contamination risks better. This research explores evaluation of contamination inspection and analysis methods to determine optical system sensitivity to minimum detectable molecular contamination levels based on IEST-STD-CC1246E non-volatile residue (NVR) cleanliness levels. Potential future degradation of the system is modeled given chosen modules representative of optical elements in an optical system, minimum detectable molecular contamination levels for a chosen inspection and analysis method, and determining the effect of contamination on the system. By modeling system performance based on when molecular contamination is detected during systems integration and at what cleanliness level, the decision maker can perform trades amongst different inspection and analysis methods and determine if a planned method is adequate to meet system requirements and manage contamination risk.
A Century of Enzyme Kinetic Analysis, 1913 to 2013
Johnson, Kenneth A.
2013-01-01
This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. PMID:23850893
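As an illustration of progress-curve fitting by numerical integration of rate equations, a minimal Python/SciPy sketch follows; the parameter values, initial substrate concentration and noise level are made up, and this is not the software reviewed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

S0 = 10.0  # assumed initial substrate concentration

def progress_curve(t, Vmax, Km):
    """Substrate vs. time by numerically integrating dS/dt = -Vmax*S/(Km+S)."""
    sol = solve_ivp(lambda _, S: -Vmax * S / (Km + S), (t[0], t[-1]), [S0],
                    t_eval=t, rtol=1e-8)
    return sol.y[0]

t = np.linspace(0.0, 50.0, 100)
rng = np.random.default_rng(0)
obs = progress_curve(t, 1.0, 5.0) + rng.normal(0.0, 0.02, t.size)  # fake data
popt, _ = curve_fit(progress_curve, t, obs, p0=[0.5, 2.0],
                    bounds=(0.0, np.inf))
# popt ~ [1.0, 5.0]: Vmax and Km recovered from the full time course
```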
Haghighi, Mona; Johnson, Suzanne Bennett; Qian, Xiaoning; Lynch, Kristian F; Vehik, Kendra; Huang, Shuai
2016-08-26
Regression models are extensively used in many epidemiological studies to understand the linkage between specific outcomes of interest and their risk factors. However, regression models in general examine the average effects of the risk factors and ignore subgroups with different risk profiles. As a result, interventions are often geared towards the average member of the population, without consideration of the special health needs of different subgroups within the population. This paper demonstrates the value of using rule-based analysis methods that can identify subgroups with heterogeneous risk profiles in a population without imposing assumptions on the subgroups or method. The rules define the risk pattern of subsets of individuals by not only considering the interactions between the risk factors but also their ranges. We compared the rule-based analysis results with the results from a logistic regression model in The Environmental Determinants of Diabetes in the Young (TEDDY) study. Both methods detected a similar suite of risk factors, but the rule-based analysis was superior at detecting multiple interactions between the risk factors that characterize the subgroups. A further investigation of the particular characteristics of each subgroup may detect the special health needs of the subgroup and lead to tailored interventions.
Particle Pollution Estimation Based on Image Analysis
Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian
2016-01-01
Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757
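The abstract does not enumerate the six image features, so the sketch below uses three generic haze-related features (luminance, contrast, a dark-channel proxy) purely to illustrate the feature-extraction-plus-regression idea; it is not the authors' pipeline.

```python
import numpy as np

def haze_features(rgb):
    """Three illustrative features from an HxWx3 image with values in [0, 1]."""
    gray = rgb.mean(axis=2)
    dark = rgb.min(axis=2)              # dark-channel proxy: haze raises it
    return np.array([gray.mean(),       # overall luminance
                     gray.std(),        # global contrast (haze lowers it)
                     dark.mean()])      # haze-density proxy

# With a feature matrix X (n_images x 3) and PM2.5 labels y, a linear
# predictor is one simple choice:
# beta = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]
```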
Young, Allan; Yatham, Lakshmi; Grunze, Heinz; Vieta, Eduard; Blier, Pierre; Moeller, Hans Jurgen; Kasper, Siegfried
2017-01-01
Background: This paper includes a short description of the important clinical aspects of Bipolar Disorder with emphasis on issues that are important for the therapeutic considerations, including mixed and psychotic features, predominant polarity, and rapid cycling as well as comorbidity. Methods: The workgroup performed a review and critical analysis of the literature concerning grading methods and methods for the development of guidelines. Results: The workgroup arrived at a consensus to base the development of the guideline on randomized controlled trials and related meta-analyses alone in order to follow a strict evidence-based approach. A critical analysis of the existing methods for the grading of treatment options was followed by the development of a new grading method to arrive at efficacy and recommendation levels after the analysis of 32 distinct scenarios of available data for a given treatment option. Conclusion: The current paper reports details on the design, method, and process for the development of CINP guidelines for the treatment of Bipolar Disorder. The rationale and the method with which all data and opinions are combined in order to produce an evidence-based operationalized but also user-friendly guideline and a specific algorithm are described in detail in this paper. PMID:27815414
An evaluation of authentication methods for smartphone based on users’ preferences
NASA Astrophysics Data System (ADS)
Sari, P. K.; Ratnasari, G. S.; Prasetio, A.
2016-04-01
This study examines smartphone screen-lock preferences across several types of authentication methods. The purpose is to determine user behaviours based on perceived security and convenience, as well as the preferences for different types of authentication methods. The variables used are the considerations for locking the screen and the types of authentication methods. The population consists of smartphone users, with a sample of 400 respondents selected through nonprobability sampling. Descriptive analysis was used as the data analysis method. The results show that convenience is still the major consideration for locking smartphone screens. The majority of users chose pattern unlock as the most convenient method, while fingerprint unlock was perceived as the most secure method and the one chosen for future use.
Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses
ERIC Educational Resources Information Center
Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo
2018-01-01
Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…
NASA Astrophysics Data System (ADS)
Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang
2017-05-01
This paper proposes an analytical method, based on the conformal mapping (CM) method, for the accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method transforms the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, the time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method is able to predict the magnetic flux density and EC loss precisely.
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples with ordinal variables, as a more challenging variable type for analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is granted. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.
Brusco, Michael J; Shireman, Emilie; Steinley, Douglas
2017-09-01
The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
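A minimal K-median routine for binary data under the L1 (city-block) metric might look like the sketch below (illustrative only; the study's simulation protocol is not reproduced). With dichotomous variables the coordinate-wise median reduces to a majority vote, which is why the method suits such data.

```python
import numpy as np

def k_median_binary(X, k, n_iter=50, seed=0):
    """K-median partitioning of a binary matrix X (objects x variables)
    under the L1 distance. For dichotomous data the coordinate-wise
    median is a majority vote (an exact tie gives 0.5)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)  # L1
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)
    return labels, centers
```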
C4 Software Technology Reference Guide - A Prototype.
1997-01-10
domain analysis methods include • Feature-oriented domain analysis (FODA) (see pg. 185), a domain analysis method based upon identifying the... Analysis (FODA) Feasibility Study (CMU/SEI-90-TR-21, ADA 235785). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 1990. 178...domain analysis (FODA) (see pg. 185), in which a feature is a user-visible aspect or characteristic of the domain [Kang 90].) The features in a system
Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels
Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V.; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R.
2018-01-01
Background: Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for squamous epithelium cervical intraepithelial neoplasia (CIN) classification into normal, CIN1, CIN2, and CIN3 grades. Methods: In this study, a deep learning (DL)-based nuclei segmentation approach is investigated based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. Results: The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. Conclusions: The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods. PMID:29619277
The Role of Multiphysics Simulation in Multidisciplinary Analysis
NASA Technical Reports Server (NTRS)
Rifai, Steven M.; Ferencz, Robert M.; Wang, Wen-Ping; Spyropoulos, Evangelos T.; Lawrence, Charles; Melis, Matthew E.
1998-01-01
This article describes the applications of the Spectrum™ solver in Multidisciplinary Analysis (MDA). Spectrum, multiphysics simulation software based on the finite element method, addresses compressible and incompressible fluid flow, structural, and thermal modeling, as well as the interaction between these disciplines. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena. Interaction constraints are enforced in a fully-coupled manner using the augmented-Lagrangian method. Within the multiphysics framework, the finite element treatment of fluids is based on the Galerkin-Least-Squares (GLS) method with discontinuity capturing operators. The arbitrary-Lagrangian-Eulerian method is utilized to account for deformable fluid domains. The finite element treatment of solids and structures is based on the Hu-Washizu variational principle. The multiphysics architecture lends itself naturally to high-performance parallel computing. Aeroelastic, propulsion, thermal management and manufacturing applications are presented.
Progress in multirate digital control system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1991-01-01
A new methodology for multirate sampled-data control design based on a new generalized control law structure, two new parameter-optimization-based control law synthesis methods, and a new singular-value-based robustness analysis method are described. The control law structure can represent multirate sampled-data control laws of arbitrary structure and dynamic order, with arbitrarily prescribed sampling rates for all sensors and update rates for all processor states and actuators. The two control law synthesis methods employ numerical optimization to determine values for the control law parameters. The robustness analysis method is based on the multivariable Nyquist criterion applied to the loop transfer function for the sampling period equal to the period of repetition of the system's complete sampling/update schedule. The complete methodology is demonstrated by application to the design of a combination yaw damper and modal suppression system for a commercial aircraft.
Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter
2017-06-28
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies be orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation; detection of neoclassical tearing modes, including locked mode precursors; automatic clustering of modes; and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.
Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging
Patel, Tapan P.; Man, Karen; Firestein, Bonnie L.; Meaney, David F.
2017-01-01
Background Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s–1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. New method Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. Results We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. Comparison with existing method(s) We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. Conclusions We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. PMID:25629800
Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks.
Oh, S June; Joung, Je-Gun; Chang, Jeong-Ho; Zhang, Byoung-Tak
2006-06-06
Inferring the tree of life requires knowledge of the common characteristics of each species descended from a common ancestor, to serve as the measuring criteria, and a method to calculate the distance between the resulting values of each measure. Conventional phylogenetic analysis based on genomic sequences provides information about the genetic relationships between different organisms. In contrast, comparative analysis of metabolic pathways in different organisms can yield insights into their functional relationships under different physiological conditions. However, evaluating the similarities or differences between metabolic networks is a computationally challenging problem, and systematic methods of doing this are desirable. Here we introduce a graph-kernel method for computing the similarity between metabolic networks in polynomial time, and use it to profile metabolic pathways and to construct phylogenetic trees. To compare the structures of metabolic networks in organisms, we adopted the exponential graph kernel, which is a kernel-based approach with a labeled graph that includes a label matrix and an adjacency matrix. To construct the phylogenetic trees, we used an unweighted pair-group method with arithmetic mean, i.e., a hierarchical clustering algorithm. We applied the kernel-based network profiling method in a comparative analysis of nine carbohydrate metabolic networks from 81 biological species encompassing Archaea, Eukaryota, and Eubacteria. The resulting phylogenetic hierarchies generally support the tripartite scheme of three domains rather than the two domains of prokaryotes and eukaryotes. By combining the kernel machines with metabolic information, the method infers the context of biosphere development that covers physiological events required for adaptation by genetic reconstruction. The results show that one may obtain a global view of the tree of life by comparing the metabolic pathway structures using meta-level information rather than sequence information. This method may yield further information about biological evolution, such as the history of horizontal transfer of each gene, by studying the detailed structure of the phylogenetic tree constructed by the kernel-based method.
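As a rough sketch of the kernel idea, the exponential graph kernel of an adjacency matrix A is exp(βA); comparing these kernel matrices across networks and clustering with average linkage (UPGMA) gives a toy version of the pipeline. The sketch assumes a shared node ordering across networks and omits the label-matrix part of the authors' kernel.

```python
import numpy as np
from scipy.linalg import expm
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

def diffusion_signature(A, beta=0.5):
    """Exponential graph kernel exp(beta * A) of an adjacency matrix,
    flattened into a feature vector. Requires a common node ordering
    across networks; node-label matrices are omitted here."""
    return expm(beta * np.asarray(A, float)).ravel()

# nets: list of adjacency matrices over a common compound/reaction set
# feats = np.array([diffusion_signature(A) for A in nets])
# tree = linkage(pdist(feats), method="average")   # UPGMA clustering
```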
Use of focused ultrasonication in activity-based profiling of deubiquitinating enzymes in tissue.
Nanduri, Bindu; Shack, Leslie A; Rai, Aswathy N; Epperson, William B; Baumgartner, Wes; Schmidt, Ty B; Edelmann, Mariola J
2016-12-15
To develop a reproducible tissue lysis method that retains enzyme function for activity-based protein profiling, we compared four different methods to obtain protein extracts from bovine lung tissue: focused ultrasonication, standard sonication, mortar & pestle method, and homogenization combined with standard sonication. Focused ultrasonication and mortar & pestle methods were sufficiently effective for activity-based profiling of deubiquitinases in tissue, and focused ultrasonication also had the fastest processing time. We used focused-ultrasonicator for subsequent activity-based proteomic analysis of deubiquitinases to test the compatibility of this method in sample preparation for activity-based chemical proteomics. Copyright © 2016 Elsevier Inc. All rights reserved.
Research of second harmonic generation images based on texture analysis
NASA Astrophysics Data System (ADS)
Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan
2014-09-01
Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, as a potential noninvasive tool for imaging biological tissues, has been widely used in medicine, with reduced phototoxicity and photobleaching. In this paper, we clarify the principles of texture analysis, including statistical, transform, structural and model-based methods, and give examples of its applications, reviewing studies of the technique. Moreover, we apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features of SHG images from collagen in normal and abnormal scars, and the scar SHG images were then classified as normal or abnormal. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with the wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for clinical diagnosis of scar types. Finally, future developments of texture analysis for SHG images are discussed.
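A plausible minimal version of the combined LBP/wavelet descriptor is sketched below, assuming scikit-image and PyWavelets are available; the paper's exact parameters and classifier are not specified in this abstract.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def texture_features(img, P=8, R=1.0):
    """Uniform-LBP histogram plus first-level wavelet sub-band energies
    for a 2-D grayscale image (illustrative descriptor)."""
    lbp = local_binary_pattern(img, P, R, method="uniform")   # values 0..P+1
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    cA, (cH, cV, cD) = pywt.dwt2(img, "db1")                  # 1-level DWT
    energies = [float(np.mean(np.square(c))) for c in (cA, cH, cV, cD)]
    return np.concatenate([hist, energies])
```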
NASA Astrophysics Data System (ADS)
Togayachi, Akira; Tomioka, Azusa; Fujita, Mika; Sukegawa, Masako; Noro, Erika; Takakura, Daisuke; Miyazaki, Michiyo; Shikanai, Toshihide; Narimatsu, Hisashi; Kaji, Hiroyuki
2018-04-01
To elucidate the relationship between the protein function and the diversity and heterogeneity of glycans conjugated to the protein, the glycosylation sites, glycan variation, and glycan proportions at each site of the glycoprotein must be analyzed. Glycopeptide-based structural analysis technology using mass spectrometry has been developed; however, complicated analyses of complex spectra obtained by multistage fragmentation are necessary, and the sensitivity and throughput of the analyses are low. Therefore, we developed a liquid chromatography/mass spectrometry (MS)-based glycopeptide analysis method to reveal the site-specific glycome (Glycan heterogeneity-based Relational IDentification of Glycopeptide signals on Elution profile, Glyco-RIDGE). This method uses accurate masses and retention times of glycopeptides, without requiring MS2, and can be applied to complex mixtures. To increase the number of identified peptides, the sample glycopeptides must be fractionated to reduce sample complexity. Therefore, in this study, glycopeptides were fractionated into four fractions by hydrophilic interaction chromatography, and each fraction was analyzed using the Glyco-RIDGE method. As a result, many glycopeptides carrying long glycans were enriched in the most hydrophilic fraction. Based on the monosaccharide composition, these glycans were thought to be poly-N-acetyllactosamine (polylactosamine [pLN]), and 31 pLN-carrier proteins were identified in HL-60 cells. Gene ontology enrichment analysis revealed that pLN carriers included many molecules related to signal transduction, receptors, and cell adhesion. Thus, these findings provide important insights into the analysis of the glycoproteome using our novel Glyco-RIDGE method.
Computerized spiral analysis using the iPad.
Sisti, Jonathan A; Christophe, Brandon; Seville, Audrey Rakovich; Garton, Andrew L A; Gupta, Vivek P; Bandin, Alexander J; Yu, Qiping; Pullman, Seth L
2017-01-01
Digital analysis of writing and drawing has become a valuable research and clinical tool for the study of upper limb motor dysfunction in patients with essential tremor, Parkinson's disease, dystonia, and related disorders. We developed a validated method of computerized spiral analysis of hand-drawn Archimedean spirals that provides insight into movement dynamics beyond subjective visual assessment using a Wacom graphics tablet. While the Wacom tablet method provides robust data, more widely available mobile technology platforms exist. We introduce a novel adaptation of the Wacom-based method for the collection of hand-drawn kinematic data using an Apple iPad. This iPad-based system is stand-alone, easy-to-use, can capture drawing data with either a finger or capacitive stylus, is precise, and potentially ubiquitous. The iPad-based system acquires position and time data that is fully compatible with our original spiral analysis program. All of the important indices including degree of severity, speed, presence of tremor, tremor amplitude, tremor frequency, variability of pressure, and tightness are calculated from the digital spiral data, which the application is able to transmit. While the iPad method is limited by current touch screen technology, it does collect data with acceptable congruence compared to the current Wacom-based method while providing the advantages of accessibility and ease of use. The iPad is capable of capturing precise digital spiral data for analysis of motor dysfunction while also providing a convenient, easy-to-use modality in clinics and potentially at home. Copyright © 2016 Elsevier B.V. All rights reserved.
Generalized fictitious methods for fluid-structure interactions: Analysis and simulations
NASA Astrophysics Data System (ADS)
Yu, Yue; Baek, Hyoungsu; Karniadakis, George Em
2013-07-01
We present a new fictitious pressure method for fluid-structure interaction (FSI) problems in incompressible flow by generalizing the fictitious mass and damping methods we published previously in [1]. The fictitious pressure method involves modification of the fluid solver whereas the fictitious mass and damping methods modify the structure solver. We analyze all fictitious methods for simplified problems and obtain explicit expressions for the optimal reduction factor (convergence rate index) at the FSI interface [2]. This analysis also demonstrates an apparent similarity of fictitious methods to the FSI approach based on Robin boundary conditions, which have been found to be very effective in FSI problems. We implement all methods, including the semi-implicit Robin based coupling method, in the context of spectral element discretization, which is more sensitive to temporal instabilities than low-order methods. However, the methods we present here are simple and general, and hence applicable to FSI based on any other spatial discretization. In numerical tests, we verify the selection of optimal values for the fictitious parameters for simplified problems and for vortex-induced vibrations (VIV) even at zero mass ratio ("for-ever-resonance"). We also develop an empirical a posteriori analysis for complex geometries and apply it to 3D patient-specific flexible brain arteries with aneurysms for very large deformations. We demonstrate that the fictitious pressure method enhances stability and convergence, and is comparable or better in most cases to the Robin approach or the other fictitious methods.
Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Huang, H.; Liu, J.; Pan, Y.
2012-07-01
The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis, however, is a supervised method that cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSI of natural scenes; it combines semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function incorporating unlabeled samples is designed. SSMFA preserves the manifold structure of labeled and unlabeled samples in addition to separating labeled samples of different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Classification experiments on a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI-classification methods.
NASA Astrophysics Data System (ADS)
Xie, Wen-Jie; Jiang, Zhi-Qiang; Gu, Gao-Feng; Xiong, Xiong; Zhou, Wei-Xing
2015-10-01
Many complex systems generate multifractal time series which are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross correlations. However, several important issues about these methods are not well understood and most methods consider only one moment order. We study the joint multifractal analysis based on partition function with two moment orders, which was initially invented to investigate fluid fields, and derive analytically several important properties. We apply the method numerically to binomial measures with multifractal cross correlations and bivariate fractional Brownian motions without multifractal cross correlations. For binomial multifractal measures, the explicit expressions of mass function, singularity strength and multifractal spectrum of the cross correlations are derived, which agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross correlations of index volatilities.
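For concreteness, the joint partition function with two moment orders, χ(q₁, q₂, s) = Σ μ₁(box)^q₁ μ₂(box)^q₂, can be estimated as in the sketch below (box definitions and normalization simplified; this is an illustration of the partition-function idea, not the authors' code).

```python
import numpy as np

def joint_partition(x, y, q1, q2, scales):
    """Joint partition function chi(q1, q2, s) of two positive measures
    defined on the same support, plus the joint scaling exponent
    tau(q1, q2) from the log-log slope."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    chi = []
    for s in scales:
        n = (len(x) // s) * s                       # trim to whole boxes
        mx = x[:n].reshape(-1, s).sum(axis=1)
        my = y[:n].reshape(-1, s).sum(axis=1)
        mx, my = mx / mx.sum(), my / my.sum()       # normalized box measures
        chi.append(np.sum(mx ** q1 * my ** q2))
    tau = np.polyfit(np.log(scales), np.log(chi), 1)[0]
    return np.array(chi), tau
```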
Generalized sample entropy analysis for traffic signals based on similarity measure
NASA Astrophysics Data System (ADS)
Shang, Du; Xu, Mengjia; Shang, Pengjian
2017-05-01
Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, matches signal patterns in a different way and reveals distinct complexity behaviors. Simulations are conducted over synthetic data and traffic signals to provide a comparative study and show the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation on the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems through the similarity measure.
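For reference, standard sample entropy is sketched below; the paper's modification would replace the Chebyshev match test (the `d <= tol` line) with a similarity-distance criterion. The sketch uses a common simplification of recounting templates at length m+1.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Plain sample entropy with Chebyshev matching; r is a fraction of
    the series standard deviation."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(t)) / 2.0    # drop self-matches

    return -np.log(matches(m + 1) / matches(m))
```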
Simulation Research on Vehicle Active Suspension Controller Based on G1 Method
NASA Astrophysics Data System (ADS)
Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui
2017-09-01
Based on the order relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. The active and passive suspension system of a single-wheel vehicle model is modeled and the system input signal model is determined. Next, the state-space equation of the system motion is established from kinetic principles, and the optimal linear controller design is completed with optimal control theory. The weighting coefficients of the performance index of the active/passive suspension are determined by the order relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the order relation analysis method under the given road conditions, the vehicle acceleration, suspension stroke and tire motion displacement are optimized, improving the comprehensive performance of the vehicle, and the active control is kept within the requirements.
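The G1 (order relation analysis) weights follow a simple recursion: given expert ratios between adjacent indicators already ranked by importance, the least important weight has the closed form w_n = (1 + Σ_{k=2}^{n} Π_{i=k}^{n} r_i)^{−1}, and the rest follow by back-substitution, w_{k−1} = r_k w_k. A minimal sketch:

```python
import numpy as np

def g1_weights(r):
    """Indicator weights from the G1 (order relation analysis) method.
    r[k] is the expert ratio w_k / w_{k+1} between adjacent indicators
    (0-based), so len(r) = n - 1 for n indicators."""
    r = np.asarray(r, float)
    n = len(r) + 1
    tail_prods = [np.prod(r[k:]) for k in range(len(r))]
    w = np.empty(n)
    w[-1] = 1.0 / (1.0 + sum(tail_prods))   # closed form for the last weight
    for k in range(n - 2, -1, -1):
        w[k] = r[k] * w[k + 1]              # back-substitute the ratios
    return w

# e.g. three indicators with adjacent ratios 1.2 and 1.4:
# g1_weights([1.2, 1.4])  -> weights summing to 1
```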
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K
2018-02-01
In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent, and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
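The core removal step can be approximated with an SVD, as in the hedged sketch below; the authors' criteria for choosing which and how many components to remove are not reproduced here.

```python
import numpy as np

def remove_top_components(resid, n_comp):
    """Subtract the top n_comp principal components from a (time x voxel)
    residual matrix, treating them as structured, unmodeled signal."""
    R = resid - resid.mean(axis=0)                    # center voxel series
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    structured = (U[:, :n_comp] * S[:n_comp]) @ Vt[:n_comp]
    return R - structured                             # cleaned residuals
```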
Missing value imputation in DNA microarrays based on conjugate gradient method.
Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh
2012-02-01
Analysis of gene expression profiles requires a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbors of the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbors is labeled as the most similar ones. The CG algorithm, with this subset as its input, is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different data sets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average normalized root mean square error (NRMSE) and relative NRMSE on different data sets with various missing rates show that CGimpute outperforms the other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
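A minimal reading of this recipe is sketched below (illustrative only; it assumes the neighbor genes are fully observed on the relevant columns and skips the authors' neighbor-subset refinement).

```python
import numpy as np
from scipy.sparse.linalg import cg

def cg_impute(X, gene, col, k=10):
    """Estimate the missing entry X[gene, col] from the k genes most
    correlated with `gene` (|Pearson r| over the columns where `gene`
    is observed), fitting the weights with conjugate gradients on the
    normal equations."""
    obs = ~np.isnan(X[gene])
    obs[col] = False
    r = np.full(X.shape[0], -1.0)
    for g in range(X.shape[0]):
        if g != gene:
            r[g] = abs(np.corrcoef(X[g, obs], X[gene, obs])[0, 1])
    nbrs = np.argsort(r)[-k:]            # indices of the k best neighbors
    G = X[nbrs][:, obs].T                # observations x neighbors
    w, _ = cg(G.T @ G, G.T @ X[gene, obs])   # least squares via CG
    return float(X[nbrs, col] @ w)
```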
New preparation method of β″-alumina and application for AMTEC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishi, Toshiro; Tsuru, Yasuhiko; Yamamoto, Hirokazu
1995-12-31
The Alkali Metal Thermo-Electric Converter (AMTEC) is an energy conversion system that converts heat to electrical energy with high efficiency. The β″-alumina solid electrolyte (BASE) is the most important component in the AMTEC system. In this paper, the relationship among the conduction property, the microstructure and the amount of each chemical component of BASE is studied. Based on an analysis of the chemical reaction for each component, the authors established a new BASE preparation method in place of the conventional one. They also report the performance of an AMTEC cell using this electrolyte tube, on which a Mo or TiC electrode is deposited by the screen printing method. An electrochemical analysis and a heat cycle test of the AMTEC cell are also presented.
Position Accuracy Analysis of a Robust Vision-Based Navigation
NASA Astrophysics Data System (ADS)
Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.
2018-05-01
Using images to determine camera position and attitude is a well-established method, widely used for applications such as UAV navigation. In harsh environments, where GNSS can be degraded or denied, image-based positioning is a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was performed beforehand to build the 3D model of the test area. A position accuracy analysis is performed and the effect of the proposed robust method is validated.
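Robust pose estimation of this kind is commonly implemented with a RANSAC wrapper around a PnP solver. The sketch below uses OpenCV's solvePnPRansac with placeholder 3D-2D matches and made-up camera intrinsics; it illustrates the general approach, not the authors' specific estimator.

```python
import numpy as np
import cv2

# Hypothetical 3D model points (e.g., from a photogrammetric 3D map) and
# their detected 2D image projections; all values here are placeholders.
object_pts = (np.random.rand(30, 3) * 10).astype(np.float32)
image_pts = (np.random.rand(30, 2) * 640).astype(np.float32)
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float32)   # assumed camera intrinsics

# RANSAC-based PnP: blunders among the 2D-3D matches are rejected as
# outliers before the pose is estimated from the inlier set.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_pts, image_pts, K, None, reprojectionError=4.0)
if ok:
    R = cv2.Rodrigues(rvec)[0]
    print("camera position (model frame):", (-R.T @ tvec).ravel())
```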
Evaluation of redundancy analysis to identify signatures of local adaptation.
Capblancq, Thibaut; Luu, Keurcien; Blum, Michael G B; Bazin, Eric
2018-05-26
Ordination is a common tool in ecology that aims at representing complex biological information in a reduced space. In landscape genetics, ordination methods such as principal component analysis (PCA) have been used to detect adaptive variation based on genomic data. Taking advantage of environmental data in addition to genotype data, redundancy analysis (RDA) is another ordination approach that is useful for detecting adaptive variation. This paper proposes a test statistic based on RDA to search for loci under selection. We compare redundancy analysis to pcadapt, which is a nonconstrained ordination method, and to a latent factor mixed model (LFMM), which is a univariate genotype-environment association method. Individual-based simulations identify evolutionary scenarios where RDA genome scans have greater statistical power than genome scans based on PCA. By constraining the analysis with environmental variables, RDA performs better than PCA in identifying adaptive variation when selection gradients are weakly correlated with population structure. Additionally, we show that while RDA and LFMM have similar power to identify genetic markers associated with environmental variables, the RDA-based procedure has the advantage of identifying the main selective gradients as a combination of environmental variables. To give a concrete illustration of RDA in population genomics, we apply this method to the detection of outliers and selective gradients on an SNP data set of Populus trichocarpa (Geraldes et al., 2013). The RDA-based approach identifies the main selective gradient contrasting southern and coastal populations with northern and continental populations on the northwestern American coast. This article is protected by copyright. All rights reserved.
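Conceptually, RDA amounts to a multivariate regression of the genotype matrix on the environmental predictors followed by an ordination (SVD/PCA) of the fitted values. The toy numpy sketch below illustrates this; the data are simulated, and the extreme-loading rule is only a stand-in for the paper's actual test statistic.

```python
import numpy as np

def rda_loci_scores(Y, X, n_axes=2):
    """Toy redundancy analysis: regress centered genotypes Y (ind x loci)
    on environmental predictors X (ind x vars), then take an SVD of the
    fitted values. Loci with extreme loadings on the constrained axes
    are candidate outliers."""
    Yc = Y - Y.mean(axis=0)
    Xc = np.column_stack([np.ones(len(X)), X - X.mean(axis=0)])
    B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)   # multivariate regression
    fitted = Xc @ B
    U, S, Vt = np.linalg.svd(fitted, full_matrices=False)
    return Vt[:n_axes].T                          # (loci x axes) loadings

rng = np.random.default_rng(2)
geno = rng.integers(0, 3, size=(100, 500)).astype(float)  # 0/1/2 genotypes
env = rng.standard_normal((100, 3))                       # 3 env variables
scores = rda_loci_scores(geno, env)
print("most extreme locus on axis 1:", np.abs(scores[:, 0]).argmax())
```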
A Review of Research Ethics in Internet-Based Research
ERIC Educational Resources Information Center
Convery, Ian; Cox, Diane
2012-01-01
Internet-based research methods can include: online surveys, web page content analysis, videoconferencing for online focus groups and/or interviews, analysis of "e-conversations" through social networking sites, email, chat rooms, discussion boards and/or blogs. Over the last ten years, an upsurge in internet-based research (IBR) has led…
Fuzzy method of recognition of high molecular substances in evidence-based biology
NASA Astrophysics Data System (ADS)
Olevskyi, V. I.; Smetanin, V. T.; Olevska, Yu. B.
2017-10-01
Nowadays, the requirements for reliable results and high-quality research put mathematical methods for the analysis of results at the forefront. Because of this, evidence-based methods of processing experimental data have become increasingly popular in the biological sciences and medicine. Their basis is meta-analysis, a method for the quantitative generalization of a large number of randomized trials addressing the same problem, which are often contradictory and performed by different authors. It allows the most important trends and quantitative indicators of the data to be identified, proposed hypotheses to be verified, and new effects in the population genotype to be discovered. The existing methods of recognizing high-molecular-weight substances by gel electrophoresis of proteins under denaturing conditions are based on approximate comparison of the contrast of electrophoregrams with a standard solution of known substances. We propose a fuzzy method for modeling experimental data to increase the accuracy and validity of findings in the detection of new proteins.
Harmonic analysis of electrified railway based on improved HHT
NASA Astrophysics Data System (ADS)
Wang, Feng
2018-04-01
In this paper, the causes and harmful effects of harmonics in electric locomotive electrical systems are first studied and analyzed. Based on the characteristics of the harmonics in the electrical system, the Hilbert-Huang transform (HHT) method is introduced. Based on an in-depth analysis of the empirical mode decomposition method and the Hilbert transform, the causes of, and solutions to, the endpoint effect and the modal aliasing problem in the HHT method are explored. For the endpoint effect, this paper uses a point-symmetric extension method to extend the collected data; for the modal aliasing problem, it preprocesses the signal with a high-frequency auxiliary harmonic and gives an empirical formula for this auxiliary harmonic. Finally, combining the suppression of the endpoint effect and of modal aliasing, an improved HHT method is proposed and simulated in MATLAB. The simulation results show that the improved HHT is effective for the electric locomotive power supply system.
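The point-symmetric extension step can be illustrated compactly: each end of the record is mirrored through its endpoint before EMD, and the extended margins are discarded afterwards. The numpy sketch below is a generic version of this idea, not necessarily the paper's exact scheme.

```python
import numpy as np

def point_symmetric_extend(x, n_ext):
    """Extend a signal at both ends by point symmetry about its
    endpoints, a common way to push EMD end effects outside the
    region of interest."""
    left = 2 * x[0] - x[n_ext:0:-1]              # mirror about first sample
    right = 2 * x[-1] - x[-2:-n_ext - 2:-1]      # mirror about last sample
    return np.concatenate([left, x, right])

t = np.linspace(0, 1, 500)
sig = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 250 * t)
ext = point_symmetric_extend(sig, n_ext=50)
# After EMD and the Hilbert transform, the extended margins are discarded.
```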
10 CFR 431.173 - Requirements applicable to all manufacturers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... COMMERCIAL AND INDUSTRIAL EQUIPMENT Provisions for Commercial Heating, Ventilating, Air-Conditioning and... is based on engineering or statistical analysis, computer simulation or modeling, or other analytic... method or methods used; (B) The mathematical model, the engineering or statistical analysis, computer...
[An EMD based time-frequency distribution and its application in EEG analysis].
Li, Xiaobing; Chu, Meng; Qiu, Tianshuang; Bao, Haiping
2007-10-01
Hilbert-Huang transform (HHT) is a new time-frequency analysis method for nonlinear and non-stationary signals. The key step of this method is empirical mode decomposition (EMD), with which any complicated signal can be decomposed into a finite and small number of intrinsic mode functions (IMFs). In this paper, a new EMD-based method for suppressing the cross-term of the Wigner-Ville distribution (WVD) is developed and applied to the analysis of epileptic EEG signals. The simulation data and analysis results show that the new method suppresses the cross-term of the WVD effectively, with excellent resolution.
Analysis of pressure distortion testing
NASA Technical Reports Server (NTRS)
Koch, K. E.; Rees, R. L.
1976-01-01
The development of a distortion methodology, method D, was documented, and its application to steady state and unsteady data was demonstrated. Three methodologies based upon DIDENT, a NASA-LeRC distortion methodology based upon the parallel compressor model, were investigated by applying them to a set of steady state data. The best formulation was then applied to an independent data set. The good correlation achieved with this data set showed that method E, one of the above methodologies, is a viable concept. Unsteady data were analyzed by using the method E methodology. This analysis pointed out that the method E sensitivities are functions of pressure defect level as well as corrected speed and pattern.
Comparing Mycobacterium tuberculosis genomes using genome topology networks.
Jiang, Jianping; Gu, Jianlei; Zhang, Liang; Zhang, Chenyi; Deng, Xiao; Dou, Tonghai; Zhao, Guoping; Zhou, Yan
2015-02-14
Over the last decade, emerging research methods, such as comparative genomic analysis and phylogenetic study, have yielded new insights into genotypes and phenotypes of closely related bacterial strains. Several findings have revealed that genomic structural variations (SVs), including gene gain/loss, gene duplication and genome rearrangement, can lead to different phenotypes among strains, and an investigation of genes affected by SVs may extend our knowledge of the relationships between SVs and phenotypes in microbes, especially in pathogenic bacteria. In this work, we introduce a 'Genome Topology Network' (GTN) method based on gene homology and gene locations to analyze genomic SVs and perform phylogenetic analysis. Furthermore, the concept of 'unfixed ortholog' has been proposed, whose members are affected by SVs in genome topology among close species. To improve the precision of 'unfixed ortholog' recognition, a strategy to detect annotation differences and complete gene annotation was applied. To assess the GTN method, a set of thirteen complete M. tuberculosis genomes was analyzed as a case study. GTNs with two different gene homology-assigning methods were built, the Clusters of Orthologous Groups (COG) method and the orthoMCL clustering method, and two phylogenetic trees were constructed accordingly, which may provide additional insights into whole genome-based phylogenetic analysis. We obtained 24 unfixable COG groups, of which most members were related to immunogenicity and drug resistance, such as PPE-repeat proteins (COG5651) and transcriptional regulator TetR gene family members (COG1309). The GTN method has been implemented in PERL and released on our website. The tool can be downloaded from http://homepage.fudan.edu.cn/zhouyan/gtn/ , and allows re-annotating the 'lost' genes among closely related genomes, analyzing genes affected by SVs, and performing phylogenetic analysis. With this tool, many immunogenic-related and drug resistance-related genes were found to be affected by SVs in M. tuberculosis genomes. We believe that the GTN method will be suitable for the exploration of genomic SVs in connection with biological features of bacterial strains, and that GTN-based phylogenetic analysis will provide additional insights into whole genome-based phylogenetic analysis.
Ruppin, Eytan; Papin, Jason A; de Figueiredo, Luis F; Schuster, Stefan
2010-08-01
With the advent of modern omics technologies, it has become feasible to reconstruct (quasi-) whole-cell metabolic networks and characterize them in more and more detail. Computer simulations of the dynamic behavior of such networks are difficult due to a lack of kinetic data and to computational limitations. In contrast, network analysis based on appropriate constraints such as the steady-state condition (constraint-based analysis) is feasible and allows one to derive conclusions about the system's metabolic capabilities. Here, we review methods for the reconstruction of metabolic networks, modeling techniques such as flux balance analysis and elementary flux modes and current progress in their development and applications. Game-theoretical methods for studying metabolic networks are discussed as well. Copyright © 2010 Elsevier Ltd. All rights reserved.
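Flux balance analysis itself reduces to a linear program: maximize an objective flux subject to the steady-state constraint Sv = 0 and flux bounds. Below is a minimal sketch with a made-up three-metabolite, four-reaction network; the stoichiometry and bounds are purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions)
S = np.array([[1, -1,  0,  0],    # metabolite A: produced by v1, used by v2
              [0,  1, -1,  0],    # metabolite B: produced by v2, used by v3
              [0,  1,  0, -1]])   # metabolite C: produced by v2, used by v4
c = np.array([0, 0, -1, 0])       # maximize v3 (linprog minimizes, so -v3)
bounds = [(0, 10)] * 4            # irreversible reactions, capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal flux distribution:", res.x)   # all fluxes hit 10 here
```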
NASA Astrophysics Data System (ADS)
Szafranko, E.
2017-08-01
When planning a building structure, dilemmas arise as to which construction and material solutions are feasible. The decisions are not always obvious. A procedure for selecting the variant that will best satisfy the expectations of the investor and of future users of a structure must be founded on mathematical methods. The following deserve special attention: the MCE methods, Hierarchical Analysis Methods and Weighting Methods. Another interesting solution, particularly useful for evaluations which take into account negative values, is the Indicator Method. MCE methods are relatively popular owing to the simplicity of the calculations and the ease of interpreting the results. Once the input data are properly prepared, these methods enable the user to compare variants on a common basis. In a situation where an analysis involves a large amount of data, it is more convenient to divide the data into groups according to main criteria and subcriteria. This option is provided by hierarchical analysis methods. They are based on ordered sets of criteria, which are evaluated in groups. In some cases, this approach yields results that are superior and easier to read. If an analysis encompasses direct and indirect effects, an Indicator Method is a justified choice for selecting the right solution. The Indicator Method is different in character and relies on weights and assessments of effects. It allows the user to evaluate the analyzed variants effectively. This article explains the methodology of conducting a multi-criteria analysis, showing its advantages and disadvantages. A worked example of the calculations shows what problems can be encountered when assessing various building material and structural solutions. For comparison, an analysis based on graphical methods developed by the author is presented.
Determination of sex origin of meat and meat products on the DNA basis: a review.
Gokulakrishnan, Palanisamy; Kumar, Rajiv Ranjan; Sharma, Brahm Deo; Mendiratta, Sanjod Kumar; Malav, Omprakash; Sharma, Deepak
2015-01-01
Sex determination of domestic animals' meat is of potential value in meat authentication and quality control studies. Methods aiming at determining the sex origin of meat may be based either on hormone analysis or on nucleic acid analysis. At the present time, sex determination of meat and meat products based on hormone analysis employs gas chromatography-mass spectrometry (GC-MS), high-performance liquid chromatography-mass spectrometry/mass spectrometry (HPLC-MS/MS), and enzyme-linked immunosorbent assay (ELISA). Most of the hormone-based methods proved to be highly specific and sensitive but were not performed on a regular basis for meat sexing due to technical limitations or the expensive equipment required. On the other hand, the most common methodology to determine the sex of meat is unquestionably traditional polymerase chain reaction (PCR), which involves gel electrophoresis of DNA amplicons. This review is intended to provide an overview of the DNA-based methods for sex determination of meat and meat products.
Self-adaptive relevance feedback based on multilevel image content analysis
NASA Astrophysics Data System (ADS)
Gao, Yongying; Zhang, Yujin; Fu, Yu
2001-01-01
In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is key to improving querying. Among the related techniques, relevance feedback has become a hot research topic because it incorporates information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods, our scheme provides a self-adaptive operation. First, based on multilevel image content analysis, the relevant images from the user can be automatically analyzed at different levels and the querying can be modified according to the different analysis results. Second, to make it more convenient for the user, the relevance feedback procedure can be conducted with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results obtained by our self-adaptive relevance feedback are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, D.W.; Linnhoff, B.
In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the 'temperature interval' (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early, and positively leads on to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
Wavelet-based tracking of bacteria in unreconstructed off-axis holograms.
Marin, Zach; Wallace, J Kent; Nadeau, Jay; Khalil, Andre
2018-03-01
We propose an automated wavelet-based method of tracking particles in unreconstructed off-axis holograms to provide rough estimates of the presence of motion and particle trajectories in digital holographic microscopy (DHM) time series. The wavelet transform modulus maxima segmentation method is adapted and tailored to extract Airy-like diffraction disks, which represent bacteria, from DHM time series. In this exploratory analysis, the method shows potential for estimating bacterial tracks in low-particle-density time series, based on a preliminary analysis of both living and dead Serratia marcescens, and for rapidly providing a single-bit answer to whether a sample chamber contains living or dead microbes or is empty. Copyright © 2017 Elsevier Inc. All rights reserved.
Validity and consistency assessment of accident analysis methods in the petroleum industry.
Ahmadi, Omran; Mortazavi, Seyed Bagher; Khavanin, Ali; Mokarami, Hamidreza
2017-11-17
Accident analysis is the main aspect of accident investigation. It involves connecting different causes in a procedural way. It is therefore important to use valid and reliable methods for the investigation of the different causal factors of accidents, especially the noteworthy ones. This study aimed to assess the accuracy (sensitivity index [SI]) and consistency of the six accident analysis methods most commonly used in the petroleum industry. In order to evaluate the methods, two real case studies (a process safety accident and a personal accident) from the petroleum industry were analyzed by 10 assessors, and the accuracy and consistency of the methods were then evaluated. The assessors were trained in a workshop on accident analysis methods. The systematic cause analysis technique and the bowtie method achieved the greatest SI scores for personal and process safety accidents, respectively. The best average consistency for a single method (based on 10 independent assessors) was in the region of 70%. This study confirmed that the application of methods with pre-defined causes and a logic tree could enhance the sensitivity and consistency of accident analysis.
Santra, Kalyan; Smith, Emily A.; Petrich, Jacob W.; ...
2016-12-12
It is often convenient to know the minimum amount of data needed in order to obtain a result of desired accuracy and precision. It is a necessity in the case of subdiffraction-limited microscopies, such as stimulated emission depletion (STED) microscopy, owing to the limited sample volumes and the extreme sensitivity of the samples to photobleaching and photodamage. We present a detailed comparison of probability-based techniques (the maximum likelihood method and methods based on the binomial and the Poisson distributions) with residual minimization-based techniques for retrieving the fluorescence decay parameters for various two-fluorophore mixtures, as a function of the total number of photon counts, in time-correlated, single-photon counting experiments. The probability-based techniques proved to be the most robust (insensitive to initial values) in retrieving the target parameters and, in fact, performed equivalently to 2-3 significant figures. This is to be expected, as we demonstrate that the three methods are fundamentally related. Furthermore, methods based on the Poisson and binomial distributions have the desirable feature of providing a bin-by-bin analysis of a single fluorescence decay trace, which thus permits statistics to be acquired using only the one trace for not only the mean and median values of the fluorescence decay parameters but also for the associated standard deviations. Lastly, these probability-based methods lend themselves well to the analysis of the sparse data sets that are encountered in subdiffraction-limited microscopies.
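The Poisson maximum-likelihood fit described here can be sketched directly: the bin-by-bin negative log-likelihood of a two-exponential decay is minimized numerically. The decay parameters and photon counts below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t, counts):
    """Bin-by-bin Poisson negative log-likelihood of a two-exponential
    decay (terms independent of the parameters are dropped)."""
    a1, tau1, a2, tau2 = params
    model = a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)
    model = np.clip(model, 1e-12, None)           # keep log() defined
    return np.sum(model - counts * np.log(model))

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 256)
true = 200 * np.exp(-t / 0.5) + 80 * np.exp(-t / 3.0)
counts = rng.poisson(true)                        # synthetic photon counts
fit = minimize(neg_log_likelihood, x0=[150, 0.4, 100, 2.0],
               args=(t, counts), method="Nelder-Mead")
print("estimated (a1, tau1, a2, tau2):", fit.x)
```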
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Kuangcai
The goal of this study is to help with future data analysis and experiment design in rotational dynamics research using the DIC-based SPORT technique. Most current studies using DIC-based SPORT techniques are technical demonstrations. Understanding the mechanisms behind the observed rotational behaviors of the imaging probes should be the focus of future SPORT studies. More efforts are still needed in the development of new imaging probes, particle tracking methods, instrumentation, and advanced data analysis methods to further extend the potential of the DIC-based SPORT technique.
An Integrated Low-Speed Performance and Noise Prediction Methodology for Subsonic Aircraft
NASA Technical Reports Server (NTRS)
Olson, E. D.; Mavris, D. N.
2000-01-01
An integrated methodology has been assembled to compute the engine performance, takeoff and landing trajectories, and community noise levels for a subsonic commercial aircraft. Where feasible, physics-based noise analysis methods have been used to make the results more applicable to newer, revolutionary designs and to allow for a more direct evaluation of new technologies. The methodology is intended to be used with approximation methods and risk analysis techniques to allow for the analysis of a greater number of variable combinations while retaining the advantages of physics-based analysis. Details of the methodology are described and limited results are presented for a representative subsonic commercial aircraft.
Shu, Ting; Zhang, Bob; Tang, Yuan Yan
2017-01-01
At present, heart disease is the number one cause of death worldwide. Traditionally, heart disease is commonly detected using blood tests, electrocardiograms, cardiac computerized tomography scans, cardiac magnetic resonance imaging, and so on. However, these traditional diagnostic methods are time consuming and/or invasive. In this paper, we propose an effective noninvasive computerized method based on facial images to quantitatively detect heart disease. Specifically, facial key-block color features are extracted from facial images and analyzed using the Probabilistic Collaborative Representation Based Classifier. The idea of facial key-block color analysis is founded in Traditional Chinese Medicine. A new dataset consisting of 581 heart disease and 581 healthy samples was used to evaluate the proposed method. In order to optimize the Probabilistic Collaborative Representation Based Classifier, an analysis of its parameters was performed. According to the experimental results, the proposed method obtains the highest accuracy compared with other classifiers and is proven to be effective at heart disease detection.
ERIC Educational Resources Information Center
Zheng, Lanqin; Yang, Kaicheng; Huang, Ronghuai
2012-01-01
This study proposes a new method named the IIS-map-based method for analyzing interactions in face-to-face collaborative learning settings. This analysis method is conducted in three steps: firstly, drawing an initial IIS-map according to collaborative tasks; secondly, coding and segmenting information flows into information items of IIS; thirdly,…
Elsohaby, Ibrahim; McClure, J Trenton; Riley, Christopher B; Bryanton, Janet; Bigsby, Kathryn; Shaw, R Anthony
2018-02-20
Attenuated total reflectance infrared (ATR-IR) spectroscopy is a simple, rapid and cost-effective method for the analysis of serum. However, the complex nature of serum remains a limiting factor for the reliability of this method. We investigated the benefits of coupling centrifugal ultrafiltration with ATR-IR spectroscopy for the quantification of human serum IgA concentration. Human serum samples (n = 196) were analyzed for IgA using an immunoturbidimetric assay. ATR-IR spectra were acquired for whole serum samples and for the retentate (residue) reconstituted with saline following 300 kDa centrifugal ultrafiltration. IR-based analytical methods were developed for each of the two spectroscopic datasets, and the accuracies of the two methods were compared. The analytical methods were based upon partial least squares regression (PLSR) calibration models: one with 5 PLS factors (for whole serum) and the second with 9 PLS factors (for the reconstituted retentate). Comparison of the two sets of IR-based analytical results with the reference IgA values revealed improvements in the Pearson correlation coefficient (from 0.66 to 0.76) and in the root mean squared error of prediction of IR-based IgA concentrations (from 102 to 79 mg/dL) for the ultrafiltration retentate-based method as compared to the method built upon whole serum spectra. Depleting low-molecular-weight proteins from human serum using a 300 kDa centrifugal filter thus enhances the accuracy of IgA quantification by ATR-IR spectroscopy. Further evaluation and optimization of this general approach may ultimately lead to routine analysis of a range of high-molecular-weight analytical targets that are otherwise unsuitable for IR-based analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
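A minimal sketch of the PLSR calibration step, using scikit-learn on synthetic stand-ins for the spectra and reference IgA values; the 9-factor setting mirrors the retentate model described above, and the linear relation in the toy data is invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for ATR-IR spectra (rows) and reference IgA values
rng = np.random.default_rng(4)
spectra = rng.standard_normal((196, 800))          # 196 sera, 800 wavenumbers
iga = spectra[:, :5].sum(axis=1) * 40 + 300        # toy linear relation

X_tr, X_te, y_tr, y_te = train_test_split(spectra, iga, random_state=0)
pls = PLSRegression(n_components=9)                # 9 PLS factors
pls.fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"RMSEP: {rmsep:.1f} mg/dL (toy data)")
```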
On Multifunctional Collaborative Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2001-01-01
Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations, including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented, with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as the most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each method's strengths are utilized.
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.
2017-06-01
Computer holography has made notable progress in recent years. The point-based and slice-based methods are the chief algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in the imaging quality of these methods have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms produced by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. The numerically and experimentally reconstructed images are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.
Categorical data processing for real estate objects valuation using statistical analysis
NASA Astrophysics Data System (ADS)
Parygin, D. S.; Malikov, V. P.; Golubev, A. V.; Sadovnikova, N. P.; Petrova, T. M.; Finogeev, A. G.
2018-05-01
Theoretical and practical approaches to the use of statistical methods for studying various properties of infrastructure objects are analyzed in the paper. Methods of forecasting the value of objects are considered. A method for coding categorical variables describing properties of real estate objects is proposed. The analysis of the results of modeling the price of real estate objects using regression analysis and an algorithm based on a comparative approach is carried out.
Sparse dictionary learning for resting-state fMRI analysis
NASA Astrophysics Data System (ADS)
Lee, Kangjoo; Han, Paul Kyu; Ye, Jong Chul
2011-09-01
Recently, there has been increased interest in the use of neuroimaging techniques to investigate what happens in the brain at rest. Functional imaging studies have revealed that default-mode network activity is disrupted in Alzheimer's disease (AD). However, there is no consensus, as yet, on the choice of analysis method for the application of resting-state analysis to disease classification. This paper proposes a novel compressed sensing-based resting-state fMRI analysis tool called Sparse-SPM. As the brain's functional systems have been shown to have features of complex networks according to graph-theoretical analysis, we apply a graph model to represent a sparse combination of information flows from a complex-network perspective. In particular, a new concept of a spatially adaptive design matrix is proposed, implemented via sparse dictionary learning. The proposed approach shows better performance compared to conventional methods, such as independent component analysis (ICA) and seed-based approaches, in classifying AD patients from normal controls using resting-state analysis.
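The dictionary-learning step can be sketched with scikit-learn: temporal atoms are learned so that each voxel's time course is a sparse combination of them. The data below are random placeholders, and the hyperparameters are illustrative, not those of Sparse-SPM.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy resting-state data: 150 time points x 400 voxels
rng = np.random.default_rng(5)
data = rng.standard_normal((150, 400))

dico = DictionaryLearning(n_components=20, alpha=1.0, max_iter=50,
                          transform_algorithm="lasso_lars",
                          random_state=0)
codes = dico.fit_transform(data.T)     # (voxels x atoms) sparse codes
atoms = dico.components_               # (atoms x time) temporal dictionary
print("mean nonzero loadings per voxel:", (codes != 0).sum(axis=1).mean())
```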
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2012-01-01
A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, which do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the small-angle modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The results obtained for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was found to be more accurate in cases with coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results in the case of ice cloud.
NASA Astrophysics Data System (ADS)
Peretyagin, Vladimir S.; Korolev, Timofey K.; Chertov, Aleksandr N.
2017-02-01
The problem of dressability of solid minerals attracts the attention of specialists wherever the extraction of mineral raw materials is a significant sector of the economy. A considerable number of mineral ore dressability methods exist; at the moment, radiometric methods are considered the most promising, and one of them is the photoluminescence method. This method is based on the spectral analysis, amplitude and kinetic parameters of mineral luminescence (under UV radiation), as well as on the color parameters of the radiation. The absence of developed scientific and methodological approaches for analyzing the area irradiated by UV radiation, as well as the absence of suitable radiation sources, are factors which hinder the development and use of the photoluminescence method. The present work is devoted to the development of a multi-element UV radiation source designed to solve the problem of analyzing and sorting minerals by their selective luminescence. This article presents a method for the theoretical modeling of radiation devices based on UV LEDs. The models consider factors such as the spectral composition and the spatial and energy parameters of the LEDs. The article also presents the results of experimental studies of several mineral samples.
Analysis of biomolecular solvation sites by 3D-RISM theory.
Sindhikara, Daniel J; Hirata, Fumio
2013-06-06
We derive, implement, and apply equilibrium solvation site analysis for biomolecules. Our method utilizes 3D-RISM calculations to quickly obtain equilibrium solvent distributions without either necessity of simulation or limits of solvent sampling. Our analysis of these distributions extracts highest likelihood poses of solvent as well as localized entropies, enthalpies, and solvation free energies. We demonstrate our method on a structure of HIV-1 protease where excellent structural and thermodynamic data are available for comparison. Our results, obtained within minutes, show systematic agreement with available experimental data. Further, our results are in good agreement with established simulation-based solvent analysis methods. This method can be used not only for visual analysis of active site solvation but also for virtual screening methods and experimental refinement.
The SNPforID Assay as a Supplementary Method in Kinship and Trace Analysis
Schwark, Thorsten; Meyer, Patrick; Harder, Melanie; Modrow, Jan-Hendrick; von Wurmb-Schwark, Nicole
2012-01-01
Objective: Short tandem repeat (STR) analysis using commercial multiplex PCR kits is the method of choice for kinship testing and trace analysis. However, under certain circumstances (deficiency testing, mutations, minute DNA amounts), STRs alone may not suffice. Methods: We present a 50-plex single nucleotide polymorphism (SNP) assay based on the SNPs chosen by the SNPforID consortium as an additional method for paternity and for trace analysis. The new assay was applied to selected routine paternity and trace cases from our laboratory. Results and Conclusions: Our investigation shows that the new SNP multiplex assay is a valuable method to supplement STR analysis, and is a powerful means to solve complicated genetic analyses. PMID:22851934
Frequency-Specific Fractal Analysis of Postural Control Accounts for Control Strategies
Gilfriche, Pierre; Deschodt-Arsac, Véronique; Blons, Estelle; Arsac, Laurent M.
2018-01-01
Diverse indicators of postural control in humans have been explored for decades, mostly based on the trajectory of the center of pressure. Classical approaches focus on variability, based on the notion that if a posture is too variable, the subject is not stable. Going deeper, an improved understanding of the underlying physiology has been gained from studying variability in different frequency ranges, pointing to specific short loops (proprioception) and long loops (visuo-vestibular) in neural control. More recently, fractal analyses have proliferated and become useful additional metrics of postural control. They have allowed the identification of two scaling phenomena, in short and long timescales respectively. Here, we show that one of the most widely used methods for fractal analysis, Detrended Fluctuation Analysis, can be enhanced to account for scalings in specific frequency ranges. By computing and filtering a bank of synthetic fractal signals, we established how scaling analysis can be focused on specific frequency components. We call the resulting method Frequency-specific Fractal Analysis (FsFA) and use it to associate the two scaling phenomena of postural control with a proprioception-based control loop and a visuo-vestibular-based control loop. Convincing evidence of the method's validity then came from an application to the study of unaltered vs. altered postural control in athletes. Overall, the analysis suggests that at least two timescales contribute to postural control: velocity-based control on short timescales relying on proprioceptive sensors, and position-based control on longer timescales with visuo-vestibular sensors, which is a new view of postural control. Frequency-specific scaling exponents are promising markers of control strategies in humans. PMID:29643816
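For reference, plain DFA, the method the authors extend, fits the log-log slope of detrended fluctuation size against window size; FsFA would additionally band-filter the signal before this step. A compact numpy sketch:

```python
import numpy as np

def dfa(x, scales):
    """Plain detrended fluctuation analysis: returns the scaling
    exponent alpha from a log-log fit of fluctuation vs. window size."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    flucts = []
    for s in scales:
        n_win = len(y) // s
        F = 0.0
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            F += np.mean((seg - trend) ** 2)
        flucts.append(np.sqrt(F / n_win))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(6)
print(dfa(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256]))
# White noise gives alpha ~ 0.5; persistent signals give larger alpha.
```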
Displacement-based back-analysis of the model parameters of the Nuozhadu high earth-rockfill dam.
Wu, Yongkang; Yuan, Huina; Zhang, Bingyin; Zhang, Zongliang; Yu, Yuzhen
2014-01-01
The parameters of the constitutive model, the creep model, and the wetting model of materials of the Nuozhadu high earth-rockfill dam were back-analyzed together based on field monitoring displacement data by employing an intelligent back-analysis method. In this method, an artificial neural network is used as a substitute for time-consuming finite element analysis, and an evolutionary algorithm is applied for both network training and parameter optimization. To avoid simultaneous back-analysis of many parameters, the model parameters of the three main dam materials are decoupled and back-analyzed separately in a particular order. Displacement back-analyses were performed at different stages of the construction period, with and without considering the creep and wetting deformations. Good agreement between the numerical results and the monitoring data was obtained for most observation points, which implies that the back-analysis method and decoupling method are effective for solving complex problems with multiple models and parameters. The comparison of calculation results based on different sets of back-analyzed model parameters indicates the necessity of taking the effects of creep and wetting into consideration in the numerical analyses of high earth-rockfill dams. With the resulting model parameters, the stress and deformation distributions at completion are predicted and analyzed.
Feature-space-based FMRI analysis using the optimal linear transformation.
Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S
2010-09-01
The optimal linear transformation (OLT), an image analysis technique of feature space, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches of fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed for the method to be verified and compared with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.
Yong, A.; Hough, S.E.; Cox, B.R.; Rathje, E.M.; Bachhuber, J.; Dulberg, R.; Hulslander, D.; Christiansen, L.; Abrams, M.J.
2011-01-01
We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely sensed DEMs and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, Vs30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods to the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available Vs30 values. A comparison of results from the imagery-based methods to results from traditional geology-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful for refining detailed site maps with sparse local data. © 2011 American Society for Photogrammetry and Remote Sensing.
Molecular analysis of single oocyst of Eimeria by whole genome amplification (WGA) based nested PCR.
Wang, Yunzhou; Tao, Geru; Cui, Yujuan; Lv, Qiyao; Xie, Li; Li, Yuan; Suo, Xun; Qin, Yinghe; Xiao, Lihua; Liu, Xianyong
2014-09-01
PCR-based molecular tools are widely used for the identification and characterization of protozoa. Here we report the molecular analysis of Eimeria species using combined methods of whole genome amplification (WGA) and nested PCR. A single oocyst of Eimeria stiedai or Eimeria media was used directly for random amplification of the genomic DNA with either primer extension preamplification (PEP) or multiple displacement amplification (MDA), and then the WGA product was used as the template in nested PCR with species-specific primers for the ITS-1, 18S rDNA and 23S rDNA of E. stiedai and E. media. WGA-based PCR was successful in amplifying these genes from a single oocyst. For the species identification of single oocysts isolated from mixed E. stiedai and E. media, the results from WGA-based PCR were exactly in accordance with those from morphological identification, suggesting the suitability of this method for the molecular analysis of eimerian parasites at the single-oocyst level. The WGA-based PCR method can also be applied to the identification and genetic characterization of other protists. Copyright © 2014 Elsevier Inc. All rights reserved.
GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.
Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua
2018-06-19
Multiple-marker analysis of genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra-high dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to a large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named iterative nonlocal prior based selection for GWAS, or GWASinlps, that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of a 'structured screen-and-select' strategy, which considers hierarchical screening based not only on response-predictor associations but also on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide an empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as the phenotype. An R package for implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donnelly, H.; Fullwood, R.; Glancy, J.
This is the second volume of a two-volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses the Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used. Introductory material on Path Analysis is given in Chapters 3.2.1 and 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations of identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with equations given in Volume I, Chapter 3.
A Novel Method for Block Size Forensics Based on Morphological Operations
NASA Astrophysics Data System (ADS)
Luo, Weiqi; Huang, Jiwu; Qiu, Guoping
Passive forensic analysis aims to find out how multimedia data were acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensic analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results, evaluated on over 1300 natural images, show the effectiveness of our proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
Exploiting salient semantic analysis for information retrieval
NASA Astrophysics Data System (ADS)
Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui
2016-11-01
Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representations for words or documents. However, its feasibility and effectiveness in information retrieval are mostly unknown. In this paper, we study how to use SSA efficiently to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations are used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard text retrieval conference (TREC) collections. Experimental results show that the proposed models consistently outperform existing Wikipedia-based retrieval methods.
Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed
2017-01-01
Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for feature extraction, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%. The proposed method, by contrast, predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
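The t-test feature-selection stage is straightforward to sketch: features whose class-conditional means differ significantly are kept. Below, random matrices stand in for CNN-derived EEG features, and the significance threshold is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.stats import ttest_ind

# Toy feature matrices for two stimulus classes (trials x features);
# in the study these would be CNN-derived features from multichannel EEG.
rng = np.random.default_rng(7)
class_a = rng.standard_normal((60, 512))
class_b = rng.standard_normal((60, 512)) + 0.3    # small mean shift

t_vals, p_vals = ttest_ind(class_a, class_b, axis=0)
selected = np.where(p_vals < 0.01)[0]             # keep significant features
print(f"kept {selected.size} of 512 features")
```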
Evaluation of aortic contractility based on analysis of CT images of the heart
NASA Astrophysics Data System (ADS)
DzierŻak, RóŻa; Maciejewski, Ryszard; Uhlig, Sebastian
2017-08-01
The paper presents a method to assess aortic contractility based on the analysis of CT images of the heart. This is an alternative method that can be used for patients who cannot be examined using echocardiography. A medical imaging application for DICOM file processing allows the aortic cross-section to be evaluated during systole and diastole, making it possible to assess the level of aortic contractility.
NASA Astrophysics Data System (ADS)
Lu, Siqi; Wang, Xiaorong; Wu, Junyong
2018-01-01
The paper presents a data-driven method, based on the K-means clustering algorithm, for generating planning scenarios for the location and size planning of distributed photovoltaic (PV) units in a network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV and the voltage offset as objectives, and the locations and sizes of the distributed PV units as decision variables, the Pareto optimal front is obtained with a self-adaptive genetic algorithm (GA), and the solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected, after detailed analysis, according to different planning emphases. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
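TOPSIS itself is a short computation: normalize the decision matrix, weight it, and score each alternative by its relative closeness to the ideal solution. A self-contained sketch with made-up criteria values standing in for the planning objectives above:

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    decision: (alternatives x criteria); benefit[j] is True if larger is
    better for criterion j (e.g., PV profit) and False if smaller is
    better (e.g., losses, cost, voltage offset)."""
    norm = decision / np.linalg.norm(decision, axis=0)   # vector-normalize
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                       # higher is better

# 4 candidate plans x 4 criteria: losses, cost, profit, voltage offset
plans = np.array([[1.2, 30, 8.0, 0.02],
                  [1.0, 35, 8.5, 0.03],
                  [1.5, 25, 7.0, 0.01],
                  [0.9, 40, 9.0, 0.04]])
score = topsis(plans, weights=np.array([0.3, 0.25, 0.25, 0.2]),
               benefit=np.array([False, False, True, False]))
print("ranking (best first):", np.argsort(score)[::-1])
```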
Research on response spectrum of dam based on scenario earthquake
NASA Astrophysics Data System (ADS)
Zhang, Xiaoliang; Zhang, Yushan
2017-10-01
Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. First, the potential seismic source zone contributing most to the site hazard is determined on the basis of the results of probabilistic seismic hazard analysis (PSHA). Second, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and the historical earthquakes of the potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The response spectrum based on the scenario-earthquake method is lower than the probability-consistent response spectrum obtained by the PSHA method. The empirical analysis shows that the scenario-earthquake response spectrum considers both the probability level and the structural factors, and combines the advantages of the deterministic and probabilistic seismic hazard analysis methods. It is easy to accept and provides a basis for the seismic engineering of hydraulic projects.
Orms, Natalie; Rehn, Dirk R; Dreuw, Andreas; Krylov, Anna I
2018-02-13
Density-based wave function analysis enables unambiguous comparisons of the electronic structure computed by different methods and removes the ambiguity of orbital choices. We use this tool to investigate the performance of different spin-flip methods for several prototypical diradicals and triradicals. In contrast to previous calibration studies that focused on energy gaps between high- and low-spin states, we focus on the properties of the underlying wave functions, such as the number of effectively unpaired electrons. Comparison of different density functional and wave function theory results provides insight into the performance of the different methods when applied to strongly correlated systems such as polyradicals. We show that canonical molecular orbitals for species like large copper-containing diradicals fail to correctly represent the underlying electronic structure due to highly non-Koopmans character, while density-based analysis of the same wave function delivers a clear picture of the bonding pattern.
Leclercq, L; Laurent, C; De Pauw, E
1997-05-15
A method was developed for the analysis of 7-(2-hydroxyethyl)guanine (7HEG), the major DNA adduct formed after exposure to ethylene oxide (EO). The method is based on DNA neutral thermal hydrolysis, adduct micro-concentration, and final characterization and quantification by HPLC coupled to single-ion monitoring electrospray mass spectrometry (HPLC/SIR-ESMS). The method was found to be selective, sensitive, and easy to handle, with no need for enzymatic digestion or prior sample derivatization. The detection limit was found to be close to 1 fmol of injected adduct (10^-10 M), thus allowing the detection of approximately three modified bases per 10^8 intact nucleotides in blood sample analysis. Quantification results are shown for 7HEG after calf thymus DNA and blood exposure to various doses of EO, in both cases obtaining clear dose-response relationships.
Visibility Graph Based Time Series Analysis.
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, which consequently provide limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
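For readers unfamiliar with the construction, here is a minimal sketch of the natural visibility graph mapping that underlies the method; it builds the graph for one series segment, while the temporal linking of successive segment graphs described in the abstract is left out:

```python
import numpy as np

def natural_visibility_graph(y):
    """Adjacency matrix of the natural visibility graph of a series segment.
    Samples (i, y[i]) and (j, y[j]) are connected if every sample between
    them lies strictly below the straight line joining them."""
    n = len(y)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = y[j] + (y[i] - y[j]) * (j - k) / (j - i)
            if np.all(y[k] < line):
                A[i, j] = A[j, i] = True
    return A

segment = np.random.randn(200)
print(natural_visibility_graph(segment).sum() // 2)   # number of visibility links
```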
A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.
Xue, Xiaoming; Zhou, Jianzhong
2017-01-01
To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis and artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the execution of the proposed method is divided into three steps: fault preliminary detection, fault type recognition, and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made with a statistical analysis method based on permutation entropy. If a fault exists, the following two processes, based on artificial intelligence approaches, are performed to further recognize the fault type and then identify the fault degree. For these two subsequent steps, mixed-domain state features containing time-domain, frequency-domain, and multi-scale features are extracted to represent the fault characteristics under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method is employed to obtain the multi-scale features. Furthermore, because of information redundancy and the submergence of the original feature space, a novel manifold learning method (modified LGPCA) is introduced to realize low-dimensional representations of the high-dimensional feature space. Finally, two cases with 12 working conditions each were employed to evaluate the performance of the proposed method, with vibration signals measured from an experimental rolling element bearing test bench. The analysis results showed the effectiveness and superiority of the proposed method, whose diagnostic scheme is well suited to practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
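The preliminary detection step relies on permutation entropy; a compact sketch of the standard estimator follows (the order, delay, and decision threshold used in the paper are not specified here, so these parameters are placeholders):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal (0 = regular, 1 = random)."""
    n = len(x) - (order - 1) * delay
    # Ordinal pattern (ranking of values) of each embedded window.
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(order))

# The paper's statistical decision rule would compare values like these.
print(permutation_entropy(np.random.randn(2048)))        # close to 1 for noise
print(permutation_entropy(np.sin(np.arange(2048) / 5)))  # much lower for a tone
```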
Image preprocessing study on KPCA-based face recognition
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Dehua
2015-12-01
Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition, focusing on the related theory and key technology of various preprocessing methods in the face detection process and, using the KPCA method, on the recognition results obtained under different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with the opening and closing operations of erosion and dilation and with an illumination compensation method, and then apply face recognition based on kernel principal component analysis; experiments were carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel-based PCA algorithm makes the extracted features represent the original image information better because of its nonlinear feature extraction, which yields a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results, and thus different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
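A minimal end-to-end sketch of the recognition stage using scikit-learn's KernelPCA with a polynomial kernel; the image sizes, component count, kernel degree, and random stand-in data are all illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: rows are vectorized, preprocessed face images.
X_train = np.random.rand(100, 64 * 64)
y_train = np.random.randint(0, 10, 100)    # identity labels
X_test = np.random.rand(20, 64 * 64)

# Polynomial kernel: the abstract notes its degree affects recognition.
kpca = KernelPCA(n_components=40, kernel="poly", degree=2)
Z_train = kpca.fit_transform(X_train)
Z_test = kpca.transform(X_test)

pred = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train).predict(Z_test)
```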
Classification of cassava genotypes based on qualitative and quantitative data.
Oliveira, E J; Oliveira Filho, O S; Santos, V S
2015-02-02
We evaluated the genetic variation of cassava accessions based on qualitative (binomial and multicategorical) and quantitative (continuous) traits. We characterized 95 accessions obtained from the Cassava Germplasm Bank of Embrapa Mandioca e Fruticultura, evaluating them for 13 continuous, 10 binary, and 25 multicategorical traits. First, we analyzed the accessions based only on quantitative traits; next, we conducted a joint analysis (qualitative and quantitative traits) based on the Ward-MLM method, which performs clustering in two stages. According to the pseudo-F, pseudo-t², and maximum likelihood criteria, we identified five and four groups based on the quantitative-trait and joint analyses, respectively. The smaller number of groups identified by the joint analysis may be related to the nature of the data. Quantitative data are more subject to environmental effects on phenotype expression, which can produce differences even in the absence of genetic differences, thereby contributing to greater differentiation among accessions. For most of the accessions, the maximum probability of classification was >0.90, independent of the traits analyzed, indicating a good fit of the clustering method. Differences in clustering according to the type of data imply that analyses of quantitative and qualitative traits in cassava germplasm may explore different genomic regions. On the other hand, when the joint analysis was used, the means and ranges of the genetic distances were high, indicating that the Ward-MLM method is very useful for clustering genotypes when several phenotypic traits are available, as is the case for genetic resources and breeding programs.
The Five Star Method: A Relational Dream Work Methodology
ERIC Educational Resources Information Center
Sparrow, Gregory Scott; Thurston, Mark
2010-01-01
This article presents a systematic method of dream work called the Five Star Method. Based on cocreative dream theory, which views the dream as the product of the interaction between dreamer and dream, this creative intervention shifts the principal focus in dream analysis from the interpretation of static imagery to the analysis of the dreamer's…
Matsumoto, Hirotaka; Kiryu, Hisanori
2016-06-08
Single-cell technologies make it possible to quantify the comprehensive states of individual cells and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to analyze single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method, SCOUP, to elucidate the differentiation process. Unlike previous dimension-reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and its cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also estimates cell lineage more accurately than previous methods, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to single-cell RNA-seq data and detect a candidate key regulator of differentiation, as well as clusters in a correlation network that are not detected with conventional correlation analysis. In summary, we developed a stochastic process-based method, SCOUP, to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods, and we also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and is available at https://github.com/hmatsu1226/SCOUP.
Wavelet-based image analysis system for soil texture analysis
NASA Astrophysics Data System (ADS)
Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John
2003-05-01
Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
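A rough sketch of the feature pipeline: subband energies from a 2-D wavelet decomposition (here the decimated transform from PyWavelets stands in for the paper's wavelet frames) followed by Gaussian maximum-likelihood assignment; class means and covariances would come from training images:

```python
import numpy as np
import pywt  # PyWavelets

def texture_features(image, wavelet="db4", levels=3):
    """Mean energy of each wavelet subband as a texture descriptor."""
    coeffs = pywt.wavedec2(np.asarray(image, float), wavelet, level=levels)
    feats = [np.mean(coeffs[0] ** 2)]                 # approximation energy
    for cH, cV, cD in coeffs[1:]:                     # detail subbands per level
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.asarray(feats)

def classify_max_likelihood(feats, class_means, class_covs):
    """Assign to the class with the highest Gaussian log-likelihood."""
    logl = [-0.5 * (feats - m) @ np.linalg.solve(C, feats - m)
            - 0.5 * np.linalg.slogdet(C)[1]
            for m, C in zip(class_means, class_covs)]
    return int(np.argmax(logl))
```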
Tada, Atsuko; Ishizuki, Kyoko; Yamazaki, Takeshi; Sugimoto, Naoki; Akiyama, Hiroshi
2014-07-01
Natural ester-type gum bases, which are used worldwide as food additives, mainly consist of wax esters composed of long-chain fatty acids and long-chain fatty alcohols. There are many varieties of ester-type gum bases, and thus a useful method for their discrimination is needed in order to establish official specifications and manage their quality control. Herein is reported a rapid and simple method for the analysis of different ester-type gum bases used as food additives by high-temperature gas chromatography/mass spectrometry (GC/MS). With this method, the constituent wax esters in ester-type gum bases can be detected without hydrolysis and derivatization. The method was applied to the determination of 10 types of gum bases, including beeswax, carnauba wax, lanolin, and jojoba wax, and it was demonstrated that the gum bases derived from identical origins have specific and characteristic total ion chromatogram (TIC) patterns and ester compositions. Food additive gum bases were thus distinguished from one another based on their TIC patterns and then more clearly discriminated using simultaneous monitoring of the fragment ions corresponding to the fatty acid moieties of the individual molecular species of the wax esters. This direct high-temperature GC/MS method was shown to be very useful for the rapid and simple discrimination of varieties of ester-type gum bases used as food additives.
RADIOISOTOPES USED IN PHARMACY. 5. IONIZING RADIATION IN PHARMACEUTICAL ANALYSIS (in Danish)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kristensen, K.
1962-09-01
The use of radioisotope methods for analyzing drugs is reviewed. It is pointed out that heretofore most methods have been based on isotope dilution principles whereas in the future radioactivation analysis, especially with neutron sources, offers great possibilities. (BBB)
Chemical properties and methods of analysis of refractory compounds
NASA Technical Reports Server (NTRS)
Samsonov, G. V. (Editor); Frantsevich, I. N. (Editor); Yeremenko, V. N. (Editor); Nazarchuk, T. N. (Editor); Popova, O. I. (Editor)
1978-01-01
Reactions involving refractory metals and the alloys based on them are discussed. Chemical, electrochemical, photometric, spectrophotometric, and X-ray analysis are among the methods described for analyzing the results of the reactions and for determining the chemical properties of these materials.
Improving Cluster Analysis with Automatic Variable Selection Based on Trees
2014-12-01
regression trees; Daisy, DISsimilAritY; PAM, partitioning around medoids; PMA, penalized multivariate analysis; SPC, sparse principal components; UPGMA, unweighted pair-group average method. The UPGMA method measures dissimilarities between all objects in two clusters and takes the average value.
Efficacy of Virtual Patients in Medical Education: A Meta-Analysis of Randomized Studies
ERIC Educational Resources Information Center
Consorti, Fabrizio; Mancuso, Rosaria; Nocioni, Martina; Piccolo, Annalisa
2012-01-01
A meta-analysis was performed to assess the Effect Size (ES) from randomized studies comparing the effect of educational interventions in which Virtual patients (VPs) were used either as an alternative method or additive to usual curriculum versus interventions based on more traditional methods. Meta-analysis was designed, conducted and reported…
The assessment of vulnerability to natural disasters in China by using the DEA method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei Yiming; Fan Ying; Lu Cong
2004-05-01
China has been greatly affected by natural disasters, so it is of great importance to analyze the impact of natural disasters on the national economy. Usually, the frequency of disasters or the absolute loss they inflict is considered first, while the capability of regions to overcome disasters is ignored. The concept of vulnerability is used to measure the capability to overcome disasters in regions with distinctive economies. Traditional methods for vulnerability analysis calculate sub-indices based on disaster frequency, loss, economic impact, and the population of each region, and then add the sub-indices to get a composite index of regional vulnerability. But these methods are sensitive to the weights selected for the sub-indices when multiple indices are added up to form an index of total vulnerability, and the analytic results are less convincing because of the subjectivity of the different weighting methods. A data envelopment analysis (DEA)-based model for analyzing regional vulnerability to natural disasters is presented here to improve upon the traditional approach. This paper systematically describes the DEA method for evaluating the relative severity of disasters in each region. A model for regional vulnerability analysis is developed based on annual governmental statistics from 1989 to 2000. The regional vulnerabilities in China's mainland are illustrated as a case study, and a new method for the classification of regional vulnerability to natural disasters in China is proposed.
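For concreteness, a sketch of the input-oriented CCR efficiency score (multiplier form) solved as a linear program with SciPy; the choice of regional inputs and outputs is an assumption for illustration, not the paper's specification:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0, multiplier form.
    X: (n_units, n_inputs), Y: (n_units, n_outputs), all positive."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[j0], np.zeros(m)])             # maximize u . y0
    A_ub = np.hstack([Y, -X])                             # u . yj - v . xj <= 0
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None, :]  # v . x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (s + m))
    return -res.fun                                       # efficiency in (0, 1]

# Hypothetical regional data: inputs = [disaster frequency, direct loss],
# output = [gross regional product]; a higher score means less vulnerable here.
X = np.array([[3.0, 2.5], [1.5, 1.0], [2.0, 3.5]])
Y = np.array([[10.0], [8.0], [6.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
```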
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through an optimization strategy embedded within the analysis procedure. Three numerical examples, with a progressive relationship in terms of both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy, and efficiency of the proposed method.
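A toy sketch of the Chebyshev-surrogate idea for the interval part of the problem: fit a cheap polynomial to a response sampled at Chebyshev nodes, then scan the surrogate to bound the response over the interval. The closed-form frequency function below is a stand-in for the actual finite element solver, and all numbers are hypothetical:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def frequency(b):
    """Stand-in response; in practice a finite element natural-frequency solve."""
    return np.sqrt(b) / (2.0 * np.pi)

b_lo, b_hi = 0.8, 1.2                                # interval input, e.g. creep-degraded stiffness
t = np.cos((2 * np.arange(1, 10) - 1) * np.pi / 18)  # 9 Chebyshev nodes in [-1, 1]
b_nodes = 0.5 * (b_hi + b_lo) + 0.5 * (b_hi - b_lo) * t
coef = C.chebfit(t, frequency(b_nodes), deg=4)       # cheap surrogate of the response

grid = np.linspace(-1.0, 1.0, 2001)                  # scan the surrogate, not the solver
vals = C.chebval(grid, coef)
print(vals.min(), vals.max())                        # bounds of the frequency over the interval
```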
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-07-01
We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
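As background for the detrending operations analyzed here, a compact first-order DFA implementation; the scaling exponent is the slope of log F(n) versus log n:

```python
import numpy as np

def dfa(x, scales, order=1):
    """Root-mean-square fluctuation F(n) of detrended fluctuation analysis."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        ms = [np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
              for seg in segs]                    # detrend each window
        F.append(np.sqrt(np.mean(ms)))
    return np.asarray(F)

x = np.random.randn(10000)
scales = np.unique(np.logspace(1, 3, 20).astype(int))
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(alpha)    # about 0.5 for white noise
```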
Use of Language Sample Analysis by School-Based SLPs: Results of a Nationwide Survey
ERIC Educational Resources Information Center
Pavelko, Stacey L.; Owens, Robert E., Jr.; Ireland, Marie; Hahs-Vaughn, Debbie L.
2016-01-01
Purpose: This article examines use of language sample analysis (LSA) by school-based speech-language pathologists (SLPs), including characteristics of language samples, methods of transcription and analysis, barriers to LSA use, and factors affecting LSA use, such as American Speech-Language-Hearing Association certification, number of years'…
Yang, Ze-Hui; Zheng, Rui; Gao, Yuan; Zhang, Qiang
2016-09-01
With the widespread application of high-throughput technology, numerous meta-analysis methods have been proposed for differential expression profiling across multiple studies. We identified the differentially expressed (DE) genes that contributed to lung adenocarcinoma (ADC) clustering based on seven popular meta-analysis methods. Seven microarray expression profiles of ADC and normal controls were extracted from the ArrayExpress database. Bioconductor was used for preliminary data preprocessing. Then, DE genes across the multiple studies were identified. Hierarchical clustering was applied to compare the classification performance on the microarray data samples, and classification efficiency was compared in terms of accuracy, sensitivity, and specificity. Across the seven datasets, 573 ADC cases and 222 normal controls were collected. After filtering out unexpressed and noninformative genes, 3688 genes remained for further analysis. The classification efficiency analysis showed that DE genes identified by the sum-of-ranks method separated ADC from normal controls with the best accuracy, sensitivity, and specificity of 0.953, 0.969, and 0.932, respectively. The gene set with the highest classification accuracy mainly participated in the regulation of response to external stimulus (P = 7.97E-04), cyclic nucleotide-mediated signaling (P = 0.01), regulation of cell morphogenesis (P = 0.01), and regulation of cell proliferation (P = 0.01). Evaluating the classification efficiency of DE genes identified by different meta-analysis methods provides a new perspective on choosing a suitable method for a given application. Different meta-analysis methods exhibit different strengths, so the choice should be weighed for each particular study. © 2015 John Wiley & Sons Ltd.
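The sum-of-ranks combination singled out above reduces to a few lines of NumPy; the per-study statistic (for example, an absolute t-score) and the candidate-list cutoff are assumptions for illustration:

```python
import numpy as np

def sum_of_ranks(stats):
    """Combine per-study evidence by summing within-study ranks.
    stats: (n_genes, n_studies), larger value = stronger DE evidence."""
    ranks = (-stats).argsort(axis=0).argsort(axis=0) + 1  # rank 1 = strongest
    return ranks.sum(axis=1)                              # small sum = consistent DE

stats = np.abs(np.random.randn(3688, 7))     # stand-in |t|-scores for 7 studies
top = np.argsort(sum_of_ranks(stats))[:100]  # candidate DE genes for clustering
```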
2014-01-01
Background Leptotrombidium pallidum and Leptotrombidium scutellare are the major vector mites for Orientia tsutsugamushi, the causative agent of scrub typhus. Before these organisms can be subjected to whole-genome sequencing, it is necessary to estimate their genome sizes to obtain basic information for establishing the strategies that should be used for genome sequencing and assembly. Method The genome sizes of L. pallidum and L. scutellare were estimated by a method based on quantitative real-time PCR. In addition, a k-mer analysis of the whole-genome sequences obtained through Illumina sequencing was conducted to verify the mutual compatibility and reliability of the results. Results The genome sizes estimated using qPCR were 191 ± 7 Mb for L. pallidum and 262 ± 13 Mb for L. scutellare. The k-mer analysis-based genome lengths were estimated to be 175 Mb for L. pallidum and 286 Mb for L. scutellare. The estimates from these two independent methods were mutually complementary and within a similar range to those of other Acariform mites. Conclusions The estimation method based on qPCR appears to be a useful alternative when the standard methods, such as flow cytometry, are impractical. The relatively small estimated genome sizes should facilitate whole-genome analysis, which could contribute to our understanding of Arachnida genome evolution and provide key information for scrub typhus prevention and mite vector competence. PMID:24947244
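As a rough illustration of the k-mer side of the estimate, a histogram-based calculation is sketched below (total k-mer volume divided by the homozygous coverage peak); the exact estimator and error-peak handling used in the study may differ:

```python
import numpy as np

def genome_size_from_kmer_histogram(counts):
    """counts[d] = number of distinct k-mers observed d times in the reads.
    Estimate: total k-mer volume divided by the homozygous coverage peak."""
    depth = np.arange(len(counts))
    total = np.sum(depth * counts)
    peak = depth[3:][np.argmax(counts[3:])]   # skip the low-depth error peak
    return total / peak
```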
Chen, Xiaoxia; Zhao, Jing; Chen, Tianshu; Gao, Tao; Zhu, Xiaoli; Li, Genxi
2018-01-01
Comprehensive analysis of the expression level and location of tumor-associated membrane proteins (TMPs) is of vital importance for the profiling of tumor cells. Currently, two kinds of independent techniques, i.e., ex situ detection and in situ imaging, are usually required for the quantification and localization of TMPs, respectively, resulting in some inevitable problems. Methods: Herein, based on a well-designed and fluorophore-labeled DNAzyme, we develop an integrated and facile method in which imaging and quantification of TMPs in situ are achieved simultaneously in a single system. The labeled DNAzyme not only produces localized fluorescence for the visualization of TMPs but also catalyzes the cleavage of a substrate to produce quantitative fluorescent signals that can be collected from solution for the sensitive detection of TMPs. Results: Results from the DNAzyme-based in situ imaging and quantification of TMPs match well with traditional immunofluorescence and western blotting. In addition to its two-in-one character, the DNAzyme-based method is highly sensitive, allowing the detection of TMPs in only 100 cells. Moreover, the method is nondestructive: cells retain their physiological activity after analysis and can be cultured for other applications. Conclusion: The integrated system provides solid results for both imaging and quantification of TMPs, making it competitive with some traditional techniques for the analysis of TMPs and offering potential application as a toolbox in the future.
NASA Astrophysics Data System (ADS)
Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku
2012-03-01
This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Because SPNs and other chest structures such as blood vessels have similar intensities, nodule detection methods generate many false positives (FPs). To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. A fine segmentation is then performed to delineate each nodule candidate's region more accurately. FP reduction is mainly addressed by investigating two neighborhood features, based on the volume ratio and the eigenvector of the Hessian, that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images: 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database, LIDC. The experimental results revealed that the average TP rate of the proposed method was 93.6%, with 12.3 FPs/case.
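A simplified 2-D stand-in for the BSE filter using scikit-image's Hessian utilities; the blobness score here (geometric mean of the magnitudes of two negative-curvature eigenvalues) is an illustrative choice rather than the paper's exact filter, which operates on 3-D CT volumes:

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def blobness(image, sigma=2.0):
    """Bright-blob score for a 2-D slice: large where both Hessian
    eigenvalues are strongly negative (dome-like intensity peaks)."""
    H = hessian_matrix(image, sigma=sigma, order="rc")
    l1, l2 = hessian_matrix_eigvals(H)          # l1 >= l2 at every pixel
    score = np.zeros_like(image, dtype=float)
    mask = (l1 < 0) & (l2 < 0)                  # negative curvature both ways
    score[mask] = np.sqrt(l1[mask] * l2[mask])  # geometric mean of magnitudes
    return score                                # threshold for nodule candidates
```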
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated using multiscale and scaling analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method is proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In the light of the wavelet transformation of fractal measures, we demonstrate that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we present a novel local singularity analysis (LSA) using the WMD algorithm, which extends conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA is validated in a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with LSA implemented using the moving-averaging method, the novel LSA using WMD more clearly identified weak geochemical anomalies associated with mineralization in the covered area.
Zanchetti Meneghini, Leonardo; Rübensam, Gabriel; Claudino Bica, Vinicius; Ceccon, Amanda; Barreto, Fabiano; Flores Ferrão, Marco; Bergold, Ana Maria
2014-01-01
A simple and inexpensive method based on solvent extraction followed by low-temperature clean-up was applied for the determination of seven pyrethroid residues in bovine raw milk using gas chromatography coupled to tandem mass spectrometry (GC-MS/MS) and gas chromatography with electron-capture detection (GC-ECD). The sample extraction procedure was established through the evaluation of seven different extraction protocols, assessed in terms of analyte recovery and cleanup efficiency. Sample preparation optimization was based on a Doehlert design using fifteen runs with three different variables. Response surface methodologies and polynomial analysis were used to define the best extraction conditions. Method validation was carried out based on the SANCO guide parameters and assessed by multivariate analysis. Method performance was considered satisfactory, since mean recoveries were between 87% and 101% at three distinct concentrations. Accuracy and precision were within ±20%, with no significant differences (p < 0.05) between results obtained by the GC-ECD and GC-MS/MS techniques. The method has been applied to routine analysis for the determination of pyrethroid residues in bovine raw milk in the Brazilian National Residue Control Plan since 2013, in which a total of 50 samples have been analyzed. PMID:25380457
Complexity analysis based on generalized deviation for financial markets
NASA Astrophysics Data System (ADS)
Li, Chao; Shang, Pengjian
2018-03-01
In this paper, a new modified method, complexity analysis based on generalized deviation, is proposed as a measure to investigate the correlation between past price and future volatility in financial time series. In comparison with the earlier retarded volatility model, the new approach is both simple and computationally efficient. The method, built on the generalized deviation function, offers an exhaustive way of quantifying the rules of the financial market. The robustness of the method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to fit the volatility curves and quantify the changes of complexity in stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile and calm periods. The data analysis with the experimental model shows that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.
Xie, Xin-Ping; Xie, Yu-Feng; Wang, Hong-Qiang
2017-08-23
Large-scale accumulation of omics data poses a pressing challenge for the integrative analysis of multiple data sets in bioinformatics. An open question in such integrative analysis is how to pinpoint consistent but subtle gene activity patterns across studies; study heterogeneity needs to be addressed carefully for this goal. This paper proposes a regulation-probability-model-based meta-analysis, jGRP, for identifying differentially expressed genes (DEGs). The method integrates multiple transcriptomics data sets in a gene regulatory space instead of a gene expression space, which makes it easy to capture and manage data heterogeneity across studies from different laboratories or platforms. Specifically, we transform gene expression profiles into a unified gene regulation profile across studies by mathematically defining two gene regulation events between two conditions and estimating their probabilities of occurring in a sample. Finally, a novel differential expression statistic is established based on the gene regulation profiles, realizing accurate and flexible identification of DEGs in gene regulation space. We evaluated the proposed method on simulated data and real-world cancer datasets and showed the effectiveness and efficiency of jGRP in identifying DEGs in the context of meta-analysis. Data heterogeneity strongly influences the performance of meta-analysis for DEG identification, and existing meta-analysis methods were found to exhibit very different degrees of sensitivity to study heterogeneity. The proposed method, jGRP, can serve as a standalone tool owing to its unified framework and controllable way of dealing with study heterogeneity.
Lee, Ga-Young; Kim, Jeonghun; Kim, Ju Han; Kim, Kiwoong; Seong, Joon-Kyung
2014-01-01
Mobile healthcare applications are a growing trend, as is the prevalence of dementia in modern society. Among the degenerative brain diseases that cause dementia, Alzheimer disease (AD) is the most common. The purpose of this study was to identify AD patients from magnetic resonance imaging in a mobile environment. We propose an incremental classification approach for mobile healthcare systems, based on incremental learning for AD diagnosis and prediction using cortical thickness data and hippocampal shape. We constructed a classifier based on principal component analysis and linear discriminant analysis, operating in two phases: initial learning and mobile subject classification. Initial learning is the group-learning part performed on our server; our smartphone agent implements the mobile classification and displays the results. With cortical thickness data analysis alone, the discrimination accuracy was 87.33% (sensitivity 96.49% and specificity 64.33%). When cortical thickness data and hippocampal shape were analyzed together, the achieved accuracy was 87.52% (sensitivity 96.79% and specificity 63.24%). In summary, we presented a classification method based on online learning for AD diagnosis employing both cortical thickness data and hippocampal shape analysis data; implemented on smartphone devices, it discriminated AD patients from the normal group.
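The server-side learning and phone-side classification split can be sketched with scikit-learn; the feature dimensions, component count, and random stand-in data below are assumptions, not the study's values:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in features: cortical thickness values concatenated with
# hippocampal shape descriptors, one row per subject.
X = np.random.randn(200, 500)
y = np.random.randint(0, 2, 200)            # 0 = normal, 1 = AD

pca = PCA(n_components=30).fit(X)           # server-side initial learning
lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)

def classify_on_phone(x_new):
    """Mobile-side step: two cheap matrix projections per subject."""
    return lda.predict(pca.transform(x_new.reshape(1, -1)))[0]

print(classify_on_phone(np.random.randn(500)))
```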
EBprot: Statistical analysis of labeling-based quantitative proteomics data.
Koh, Hiromi W L; Swa, Hannah L F; Fermin, Damian; Ler, Siok Ghee; Gunaratne, Jayantha; Choi, Hyungwon
2015-08-01
Labeling-based proteomics is a powerful method for detection of differentially expressed proteins (DEPs). The current data analysis platform typically relies on protein-level ratios, which is obtained by summarizing peptide-level ratios for each protein. In shotgun proteomics, however, some proteins are quantified with more peptides than others, and this reproducibility information is not incorporated into the differential expression (DE) analysis. Here, we propose a novel probabilistic framework EBprot that directly models the peptide-protein hierarchy and rewards the proteins with reproducible evidence of DE over multiple peptides. To evaluate its performance with known DE states, we conducted a simulation study to show that the peptide-level analysis of EBprot provides better receiver-operating characteristic and more accurate estimation of the false discovery rates than the methods based on protein-level ratios. We also demonstrate superior classification performance of peptide-level EBprot analysis in a spike-in dataset. To illustrate the wide applicability of EBprot in different experimental designs, we applied EBprot to a dataset for lung cancer subtype analysis with biological replicates and another dataset for time course phosphoproteome analysis of EGF-stimulated HeLa cells with multiplexed labeling. Through these examples, we show that the peptide-level analysis of EBprot is a robust alternative to the existing statistical methods for the DE analysis of labeling-based quantitative datasets. The software suite is freely available on the Sourceforge website http://ebprot.sourceforge.net/. All MS data have been deposited in the ProteomeXchange with identifier PXD001426 (http://proteomecentral.proteomexchange.org/dataset/PXD001426/). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Artificial intelligence techniques used in respiratory sound analysis--a systematic review.
Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian
2014-02-01
Artificial intelligence (AI) has recently been established as an alternative method to many conventional methods. The implementation of AI techniques for respiratory sound analysis can assist medical professionals in the diagnosis of lung pathologies. This article highlights the importance of AI techniques in the implementation of computer-based respiratory sound analysis. Articles on computer-based respiratory sound analysis using AI techniques were identified by searches conducted on various electronic resources, such as the IEEE, Springer, Elsevier, PubMed, and ACM digital library databases. Brief descriptions of the types of respiratory sounds and their respective characteristics are provided. We then analyzed each of the previous studies to determine the specific respiratory sounds/pathology analyzed, the number of subjects, the signal processing method used, the AI techniques used, and the performance of the AI technique used in the analysis of respiratory sounds. A detailed description of each of these studies is provided. In conclusion, this article provides recommendations for further advancements in respiratory sound analysis.
Maas, Miriam; van Roon, Annika; Dam-Deisz, Cecile; Opsteegh, Marieke; Massolo, Alessandro; Deksne, Gunita; Teunis, Peter; van der Giessen, Joke
2016-10-30
A new method, based on a magnetic capture based DNA extraction followed by qPCR, was developed for the detection of the zoonotic parasite Echinococcus multilocularis in definitive hosts. Latent class analysis was used to compare this new method with the currently used phenol-chloroform DNA extraction followed by single tube nested PCR. In total, 60 red foxes and coyotes from three different locations were tested with both molecular methods and the sedimentation and counting technique (SCT) or intestinal scraping technique (IST). Though based on a limited number of samples, it could be established that the magnetic capture based DNA extraction followed by qPCR showed similar sensitivity and specificity as the currently used phenol-chloroform DNA extraction followed by single tube nested PCR. All methods have a high specificity as shown by Bayesian latent class analysis. Both molecular assays have higher sensitivities than the combined SCT and IST, though the uncertainties in sensitivity estimates were wide for all assays tested. The magnetic capture based DNA extraction followed by qPCR has the advantage of not requiring hazardous chemicals like the phenol-chloroform DNA extraction followed by single tube nested PCR. This supports the replacement of the phenol-chloroform DNA extraction followed by single tube nested PCR by the magnetic capture based DNA extraction followed by qPCR for molecular detection of E. multilocularis in definitive hosts. Copyright © 2016 Elsevier B.V. All rights reserved.
Multivariate meta-analysis: a robust approach based on the theory of U-statistic.
Ma, Yan; Mazumdar, Madhu
2011-10-30
Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously taking into account the correlation between the outcomes. Likelihood-based approaches, in particular restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analysis with small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and is shown to perform equally well to REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistic and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect on estimates from REML because of non-normal data distribution is marginal and that the estimates from MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods are illustrated by their application to data from two published meta-analysis from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistic for testing significance of between-study heterogeneity and for extending the work to meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.
Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.
Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty
2011-10-01
The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
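A minimal event-based lag-1 implementation with Allison-Liker adjusted residuals; the gaze codes in the example are hypothetical:

```python
import numpy as np

def lag1_zscores(events, codes):
    """Adjusted residuals (z-scores) for event-based lag-1 transitions.
    events: sequence of coded behaviors; |z| > 1.96 flags a significant
    sequence under the Allison-Liker correction."""
    idx = {c: i for i, c in enumerate(codes)}
    k = len(codes)
    obs = np.zeros((k, k))
    for a, b in zip(events[:-1], events[1:]):
        obs[idx[a], idx[b]] += 1
    n = obs.sum()
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    exp = row * col / n
    return (obs - exp) / np.sqrt(exp * (1 - row / n) * (1 - col / n))

gaze = ["clinician", "patient", "chart", "patient", "clinician", "patient"]
print(lag1_zscores(gaze, ["clinician", "patient", "chart"]))
```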
2012-01-01
Background Gene Set Analysis (GSA) has proven to be a useful approach to microarray analysis. However, most of the method development for GSA has focused on the statistical tests to be used rather than on the generation of sets that will be tested. Existing methods of set generation are often overly simplistic. The creation of sets from individual pathways (in isolation) is a poor reflection of the complexity of the underlying metabolic network. We have developed a novel approach to set generation via the use of Principal Component Analysis of the Laplacian matrix of a metabolic network. We have analysed a relatively simple data set to show the difference in results between our method and the current state-of-the-art pathway-based sets. Results The sets generated with this method are semi-exhaustive and capture much of the topological complexity of the metabolic network. The semi-exhaustive nature of this method has also allowed us to design a hypergeometric enrichment test to determine which genes are likely responsible for set significance. We show that our method finds significant aspects of biology that would be missed (i.e. false negatives) and addresses the false positive rates found with the use of simple pathway-based sets. Conclusions The set generation step for GSA is often neglected but is a crucial part of the analysis as it defines the full context for the analysis. As such, set generation methods should be robust and yield as complete a representation of the extant biological knowledge as possible. The method reported here achieves this goal and is demonstrably superior to previous set analysis methods. PMID:22876834
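One plausible reading of the set-generation step, sketched below: take leading eigenvectors of the metabolic network's Laplacian (PCA of the symmetric Laplacian reduces to its eigendecomposition) and form sets from the genes with the largest loadings. The set count and size cutoffs are placeholders, and the paper's construction may differ in detail:

```python
import numpy as np

def laplacian_pc_sets(A, genes, n_sets=10, top=20):
    """Gene sets from leading eigenvectors of the network Laplacian.
    A: symmetric adjacency matrix of the metabolic network."""
    L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian L = D - A
    _, vecs = np.linalg.eigh(L)              # eigenvectors, ascending eigenvalues
    sets = []
    for k in range(1, n_sets + 1):
        loading = np.abs(vecs[:, -k])        # k-th leading component
        sets.append([genes[i] for i in np.argsort(-loading)[:top]])
    return sets
```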
Developing and Assessing Teachers' Knowledge of Game-Based Learning
ERIC Educational Resources Information Center
Shah, Mamta; Foster, Aroutis
2015-01-01
Research focusing on the development and assessment of teacher knowledge in game-based learning is in its infancy. A mixed-methods study was undertaken to educate pre-service teachers in game-based learning using the Game Network Analysis (GaNA) framework. Fourteen pre-service teachers completed a methods course, which prepared them in game…
Tackenberg, Oliver
2007-01-01
Background and Aims: Biomass is an important trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive, so they do not allow the development of individual plants to be followed and they require many individuals to be cultivated for repeated measurements. Non-destructive methods do not have these limitations. Here, a non-destructive method based on digital image analysis is presented, addressing not only above-ground fresh biomass (FBM) and oven-dried biomass (DBM), but also vertical biomass distribution as well as dry matter content (DMC) and growth rates. Methods: Scaled digital images of the plants' silhouettes were taken for 582 individuals of 27 grass species (Poaceae). Above-ground biomass and DMC were measured using destructive methods. With the image analysis software Zeiss KS 300, the projected area and the proportion of greenish pixels were calculated, and generalized linear models (GLMs) were developed with the destructively measured parameters as dependent variables and the parameters derived from image analysis as independent variables. A bootstrap analysis was performed to assess the number of individuals required for re-calibration of the models. Key Results: The developed models showed no systematic errors compared with traditionally measured values and explained most of their variance (R² ≥ 0.85 for all models). The presented models can be applied directly to herbaceous grasses without further calibration. Applying the models to other growth forms might require a re-calibration, which can be based on only 10–20 individuals for FBM or DMC and on 40–50 individuals for DBM. Conclusions: The methods presented are time- and cost-effective compared with traditional methods, especially if development or growth rates are to be measured repeatedly. Hence, they offer an alternative way of determining biomass, especially as they are non-destructive and address not only FBM and DBM, but also vertical biomass distribution and DMC. PMID:17353204
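The measurement idea reduces to counting scaled silhouette pixels and the greenish fraction, then regressing destructively measured biomass on those features; the simple linear fit below stands in for the paper's GLMs, and all numbers are hypothetical:

```python
import numpy as np

def green_features(rgb, cm2_per_px):
    """Projected area and green fraction from a scaled silhouette image.
    rgb: (H, W, 3) array; plant pixels assumed pre-segmented (background = 0)."""
    plant = rgb.sum(axis=2) > 0
    area = plant.sum() * cm2_per_px
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    greenish = plant & (g > r) & (g > b)
    return area, greenish.sum() / max(plant.sum(), 1)

# Calibrate on destructively measured individuals, then predict non-destructively.
areas = np.array([12.0, 30.5, 55.1, 80.7])    # projected area, cm^2 (hypothetical)
fbm = np.array([0.8, 2.1, 3.9, 5.6])          # fresh biomass, g (hypothetical)
coef = np.polyfit(areas, fbm, 1)
print(np.polyval(coef, 40.0))                 # predicted FBM for a new plant
```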
A Method for Generating Reduced Order Linear Models of Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1997-01-01
For the modeling of high-speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other on the design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real time. Models based on control design typically run near real time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models small enough for control analysis and design.
2013-01-01
Despite its prominence for characterization of complex mixtures, LC–MS/MS frequently fails to identify many proteins. Network-based analysis methods, based on protein–protein interaction networks (PPINs), biological pathways, and protein complexes, are useful for recovering non-detected proteins, thereby enhancing analytical resolution. However, network-based analysis methods do come in varied flavors for which the respective efficacies are largely unknown. We compare the recovery performance and functional insights from three distinct instances of PPIN-based approaches, viz., Proteomics Expansion Pipeline (PEP), Functional Class Scoring (FCS), and Maxlink, in a test scenario of valproic acid (VPA)-treated mice. We find that the most comprehensive functional insights, as well as best non-detected protein recovery performance, are derived from FCS utilizing real biological complexes. This outstrips other network-based methods such as Maxlink or Proteomics Expansion Pipeline (PEP). From FCS, we identified known biological complexes involved in epigenetic modifications, neuronal system development, and cytoskeletal rearrangements. This is congruent with the observed phenotype where adult mice showed an increase in dendritic branching to allow the rewiring of visual cortical circuitry and an improvement in their visual acuity when tested behaviorally. In addition, PEP also identified a novel complex, comprising YWHAB, NR1, NR2B, ACTB, and TJP1, which is functionally related to the observed phenotype. Although our results suggest different network analysis methods can produce different results, on the whole, the findings are mutually supportive. More critically, the non-overlapping information each provides can provide greater holistic understanding of complex phenotypes. PMID:23557376
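The hypergeometric enrichment test mentioned above is essentially one line with SciPy; the gene counts in the example are hypothetical:

```python
from scipy.stats import hypergeom

def set_enrichment_p(n_universe, n_hits, set_size, hits_in_set):
    """P(X >= hits_in_set) under the hypergeometric null: the chance of
    drawing at least that many hit genes when set_size genes are sampled
    from a universe containing n_hits hits."""
    return hypergeom.sf(hits_in_set - 1, n_universe, n_hits, set_size)

# Hypothetical numbers: 10,000 genes, 400 detected/DE, a 50-gene complex with 12 hits.
print(set_enrichment_p(10000, 400, 50, 12))
```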
NASA Astrophysics Data System (ADS)
Galiana-Merino, J. J.; Pla, C.; Fernandez-Cortes, A.; Cuezva, S.; Ortiz, J.; Benavente, D.
2014-10-01
A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of multiple environmental time series, particularly focused on the analysis of cave monitoring data. The continuous wavelet transform, the discrete wavelet transform, and the discrete wavelet packet transform have been implemented to provide a fast and precise time-period examination of the time series at different period bands. Moreover, statistical methods to examine the relation between two signals have been included. Finally, entropy-of-curves and spline-based methods have also been developed for segmenting and modeling the analyzed time series. Together, these methods provide a user-friendly and fast program for environmental signal analysis, with useful, practical, and understandable results.
ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra
2011-01-01
Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
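The attractor-identification problem that ADAM solves algebraically can be stated with a brute-force reference implementation for small synchronous Boolean networks (ADAM itself uses polynomial-algebra methods rather than enumeration); the three-node update rules below are made up for illustration:

```python
from itertools import product

def attractors(update, n):
    """Brute-force attractor search over all 2**n states of a synchronous
    Boolean network; update maps a state tuple to the next state tuple."""
    seen, cycles = set(), []
    for start in product((0, 1), repeat=n):
        path, state = [], start
        while state not in seen:
            seen.add(state)
            path.append(state)
            state = update(state)
        if state in path:                        # closed a previously unseen cycle
            cycles.append(path[path.index(state):])
    return cycles

# Made-up three-node rules for illustration.
rules = lambda s: (int(s[1] and s[2]), int(not s[0]), int(s[0] or s[1]))
print(attractors(rules, 3))
```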
NASA Astrophysics Data System (ADS)
Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an
2017-09-01
High-speed cameras provide full-field measurement of structural motion and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is first used to capture a video of the objects vibrating under sound excitation. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of this matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is conducted to validate the effectiveness and efficiency of the proposed method, and two experiments demonstrate potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from video.
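The core of the approach fits in a few lines of NumPy: reshape region-of-interest subimages into a matrix, take its SVD to get orthonormal image bases, and project frames onto a dominant basis. Selecting which OIB carries the vibration is simplified here to the first one, which is an assumption rather than the paper's selection rule:

```python
import numpy as np

def vibration_from_video(frames):
    """Motion signal from a silent high-speed video region of interest.
    frames: (T, h, w) grayscale subimages collected from a small region."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).astype(float)
    X -= X.mean(axis=0)                      # remove the static scene
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rows of Vt are orthonormal image bases (OIBs) of the subimage space;
    # projecting each frame onto a dominant OIB traces the subtle motion.
    return X @ Vt[0]                         # sampled at the camera frame rate
```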
An online sleep apnea detection method based on recurrence quantification analysis.
Nguyen, Hoa Dinh; Wilkins, Brek A; Cheng, Qi; Benjamin, Bruce Allen
2014-07-01
This paper introduces an online sleep apnea detection method based on heart rate complexity as measured by recurrence quantification analysis (RQA) statistics of heart rate variability (HRV) data. RQA statistics can capture the nonlinear dynamics of a complex cardiorespiratory system during obstructive sleep apnea. In order to obtain a more robust measurement of the nonstationarity of the cardiorespiratory system, we use several fixed-amount-of-neighbors thresholds for the recurrence plot calculation. We integrate a feature selection algorithm based on conditional mutual information to select the most informative RQA features for classification, and hence to speed up the real-time classification process without degrading the performance of the system. Two types of binary classifiers, i.e., a support vector machine and a neural network, are used to differentiate apnea from normal sleep. A soft decision fusion rule is developed to combine the results of these classifiers in order to improve the classification performance of the whole system. Experimental results show that our proposed method achieves better classification results than the previous recurrence analysis-based approach. We also show that our method is flexible and a strong candidate for a truly efficient sleep apnea detection system.
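A compact sketch of two RQA features computed from a fixed-amount-of-neighbors recurrence plot (note the resulting matrix is not symmetric); the embedding dimension, delay, neighbor count, and minimum line length are placeholders rather than the paper's settings:

```python
import numpy as np

def rqa_measures(x, m=3, tau=1, k=50, lmin=2):
    """Recurrence rate (RR) and determinism (DET) of a 1-D signal using a
    fixed number of nearest neighbors per embedded point."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=2)
    R = np.zeros((n, n), dtype=bool)
    R[np.arange(n)[:, None], np.argsort(d, axis=1)[:, 1:k + 1]] = True
    diag_pts = 0
    for j in range(-(n - 1), n):                 # points on diagonals of length >= lmin
        run = 0
        for v in np.append(np.diag(R, j), False):   # sentinel closes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_pts += run
                run = 0
    return R.mean(), diag_pts / max(R.sum(), 1)  # RR, DET
```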
Modeling and Analysis of Wrinkled Membranes: An Overview
NASA Technical Reports Server (NTRS)
Yang, B.; Ding, H.; Lou, M.; Fang, H.; Broduer, Steve (Technical Monitor)
2001-01-01
Thin-film membranes are basic elements of a variety of space inflatable/deployable structures. Wrinkling degrades the performance and reliability of these membrane structures and hence has been a topic of continued interest. Wrinkling analysis of membranes with general geometry and arbitrary boundary conditions is quite challenging. The objective of this presentation is two-fold. First, the existing models of wrinkled membranes and related numerical solution methods are reviewed. The important issues discussed are the capability of a membrane model to characterize taut, wrinkled and slack states of membranes in a consistent and physically reasonable manner; the ability of a wrinkling analysis method to predict the formation and growth of wrinkled regions and to determine out-of-plane deformation and wrinkle waves; the convergence of a numerical solution method for wrinkling analysis; and the compatibility of a wrinkling analysis with general-purpose finite element codes. Based on this review, several open issues in the modeling and analysis of wrinkled membranes to be addressed in future research are summarized. The second objective of this presentation is to introduce a newly developed membrane model with two variable parameters (2-VP model) and an associated parametric finite element method (PFEM) for wrinkling analysis. The innovations and advantages of the proposed membrane model and PFEM-based wrinkling analysis are: (1) via a unified stress-strain relation, the 2-VP model treats the taut, wrinkled, and slack states of membranes consistently; (2) the PFEM-based wrinkling analysis has guaranteed convergence; (3) the 2-VP model along with the PFEM is capable of predicting membrane out-of-plane deformations; and (4) the PFEM can be integrated into any existing finite element code. Preliminary numerical examples are also included in this presentation to demonstrate the 2-VP model and the PFEM-based wrinkling analysis approach.
ERIC Educational Resources Information Center
Zhang, Zhidong
2016-01-01
This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It applied rule-based analytical and cognitive task analysis methods to break down the operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating the mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions; the corresponding average relative error (ARE) approaches zero as sample size increases. For data generated from the normal distribution, our ABC method performs well, although the Wan et al. method is best for estimating the standard deviation under normality. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can also be applied using other reported summary statistics, such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
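A minimal rejection-sampling version of the ABC idea is easy to sketch; the priors, tolerance, normal data model, and the reported values below are all illustrative assumptions, not the paper's settings.

```python
# Minimal ABC rejection sketch: recover a study's mean and SD from reported
# summary statistics (median, min, max, n). Priors, tolerance, and the
# normal data model are illustrative choices only.
import numpy as np

rng = np.random.default_rng(42)
reported = {"n": 100, "median": 5.2, "min": 1.1, "max": 12.3}  # hypothetical study

accepted = []
for _ in range(100_000):
    mu = rng.uniform(0, 15)                   # prior draws
    sigma = rng.uniform(0.1, 10)
    sim = rng.normal(mu, sigma, reported["n"])
    stats = np.array([np.median(sim), sim.min(), sim.max()])
    target = np.array([reported["median"], reported["min"], reported["max"]])
    if np.linalg.norm(stats - target) < 1.0:  # tolerance
        accepted.append((mu, sigma))

accepted = np.array(accepted)
print("accepted draws:", len(accepted))
print("posterior mean of (mu, sigma):", accepted.mean(axis=0))
```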
[Study on ecological suitability regionalization of Eucommia ulmoides in Guizhou].
Kang, Chuan-Zhi; Wang, Qing-Qing; Zhou, Tao; Jiang, Wei-Ke; Xiao, Cheng-Hong; Xie, Yu
2014-05-01
To study the ecological suitability regionalization of Eucommia ulmoides in order to select artificial planting bases and high-quality industrial raw material purchase areas for the herb in Guizhou. Based on an investigation of 14 Eucommia ulmoides producing areas, pinoresinol diglucoside content and ecological factors were obtained. A spatial analysis method was used to carry out the ecological suitability regionalization; in combination with the pinoresinol diglucoside content, the correlation between the major active component and environmental factors was analyzed statistically. The most suitable planting area for Eucommia ulmoides was the northwest of Guizhou. The distribution of Eucommia ulmoides was mainly affected by soil type, soil pH, and monthly precipitation. The major active component of Eucommia ulmoides was randomly distributed in global space but had one aggregation point with a high positive correlation in local space. The major active component had no correlation with altitude, longitude or latitude. Using spatial and statistical analysis based on environmental factors and pinoresinol diglucoside content, the ecological suitability regionalization of Eucommia ulmoides can provide a reference for the selection of suitable planting areas and artificial planting bases, and for directing production layout.
SSVEP recognition using common feature analysis in brain-computer interface.
Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2015-04-15
Canonical correlation analysis (CCA) has been successfully applied to steady-state visual evoked potential (SSVEP) recognition for brain-computer interface (BCI) applications. Although the CCA method outperforms traditional power spectral density analysis through multi-channel detection, it additionally requires pre-constructed reference signals of sine-cosine waves. It is prone to overfitting when a short time window is used, since the reference signals include no features from the training data. We consider that a group of electroencephalogram (EEG) data trials recorded at a certain stimulus frequency in the same subject should share common features that bear the real SSVEP characteristics. This study therefore proposes a common feature analysis (CFA)-based method that exploits these latent common features as natural reference signals in correlation analysis for SSVEP recognition. Good performance of the CFA method is validated with EEG data recorded from ten healthy subjects, in contrast to CCA and a multiway extension of CCA (MCCA). Experimental results indicate that the CFA method significantly outperformed the CCA and MCCA methods for SSVEP recognition when using a short time window (i.e., less than 1 s). The superiority of the proposed CFA method suggests it is promising for the development of a real-time SSVEP-based BCI. Copyright © 2014 Elsevier B.V. All rights reserved.
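For orientation, the standard CCA baseline that CFA is compared against can be sketched with scikit-learn; the synthetic EEG, sampling rate, channel count, and candidate frequencies below are assumptions.

```python
# Sketch of the standard CCA baseline for SSVEP recognition: the stimulus
# frequency is chosen by maximizing the canonical correlation between
# multi-channel EEG and sine-cosine reference signals.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, T, n_ch = 250, 1.0, 8
t = np.arange(int(fs * T)) / fs
rng = np.random.default_rng(0)
eeg = np.outer(np.sin(2 * np.pi * 10 * t), rng.uniform(0.5, 1, n_ch))
eeg += 0.5 * rng.standard_normal(eeg.shape)         # 10 Hz SSVEP + noise

def cca_score(X, f, harmonics=2):
    Y = np.column_stack([g(2 * np.pi * f * h * t)
                         for h in range(1, harmonics + 1)
                         for g in (np.sin, np.cos)])
    Xc, Yc = CCA(n_components=1).fit_transform(X, Y)
    return abs(np.corrcoef(Xc.ravel(), Yc.ravel())[0, 1])

freqs = [8.0, 10.0, 12.0, 15.0]
print("detected:", max(freqs, key=lambda f: cca_score(eeg, f)), "Hz")
```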
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles in the quick detection of network intrusions such as port scanners. Since such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for their performance analysis. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.
NASA Technical Reports Server (NTRS)
Padavala, Satyasrinivas; Palazzolo, Alan B.; Vallely, Pat; Ryan, Steve
1994-01-01
An improved dynamic analysis for liquid annular seals with arbitrary profile, based on a method first proposed by Nelson and Nguyen, is presented. An improved first-order solution that incorporates a continuous interpolation of perturbed quantities in the circumferential direction is developed. The original method uses an approximation scheme for circumferential gradients based on Fast Fourier Transforms (FFT). A simpler scheme based on cubic splines is found to be computationally more efficient, with better convergence at higher eccentricities. A new approach to computing dynamic coefficients based on an externally specified load is introduced. The improved analysis is extended to account for seal profiles that vary arbitrarily in both the axial and circumferential directions. An example case of an elliptical seal with varying degrees of axial curvature is analyzed. A case study based on actual operating clearances of an interstage seal of the Space Shuttle Main Engine High Pressure Oxygen Turbopump is presented.
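The contrast between the two gradient schemes is easy to reproduce on a toy periodic profile; the clearance function below is invented and not a real seal geometry.

```python
# Sketch contrasting the two schemes discussed for circumferential gradients:
# an FFT spectral derivative versus a periodic cubic spline. The test profile
# is a made-up periodic clearance function.
import numpy as np
from scipy.interpolate import CubicSpline

n = 64
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = 1 + 0.3 * np.cos(theta) + 0.1 * np.sin(2 * theta)   # hypothetical profile
exact = -0.3 * np.sin(theta) + 0.2 * np.cos(2 * theta)  # analytic derivative

# FFT-based spectral derivative (the original scheme's style)
k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)      # integer wavenumbers
dh_fft = np.real(np.fft.ifft(1j * k * np.fft.fft(h)))

# Periodic cubic-spline derivative (the simpler alternative scheme)
cs = CubicSpline(np.append(theta, 2 * np.pi), np.append(h, h[0]),
                 bc_type='periodic')
dh_spl = cs(theta, 1)                                   # first derivative

print("max error, FFT:   ", np.abs(dh_fft - exact).max())
print("max error, spline:", np.abs(dh_spl - exact).max())
```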
Faassen, Elisabeth J; Antoniou, Maria G; Beekman-Lukassen, Wendy; Blahova, Lucie; Chernova, Ekaterina; Christophoridis, Christophoros; Combes, Audrey; Edwards, Christine; Fastner, Jutta; Harmsen, Joop; Hiskia, Anastasia; Ilag, Leopold L; Kaloudis, Triantafyllos; Lopicic, Srdjan; Lürling, Miquel; Mazur-Marzec, Hanna; Meriluoto, Jussi; Porojan, Cristina; Viner-Mozzini, Yehudit; Zguna, Nadezda
2016-02-29
Exposure to β-N-methylamino-l-alanine (BMAA) might be linked to the incidence of amyotrophic lateral sclerosis, Alzheimer's disease and Parkinson's disease. Analytical chemistry plays a crucial role in determining human BMAA exposure and the associated health risk, but the performance of the various analytical methods currently employed is rarely compared. A CYANOCOST-initiated workshop was organized to train scientists in BMAA analysis, create mutual understanding, and pave the way towards interlaboratory comparison exercises. During this workshop, we tested different methods (extraction followed by derivatization and liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) analysis, or extraction directly followed by LC-MS/MS analysis) for trueness and intermediate precision. We adapted three workup methods for the underivatized analysis of animal, brain and cyanobacterial samples. Based on recovery of the internal standard D₃BMAA, the underivatized methods were accurate (mean recovery 80%) and precise (mean relative standard deviation 10%), except for the cyanobacterium Leptolyngbya. However, total BMAA concentrations in the positive controls (cycad seeds) showed higher variation (relative standard deviation 21%-32%), implying that D₃BMAA was not a good indicator of the release of BMAA from bound forms. Significant losses occurred during workup for the derivatized method, resulting in low recovery (<10%). Most BMAA was found in a trichloroacetic acid soluble, bound form, and we recommend including this fraction during analysis.
A better understanding of long-range temporal dependence of traffic flow time series
NASA Astrophysics Data System (ADS)
Feng, Shuo; Wang, Xingmin; Sun, Haowei; Zhang, Yi; Li, Li
2018-02-01
Long-range temporal dependence is an important research perspective for modelling traffic flow time series. Various methods have been proposed to characterize it, including autocorrelation function analysis, spectral analysis and fractal analysis. However, few studies have examined the daily temporal dependence (i.e. the similarity between different daily traffic flow time series), which can help us better understand the long-range temporal dependence, such as the origin of the crossover phenomenon. Moreover, considering both types of dependence contributes to establishing more accurate models and characterizing the properties of traffic flow time series. In this paper, we study the properties of daily temporal dependence using a simple averaging method and a Principal Component Analysis (PCA)-based method. We also study the long-range temporal dependence using Detrended Fluctuation Analysis (DFA) and Multifractal Detrended Fluctuation Analysis (MFDFA). The results show that both the daily and the long-range temporal dependence exert considerable influence on the traffic flow series. The DFA results reveal that the daily temporal dependence creates the crossover phenomenon when estimating the Hurst exponent, which characterizes the long-range temporal dependence. Furthermore, the comparison of the DFA tests shows that the PCA-based method is the better way to extract the daily temporal dependence, especially when the difference between days is significant.
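A compact DFA implementation suffices to see how the Hurst exponent is read off the fluctuation function; the white-noise input and scale grid below are illustrative.

```python
# Minimal DFA sketch for estimating the Hurst exponent of a time series.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                          # detrend each window linearly
            coef = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    # slope of log F(s) vs log s estimates the Hurst exponent
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
noise = rng.standard_normal(10_000)               # white noise: H ~ 0.5
scales = np.unique(np.logspace(1, 3, 20).astype(int))
print("estimated H:", dfa(noise, scales))
```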
[New method of mixed gas infrared spectrum analysis based on SVM].
Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua
2007-07-01
A new method of infrared spectrum analysis for gas mixtures, based on the support vector machine (SVM), was proposed. The kernel function in the SVM maps the severely overlapping absorption spectra into a high-dimensional space in which, after transformation, the data can be processed as in the original space; a regression calibration model was thus established and applied to estimate the concentration of each component gas. It was also shown that the SVM regression calibration model can be used for component recognition of the gas mixture. The method was applied to the analysis of different data samples, and factors that affect the model, such as scan interval, wavelength range, kernel function, and penalty coefficient C, were discussed. Experimental results show that the maximal mean absolute error of the component concentrations is 0.132%, and the component recognition accuracy is higher than 94%. The method addresses the problems of overlapping absorption spectra, of using the same model for both qualitative and quantitative analysis, and of a limited number of training samples, and it promises theoretical and practical value for other gas-mixture infrared spectrum analyses.
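The regression-calibration idea maps directly onto scikit-learn's SVR; the sketch below fabricates two overlapping Gaussian absorption bands, so band positions, widths, and hyperparameters are assumptions rather than the paper's settings.

```python
# Sketch: kernel regression (SVR) mapping overlapping absorption spectra to
# a component concentration. Spectra are synthetic Gaussian bands.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
wavenumber = np.linspace(0, 1, 200)

def spectrum(c1, c2):
    band1 = c1 * np.exp(-((wavenumber - 0.45) / 0.08) ** 2)  # overlapping bands
    band2 = c2 * np.exp(-((wavenumber - 0.55) / 0.08) ** 2)
    return band1 + band2 + 0.01 * rng.standard_normal(wavenumber.size)

c = rng.uniform(0, 1, (300, 2))                   # concentrations of two gases
X = np.array([spectrum(a, b) for a, b in c])

model = SVR(kernel='rbf', C=10.0).fit(X[:250], c[:250, 0])  # calibrate gas 1
pred = model.predict(X[250:])
print("mean abs. error:", np.mean(np.abs(pred - c[250:, 0])))
```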
A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method
NASA Astrophysics Data System (ADS)
Chen, Leilei; Zheng, Changjun; Chen, Haibo
2013-09-01
This paper presents a wideband fast multipole boundary element method (FMBEM) for two-dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM), formed by combining the original FMM and the diagonal-form FMM, is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis is obtained by observing the performance of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.
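The direct differentiation method at the heart of the sensitivity analysis can be shown on any parameterized linear system: differentiate A(p)u = b(p) and solve the same system for du/dp. The 2x2 example below is a generic stand-in, not a BEM model.

```python
# Generic sketch of the direct differentiation method: for A(p) u = b(p),
# the sensitivity du/dp solves A(p) (du/dp) = db/dp - (dA/dp) u.
import numpy as np

p = 2.0
A = lambda p: np.array([[p, 1.0], [1.0, 3.0]])
b = lambda p: np.array([p ** 2, 1.0])
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])
db_dp = np.array([2 * p, 0.0])

u = np.linalg.solve(A(p), b(p))
du_dp = np.linalg.solve(A(p), db_dp - dA_dp @ u)   # direct differentiation

# Cross-check against a finite-difference approximation
eps = 1e-6
u_fd = (np.linalg.solve(A(p + eps), b(p + eps)) - u) / eps
print(du_dp, u_fd)
```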
Orsi, Rebecca
2017-02-01
Concept mapping is now a commonly used technique for articulating and evaluating programmatic outcomes. However, research regarding the validity of knowledge and outcomes produced with concept mapping is sparse. The current study describes quantitative validity analyses using a concept mapping dataset. We sought to increase the validity of concept mapping evaluation results by running multiple cluster analysis methods and then using several metrics to choose among solutions. We present four different clustering methods based on analyses using the R statistical software package: partitioning around medoids (PAM), fuzzy analysis (FANNY), agglomerative nesting (AGNES) and divisive analysis (DIANA). We then used the Dunn and Davies-Bouldin indices to assist in choosing a valid cluster solution for a concept mapping outcomes evaluation. We conclude that the validity of the outcomes map is high, based on the analyses described. Finally, we discuss areas for further concept mapping methods research. Copyright © 2016 Elsevier Ltd. All rights reserved.
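The paper's analysis is in R; the Python sketch below mirrors the same choose-by-index workflow, with agglomerative clustering standing in for AGNES and a hand-rolled Dunn index (scikit-learn provides only Davies-Bouldin).

```python
# Sketch: score competing cluster solutions with Davies-Bouldin and Dunn.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, pairwise_distances

def dunn_index(X, labels):
    d = pairwise_distances(X)
    ks = np.unique(labels)
    min_between = min(d[np.ix_(labels == a, labels == b)].min()
                      for a in ks for b in ks if a < b)
    max_within = max(d[np.ix_(labels == k, labels == k)].max() for k in ks)
    return min_between / max_within           # higher is better

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)
for k in range(2, 7):
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)
    print(k, "DB:", round(davies_bouldin_score(X, labels), 2),  # lower is better
          "Dunn:", round(dunn_index(X, labels), 2))
```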
Questel, E; Durbise, E; Bardy, A-L; Schmitt, A-M; Josse, G
2015-05-01
To assess an objective method for evaluating the effects of a retinaldehyde-based cream (RA-cream) on solar lentigines, 29 women randomly applied RA-cream to lentigines on one hand and a control cream on the other, once daily for 3 months. A specific method enabling reliable visualisation of the lesions was proposed, using high-magnification colour-calibrated camera imaging. Assessment was performed by clinical evaluation using the Physician Global Assessment score and by image analysis. Luminance on the numeric images was determined either from consensus borders drawn by five independent experts or by probability map analysis via an algorithm automatically detecting the pigmented area. Both image analysis methods showed a similar lightening of ΔL* = 2 after 3 months of treatment with RA-cream, in agreement with the single-blind clinical evaluation. High-magnification colour-calibrated camera imaging combined with probability map analysis is a fast and precise method for following lentigo depigmentation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.
Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu
2016-08-01
The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R-wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.
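The just-in-time flavor of the method can be sketched with plain locally weighted ridge regression standing in for LW-PLS; the synthetic RRI series, lag count, and bandwidth are illustrative assumptions.

```python
# Sketch of the JIT idea: a local model is fit on demand, weighting stored
# samples by similarity to the query. Locally weighted ridge regression
# stands in for LW-PLS here.
import numpy as np

def lw_predict(X, y, xq, bandwidth=0.5, ridge=1e-6):
    w = np.exp(-np.sum((X - xq) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xw = X * w[:, None]                       # weighted design matrix
    beta = np.linalg.solve(X.T @ Xw + ridge * np.eye(X.shape[1]), Xw.T @ y)
    return xq @ beta

# Database of (preceding RRIs -> next RRI) pairs from a synthetic RRI series
rng = np.random.default_rng(0)
rri = 0.8 + 0.05 * np.sin(np.arange(600) / 20) + 0.01 * rng.standard_normal(600)
lags = 4
X = np.array([rri[i:i + lags] for i in range(len(rri) - lags)])
y = rri[lags:]

xq = X[300]                  # pretend the RRI after these 4 beats is missing
print("interpolated:", lw_predict(X, y, xq), "true:", y[300])
```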
Mu, Lan; Wang, Fahui; Chen, Vivien W.; Wu, Xiao-Cheng
2015-01-01
Similar geographic areas often have great variations in population size. In health data management and analysis, it is desirable to obtain regions of comparable population by decomposing areas of large population (to gain more spatial variability) and merging areas of small population (to protect data privacy). Based on the Peano curve algorithm and modified scale-space clustering, this research proposes a mixed-level regionalization (MLR) method to construct geographic areas of comparable population. The method accounts for spatial connectivity and compactness, attributive homogeneity, and exogenous criteria such as minimum (and approximately equal) population or disease counts. A case study using Louisiana cancer data illustrates the MLR method and its strengths and limitations. A major benefit of the method is that most upper-level geographic boundaries can be preserved to increase familiarity of the constructed areas. Therefore, the MLR method is more human-oriented and place-based than computer-oriented and space-based. PMID:26251551
NASA Technical Reports Server (NTRS)
Viezee, W.; Russell, P. B.; Hake, R. D., Jr.
1974-01-01
The matching method of lidar data analysis is explained, and the results from two flights studying the stratospheric aerosol using lidar techniques are summarized and interpreted. The results lend support to the matching method, but it is not yet apparent that the analysis technique leads to acceptable results on all nights in all seasons.
Seismic facies analysis based on self-organizing map and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Du, Hao-kun; Cao, Jun-xing; Xue, Ya-juan; Wang, Xing-jian
2015-01-01
Seismic facies analysis plays an important role in seismic interpretation and reservoir model building by offering an effective way to identify changes in geofacies between wells. The selection of input seismic attributes and their time window has an obvious effect on the validity of classification and requires iterative experimentation and prior knowledge. In general, clustering is sensitive to noise when the waveform serves as the input data, especially with a narrow window. To overcome this limitation, the Empirical Mode Decomposition (EMD) method is introduced into waveform classification based on the self-organizing map (SOM). We first de-noise the seismic data using EMD and then cluster the data using a 1D grid SOM. The main advantages of this method are resolution enhancement and noise reduction. 3D seismic data from the western Sichuan basin, China, are used for validation. The application results show that seismic facies analysis can be improved to better support interpretation. Its strong tolerance for noise makes the proposed method a better seismic facies analysis tool than the classical 1D grid SOM method, especially for waveform clustering with a narrow window.
2007-01-01
including tree-based methods such as the unweighted pair group method of analysis (UPGMA) and Neighbour-joining (NJ) (Saitou & Nei, 1987). By … based Bayesian approach and the tree-based UPGMA and NJ clustering methods. The results obtained suggest that far more species occur in the An… unlikely that groups that differ by more than these levels are conspecific. Genetic distances were clustered using the UPGMA and NJ algorithms in MEGA
An analysis of random projection for changeable and privacy-preserving biometric verification.
Wang, Yongjin; Plataniotis, Konstantinos N
2010-10-01
Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
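The RP transform itself is one line of linear algebra; the sketch below illustrates the two properties the paper analyzes — approximate distance preservation and changeability under re-issued projection matrices — on random feature vectors.

```python
# Sketch of the RP transform: biometric feature vectors are multiplied by an
# i.i.d. Gaussian random matrix. Distances are roughly preserved (similarity
# preservation), while a new seed yields an uncorrelated template
# (changeability/revocability).
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(1024), rng.standard_normal(1024)  # feature vectors

def rp(v, seed, k=128):
    R = np.random.default_rng(seed).standard_normal((k, v.size)) / np.sqrt(k)
    return R @ v

print("original distance:  ", np.linalg.norm(x1 - x2))
print("projected distance: ", np.linalg.norm(rp(x1, 1) - rp(x2, 1)))
# Same user, new seed -> reissued template far from the old one
print("template change:    ", np.linalg.norm(rp(x1, 1) - rp(x1, 2)))
```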
Application of artificial neural networks in nonlinear analysis of trusses
NASA Technical Reports Server (NTRS)
Alam, J.; Berke, L.
1991-01-01
A method is developed to incorporate a neural network model, based upon the backpropagation algorithm, for material response into nonlinear elastic truss analysis using the initial stiffness method. Different network configurations are developed to assess the accuracy of neural network modeling of nonlinear material response. In addition, a scheme based upon linear interpolation of material data is also implemented for comparison purposes. It is found that the neural network approach can yield very accurate results if used with care. For the type of problems under consideration, it offers a viable alternative to other material modeling methods.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, the spatial gradients caused by diffusion have become assessable in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function, we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters, we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
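The profile likelihood construction generalizes beyond PDE models; the sketch below applies it to a toy exponential-decay model with log-normal noise, scanning one parameter while re-optimizing the other at each grid point.

```python
# Profile likelihood sketch on a toy model (exponential decay, log-normal
# noise, known noise sd): scan one parameter, re-optimize the nuisance
# parameter at each grid point, and record the profiled objective.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 30)
true_a, true_k = 2.0, 0.7
data = true_a * np.exp(-true_k * t) * np.exp(0.1 * rng.standard_normal(t.size))

def nll(a, k):
    if a <= 0 or k <= 0:
        return np.inf
    resid = np.log(data) - np.log(a * np.exp(-k * t))
    return 0.5 * np.sum((resid / 0.1) ** 2)       # known noise sd assumed

for k_fix in np.linspace(0.4, 1.0, 7):            # scan the decay rate
    res = minimize(lambda a: nll(a[0], k_fix), x0=[1.0], method='Nelder-Mead')
    print(f"k = {k_fix:.2f}  profile NLL = {res.fun:.2f}")
```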
Linear stability analysis of detonations via numerical computation and dynamic mode decomposition
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry I.; Kasimov, Aslan R.
2018-03-01
We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
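A bare-bones dynamic mode decomposition is a few lines of linear algebra; the sketch below recovers the eigenvalues of a known linear system from snapshots, which is the same spectrum-extraction step applied to the linearized detonation solutions.

```python
# Minimal DMD sketch: eigenvalues of the best-fit linear operator between
# snapshot matrices approximate the stability spectrum. Snapshots here come
# from a known 2x2 linear system so the result can be checked.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])       # hypothetical dynamics
X = np.empty((2, 50))
X[:, 0] = rng.standard_normal(2)
for i in range(49):
    X[:, i + 1] = A_true @ X[:, i]

X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
Atilde = U.T @ X2 @ Vt.T @ np.diag(1 / s)          # reduced DMD operator
print("DMD eigenvalues: ", np.linalg.eigvals(Atilde))
print("true eigenvalues:", np.linalg.eigvals(A_true))
```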
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is a kind of high-spatial-resolution (2-meter GSD) remote sensing satellite data, which includes one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistic of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistic is subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are implemented in sequence to determine the cloud statistic. For post-processing analysis, the box-counting fractal method is implemented. In other words, the cloud statistic is first determined via pre-processing analysis, and its correctness across the spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments with clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
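Otsu's method, the thresholding step the experiments favored, is short enough to write out directly: the threshold maximizes the between-class variance of the gray-level histogram. The bimodal test data below are synthetic.

```python
# Minimal Otsu's threshold sketch, implemented directly on the histogram.
import numpy as np

def otsu(img, bins=256):
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                          # class probabilities
    mu = np.cumsum(p * np.arange(bins))           # cumulative means
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)                     # maximize between-class variance
    return edges[k + 1]                           # threshold in image units

img = np.concatenate([np.random.default_rng(0).normal(60, 10, 5000),
                      np.random.default_rng(1).normal(180, 15, 5000)])
print("Otsu threshold:", otsu(img))
```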
NASA Astrophysics Data System (ADS)
Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.
2017-11-01
In order to select effective samples from many years of photovoltaic (PV) power generation data and improve the accuracy of PV power generation forecasting models, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on neural networks. For three different types of weather (sunny, cloudy and rainy days), this research screens samples of historical data by clustering analysis. After screening, BP neural network prediction models are established using the screened data as training data. The six types of photovoltaic power generation prediction models before and after data screening are then compared. Results show that a prediction model combining clustering analysis with BP neural networks is an effective way to improve the precision of photovoltaic power generation forecasting.
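The screen-then-train pipeline can be sketched with scikit-learn; the weather features, cluster count, and network size below are synthetic stand-ins for the paper's data.

```python
# Sketch of the pipeline: cluster historical days by weather-like features,
# then train a neural network (backprop MLP) only on days from the matching
# cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_days = 300
weather = rng.standard_normal((n_days, 3))        # e.g. irradiance, temp, cloud
power = weather @ np.array([2.0, 0.5, -1.0]) + 0.1 * rng.standard_normal(n_days)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(weather)

target_day, cluster = weather[0], labels[0]
train = labels == cluster                         # screen: similar days only
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(weather[train], power[train])
print("forecast:", model.predict(target_day[None, :]))
```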
[Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].
Zhou, Jinzhi; Tang, Xiaofang
2015-08-01
In order to improve classification accuracy with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analysis method that automatically selects the characteristic parameters based on correlation coefficient analysis. Using the five sample datasets of dataset IVa from the 2005 BCI Competition, we applied the short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram data, then performed feature extraction based on common spatial patterns (CSP) and classified with linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy can be improved by using the correlation coefficient feature selection method compared with not using it. Compared with a support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis selects better parameters and improves classification accuracy.
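The screening step reduces to ranking features by their absolute correlation with the class label; in the sketch below, random data and plain LDA stand in for the real STFT/CSP pipeline, which is omitted.

```python
# Sketch: rank features by |correlation with the class label|, keep the top
# ones, then classify with LDA. Random data stand in for real EEG features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features = 200, 50
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)
X[:, :5] += y[:, None] * 1.0                     # 5 informative features

r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
keep = np.argsort(r)[::-1][:10]                  # top-10 correlated features

clf = LinearDiscriminantAnalysis().fit(X[:150][:, keep], y[:150])
print("selected:", sorted(keep))
print("test accuracy:", clf.score(X[150:][:, keep], y[150:]))
```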
No control genes required: Bayesian analysis of qRT-PCR data.
Matz, Mikhail V; Wright, Rachel M; Scott, James G
2013-01-01
Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R.
A survey on object detection in optical remote sensing images
NASA Astrophysics Data System (ADS)
Cheng, Gong; Han, Junwei
2016-07-01
Object detection in optical remote sensing images, being a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role for a wide range of applications and has been receiving significant attention in recent years. While numerous methods exist, a deep review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of recent progress in this field. Different from several previously published surveys that focus on a specific object class such as buildings or roads, we concentrate on more generic object categories, including, but not limited to, road, building, tree, vehicle, ship, airport and urban area. Covering about 270 publications, we survey (1) template matching-based object detection methods, (2) knowledge-based object detection methods, (3) object-based image analysis (OBIA)-based object detection methods, (4) machine learning-based object detection methods, and (5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. It is our hope that this survey will help researchers gain a better understanding of this research field.
New classification methods on singularity of mechanism
NASA Astrophysics Data System (ADS)
Luo, Jianguo; Han, Jianyou
2010-07-01
Based on an analysis of the existing bases and methods for classifying mechanism singularity, four methods are identified according to the moving state of the mechanism, the cause of the singularity, the property of the linear complex of the singularity, and the approach used to study the singularity. These bases and methods do not directly reflect the structural, systematic, and controllable properties of the mechanism at the macro level, and therefore offer little guidance for avoiding singular configurations before they occur. In view of these shortcomings, six new classification methods are proposed that relate directly and closely to the structure, external phenomena, and motion control of the mechanism, with classification carried out according to the moving base, joint components, executors, branches, actuating sources, and input parameters. Because these factors reflect the systematic properties of the mechanism at the macro level, better guidance can be expected for singularity avoidance, machine design, and machine control based on these new bases and methods.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
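The payoff of analytic derivatives can be seen on any small problem by giving scipy's optimizer an exact Jacobian versus letting it fall back on finite differences; the quadratic objective below is a stand-in, not a Pycycle model.

```python
# Sketch: gradient-based optimization with an analytic derivative versus a
# finite-difference one.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 2) ** 2 + 10 * (x[1] + 1) ** 2

def grad(x):                                      # analytic derivative
    return np.array([2 * (x[0] - 2), 20 * (x[1] + 1)])

x0 = np.zeros(2)
res_analytic = minimize(f, x0, jac=grad, method='BFGS')
res_fd = minimize(f, x0, method='BFGS')           # gradient by finite differences
print("analytic:   ", res_analytic.nfev, "function evaluations")
print("finite-diff:", res_fd.nfev, "function evaluations")
```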
NASA Astrophysics Data System (ADS)
Bezmaternykh, P. V.; Nikolaev, D. P.; Arlazarov, V. L.
2018-04-01
Rectification of textual blocks, or slant correction, is an important stage of document image processing in OCR systems. This paper reviews existing methods and introduces an approach to constructing such algorithms based on Fast Hough Transform analysis. A quality measurement technique is proposed, and results are reported for both printed and handwritten textual blocks processed as part of an industrial system for identity document recognition on mobile devices.
De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos
2014-06-01
Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool for monitoring the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method based on Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR, using dilutions of an integration standard and samples from 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need for a standard dilution curve. Implementation of confidence interval (CI) estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
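The Poisson step of the analysis is compact: if integrated targets are Poisson-distributed across replicates, the fraction of negative reactions estimates e^(-λ). The replicate counts and the normal-approximation CI below are illustrative choices, not the paper's exact procedure.

```python
# Sketch of Poisson-based quantification from binomial replicate outcomes:
# the negative fraction k/n estimates exp(-lambda), so lambda = -ln(k/n)
# targets per reaction. CI via a simple binomial normal approximation.
import numpy as np

n, negatives = 42, 12                    # hypothetical replicate outcome
f_neg = negatives / n
lam = -np.log(f_neg)                     # mean integrated copies per reaction

se = np.sqrt(f_neg * (1 - f_neg) / n)    # binomial SE of the negative fraction
ci = (-np.log(min(f_neg + 1.96 * se, 1.0)), -np.log(f_neg - 1.96 * se))
print(f"lambda = {lam:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```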