Measurement System Analyses - Gauge Repeatability and Reproducibility Methods
NASA Astrophysics Data System (ADS)
Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej
2018-02-01
The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility (GRR) method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods, and the results were compared numerically. The advantages and disadvantages of each method were also discussed. One difference between the methods lies in how the variation components are calculated: the AIAG method computes them from standard deviations (so the components do not sum to 100 %), whereas the honest GRR study computes them from variances, so that the sum of all variation components (part-to-part variation, EV, and AV) gives the total variation of 100 %. Acceptance of both methods in the professional community, their future use, and their acceptance by the manufacturing industry were also discussed. Nowadays, the AIAG method is the leading method in industry.
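As a brief illustration of this difference (with assumed values for EV, AV, and part-to-part variation, not the article's data), the following Python sketch computes both sets of percentages:

```python
import numpy as np

# Assumed standard deviations (not from the article):
# EV = equipment, AV = appraiser, PV = part-to-part.
ev, av, pv = 0.8, 0.4, 2.0

grr = np.hypot(ev, av)            # gauge R&R = sqrt(EV^2 + AV^2)
tv = np.sqrt(grr**2 + pv**2)      # total variation

# AIAG style: percentages of the total *standard deviation* -> do not sum to 100 %.
aiag = {k: 100 * v / tv for k, v in {"EV": ev, "AV": av, "PV": pv}.items()}

# Honest GRR / EMP style: percentages of the total *variance* -> sum to 100 %.
emp = {k: 100 * v**2 / tv**2 for k, v in {"EV": ev, "AV": av, "PV": pv}.items()}

print("AIAG %:", aiag, "sum:", round(sum(aiag.values()), 1))  # sum > 100
print("EMP  %:", emp,  "sum:", round(sum(emp.values()), 1))   # sum == 100
```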
Wagner Mackenzie, Brett; Waite, David W; Taylor, Michael W
2015-01-01
The human gut contains dense and diverse microbial communities which have profound influences on human health. Gaining meaningful insights into these communities requires provision of high quality microbial nucleic acids from human fecal samples, as well as an understanding of the sources of variation and their impacts on the experimental model. We present here a systematic analysis of commonly used microbial DNA extraction methods, and identify significant sources of variation. Five extraction methods (Human Microbiome Project protocol, MoBio PowerSoil DNA Isolation Kit, QIAamp DNA Stool Mini Kit, ZR Fecal DNA MiniPrep, phenol:chloroform-based DNA isolation) were evaluated based on the following criteria: DNA yield, quality and integrity, and microbial community structure based on Illumina amplicon sequencing of the V4 region of bacterial and archaeal 16S rRNA genes. Our results indicate that the largest portion of variation within the model was attributed to differences between subjects (biological variation), with a smaller proportion of variation associated with DNA extraction method (technical variation) and intra-subject variation. A comprehensive understanding of the potential impact of technical variation on the human gut microbiota will help limit preventable bias, enabling more accurate diversity estimates.
The Schwinger Variational Method
NASA Technical Reports Server (NTRS)
Huo, Winifred M.
1995-01-01
Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.
Estimation and Partitioning of Heritability in Human Populations using Whole Genome Analysis Methods
Vinkhuyzen, Anna AE; Wray, Naomi R; Yang, Jian; Goddard, Michael E; Visscher, Peter M
2014-01-01
Understanding genetic variation of complex traits in human populations has moved from the quantification of the resemblance between close relatives to the dissection of genetic variation into the contributions of individual genomic loci. But major questions remain unanswered: how much phenotypic variation is genetic, how much of the genetic variation is additive, and what is the joint distribution of effect size and allele frequency at causal variants? We review and compare three whole-genome analysis methods that use mixed linear models (MLM) to estimate genetic variation, using the relationship between close or distant relatives based on pedigree or SNPs. We discuss theory, estimation procedures, bias, and precision of each method and review recent advances in the dissection of additive genetic variation of complex traits in human populations that are based upon the application of MLM. Using genome-wide data, SNPs account for far more of the genetic variation than the highly significant SNPs associated with a trait, but they do not account for all of the genetic variance estimated by pedigree-based methods. We explain possible reasons for this ‘missing’ heritability. PMID:23988118
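A minimal sketch of the SNP-based idea, assuming simulated genotypes and a simple maximum-likelihood profile over h² (not the full REML machinery of the reviewed methods):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SNP data (assumed dimensions): n individuals, m SNPs.
n, m = 500, 2000
maf = rng.uniform(0.05, 0.5, m)
X = rng.binomial(2, maf, (n, m)).astype(float)
Z = (X - 2 * maf) / np.sqrt(2 * maf * (1 - maf))   # standardized genotypes
G = Z @ Z.T / m                                    # genomic relationship matrix

# Simulate a trait with true h2 = 0.5 using 100 causal SNPs.
b = np.zeros(m); b[:100] = rng.normal(size=100)
g = Z[:, :100] @ b[:100]; g *= np.sqrt(0.5 / g.var())
y = g + rng.normal(scale=np.sqrt(0.5), size=n)
y = (y - y.mean()) / y.std()

# Profile the likelihood of y ~ N(0, h2*G + (1-h2)*I) over a grid of h2.
w, U = np.linalg.eigh(G)
yt = U.T @ y
def loglik(h2):
    d = h2 * w + (1 - h2)
    return -0.5 * (np.log(d).sum() + (yt**2 / d).sum())

grid = np.linspace(0.01, 0.99, 99)
print("estimated h2 =", grid[np.argmax([loglik(h) for h in grid])])
```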
Iterative Nonlocal Total Variation Regularization Method for Image Restoration
Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen
2013-01-01
In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
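The paper's non-local, adaptively parameterized algorithm is not reproduced here; as a hedged stand-in, scikit-image's split-Bregman TV denoiser illustrates the Bregman-iteration TV step such methods build on:

```python
import numpy as np
from skimage import data, util
from skimage.restoration import denoise_tv_bregman
from skimage.metrics import peak_signal_noise_ratio

img = util.img_as_float(data.camera())
noisy = util.random_noise(img, mode="gaussian", var=0.01)

# Split-Bregman total variation denoising; `weight` trades data fidelity
# against the TV regularizer (larger weight = less smoothing).
restored = denoise_tv_bregman(noisy, weight=8.0, isotropic=True)

print("noisy PSNR   :", peak_signal_noise_ratio(img, noisy))
print("restored PSNR:", peak_signal_noise_ratio(img, restored))
```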
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily and quickly computed through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are evaluated qualitatively and quantitatively to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
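The generalized p-shrinkage mapping mentioned above has a well-known closed form (Chartrand's); a small sketch, which reduces to soft thresholding at p = 1:

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage: sign(x) * max(|x| - lam^(2-p) * |x|^(p-1), 0).
    At p = 1 this is soft thresholding; for p < 1 small coefficients are
    shrunk more aggressively, promoting sparsity."""
    mag = np.abs(x)
    safe = np.where(mag > 0, mag, 1)          # avoid 0**(p-1) warnings
    shrunk = np.maximum(mag - lam**(2 - p) * safe**(p - 1), 0)
    return np.sign(x) * shrunk

x = np.linspace(-3, 3, 7)
print(p_shrink(x, lam=1.0, p=1.0))   # soft thresholding
print(p_shrink(x, lam=1.0, p=0.5))   # sparser: small values are forced to 0
```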
Variation block-based genomics method for crop plants.
Kim, Yul Ho; Park, Hyang Mi; Hwang, Tae-Young; Lee, Seuk Ki; Choi, Man Soo; Jho, Sungwoong; Hwang, Seungwoo; Kim, Hak-Min; Lee, Dongwoo; Kim, Byoung-Chul; Hong, Chang Pyo; Cho, Yun Sung; Kim, Hyunmin; Jeong, Kwang Ho; Seo, Min Jung; Yun, Hong Tai; Kim, Sun Lim; Kwon, Young-Up; Kim, Wook Han; Chun, Hye Kyung; Lim, Sang Jong; Shin, Young-Ah; Choi, Ik-Young; Kim, Young Sun; Yoon, Ho-Sung; Lee, Suk-Ha; Lee, Sunghoon
2014-06-15
In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks that arose through crossing and selection processes. Accordingly, recombination block-based genomics analysis can be an effective approach for screening target loci for agricultural traits. We propose the variation block method, a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. We suggest that the variation block method is an efficient genomics method for recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding.
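A toy sketch of the block-boundary idea under an assumed 0/1 variant encoding (illustrative only; the real pipeline works from short-read variant calls):

```python
import numpy as np

# Toy genotype matrix (assumed encoding): rows = cultivars, columns = ordered
# variant sites; 0 = reference allele, 1 = alternative allele.
geno = np.array([
    [0, 0, 0, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 1, 1],
])

# A block boundary is assumed wherever the column pattern across the cultivar
# set changes: adjacent sites sharing one pattern merge into one variation
# block, mimicking recombination-block detection.
change = np.any(geno[:, 1:] != geno[:, :-1], axis=0)
boundaries = np.flatnonzero(change) + 1
blocks = np.split(np.arange(geno.shape[1]), boundaries)
print([(b[0], b[-1]) for b in blocks])   # (start, end) site index per block
```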
NASA Astrophysics Data System (ADS)
Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le
2018-01-01
The uneven illumination phenomenon reduces the quality of remote sensing images and causes interference in subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of the reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is transformed into finding the optimal solution of the variational model. In order to accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are solved by alternating iteration. Two groups of experiments are implemented on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with a single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. The proposed method can effectively eliminate uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the proposed method using the split Bregman method is more than 10 times faster than the same method using the steepest descent method.
NASA Astrophysics Data System (ADS)
Li, Y. Chao; Ding, Q.; Gao, Y.; Ran, L. Ling; Yang, J. Ru; Liu, C. Yu; Wang, C. Hui; Sun, J. Feng
2014-07-01
This paper proposes a novel multi-beam laser heterodyne method for measuring Young's modulus. Based on the Doppler effect and heterodyne technology, the information on length variation is loaded onto the frequency difference of the multi-beam laser heterodyne signal by frequency modulation of the oscillating mirror; after demodulation of the multi-beam laser heterodyne signal, the method can thus obtain many simultaneous values of the length variation caused by mass variation. By weighted averaging of these values, the length variation is obtained accurately, and the Young's modulus of the sample is then calculated. This novel method is used to simulate the measurement of the Young's modulus of a wire under different masses in MATLAB; the obtained results show that the relative measurement error of this method is just 0.3%.
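A small sketch of the weighted-average step and the final Young's modulus calculation, using E = F·L0/(A·ΔL) with F = m·g; all demodulated values and uncertainties below are hypothetical:

```python
import numpy as np

g, L0 = 9.81, 1.0                  # gravity (m/s^2); initial wire length (m, assumed)
d = 0.5e-3                         # wire diameter (m, assumed)
A = np.pi * d**2 / 4               # cross-sectional area

masses = np.array([0.5, 1.0, 1.5, 2.0])              # added masses (kg)
# Hypothetical demodulated length variations (m) from two heterodyne channels,
# with per-channel uncertainties used as inverse-variance weights.
dL = np.array([[124.8e-6, 249.6e-6, 374.9e-6, 499.5e-6],
               [125.1e-6, 250.2e-6, 375.3e-6, 500.4e-6]])
sigma = np.array([0.3e-6, 0.2e-6])

w = 1 / sigma**2
dL_avg = (w[:, None] * dL).sum(axis=0) / w.sum()     # weighted average per load

E = masses * g * L0 / (A * dL_avg)                   # E = F*L0 / (A*dL)
print("Young's modulus ~", round(E.mean() / 1e9), "GPa")
```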
2018-04-01
The Gibbs Variational Method in Thermodynamics of Equilibrium Plasma: 1. General. ARL-TR-8348, US Army Research Laboratory, April 2018. [Abstract fragments: systems containing ionized gases; "2. Gibbs Method in the Integral Form", per the Gibbs general methodology based on the concept of heterogeneous systems.]
Marinho, V C; Richards, D; Niederman, R
2001-05-01
Variation in health care, and more particularly in dental care, was recently chronicled in a Reader's Digest investigative report. The conclusions of this report are consistent with sound scientific studies conducted in various areas of health care, including dental care, which demonstrate substantial variation in the care provided to patients. This variation in care parallels the certainty with which clinicians and faculty members often articulate strongly held, but very different, opinions. Using a case-based dental scenario, we present systematic evidence-based methods for accessing dental health care information, evaluating this information for validity and importance, and using this information to make informed curricular and clinical decisions. We also discuss barriers inhibiting these systematic approaches to evidence-based clinical decision making and methods for effectively promoting behavior change in health care professionals.
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it is extracted based on the frame difference with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on the spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.
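A minimal sketch of the temporal-saliency step only, using frame differencing with scikit-image's Sauvola threshold (synthetic motion; the spatial saliency and online learning stages are omitted):

```python
import numpy as np
from skimage import data, util
from skimage.filters import threshold_sauvola

# Two consecutive frames (here: a test image plus a shifted copy to fake motion).
frame0 = util.img_as_float(data.camera())
frame1 = np.roll(frame0, 3, axis=1)

# Temporal saliency: absolute frame difference, binarized with Sauvola's
# local adaptive threshold to mark candidate moving-target regions.
diff = np.abs(frame1 - frame0)
mask = diff > threshold_sauvola(diff, window_size=25)
print("candidate moving pixels:", int(mask.sum()))
```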
NASA Astrophysics Data System (ADS)
Li, Yan-Chao; Wang, Chun-Hui; Qu, Yang; Gao, Long; Cong, Hai-Fang; Yang, Yan-Ling; Gao, Jie; Wang, Ao-You
2011-01-01
This paper proposes a novel multi-beam laser heterodyne method for measuring the linear expansion coefficient of metal. Based on the Doppler effect and heterodyne technology, the information on length variation is loaded onto the frequency difference of the multi-beam laser heterodyne signal by frequency modulation of the oscillating mirror; after demodulation of the multi-beam laser heterodyne signal, the method can thus obtain many simultaneous values of the length variation caused by temperature variation. By weighted averaging of these values, the length variation is obtained accurately, and the linear expansion coefficient of the metal is then calculated. This novel method is used to simulate the measurement of the linear expansion coefficient of a metal rod at different temperatures in MATLAB; the obtained results show that the relative measurement error of this method is just 0.4%.
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
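The quality metrics named above have standard definitions; a small sketch with assumed region-of-interest slices (conventions may differ in detail from the paper's):

```python
import numpy as np

def speckle_metrics(img, signal_roi, background_roi):
    """Common OCT despeckling metrics over user-chosen regions of interest
    (ROIs given as slice tuples; definitions follow the usual conventions)."""
    s, b = img[signal_roi], img[background_roi]
    snr = s.mean() / b.std()                                   # signal-to-noise
    enl = s.mean()**2 / s.var()                                # equiv. number of looks
    cnr = (s.mean() - b.mean()) / np.sqrt(s.var() + b.var())   # contrast-to-noise
    return snr, enl, cnr

img = np.random.default_rng(1).gamma(shape=4.0, scale=25.0, size=(256, 256))
print(speckle_metrics(img, (slice(40, 90), slice(40, 90)),
                           (slice(200, 250), slice(200, 250))))
```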
Wu, Gary D; Lewis, James D; Hoffmann, Christian; Chen, Ying-Yu; Knight, Rob; Bittinger, Kyle; Hwang, Jennifer; Chen, Jun; Berkowsky, Ronald; Nessel, Lisa; Li, Hongzhe; Bushman, Frederic D
2010-07-30
Intense interest centers on the role of the human gut microbiome in health and disease, but optimal methods for analysis are still under development. Here we present a study of methods for surveying bacterial communities in human feces using 454/Roche pyrosequencing of 16S rRNA gene tags. We analyzed fecal samples from 10 individuals and compared methods for storage, DNA purification, and sequence acquisition. To assess reproducibility, we compared samples one cm apart on a single stool specimen for each individual. To analyze storage methods, we compared 1) immediate freezing at -80 degrees C, 2) storage on ice for 24 hours, or 3) storage on ice for 48 hours. For DNA purification methods, we tested three commercial kits and bead beating in hot phenol. Variations due to the different methodologies were compared to variation among individuals using two approaches--one based on presence-absence information for bacterial taxa (unweighted UniFrac) and the other taking into account their relative abundance (weighted UniFrac). In the unweighted analysis relatively little variation was associated with the different analytical procedures, and variation between individuals predominated. In the weighted analysis considerable variation was associated with the purification methods. Particularly notable was improved recovery of Firmicutes sequences using the hot phenol method. We also carried out surveys of the effects of different 454 sequencing methods (FLX versus Titanium) and amplification of different 16S rRNA variable gene segments. Based on our findings we present recommendations for protocols to collect, process, and sequence bacterial 16S rDNA from fecal samples--some major points are 1) if feasible, bead-beating in hot phenol or use of the PSP kit improves recovery; 2) storage methods can be adjusted based on experimental convenience; 3) unweighted (presence-absence) comparisons are less affected by lysis method.
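A toy illustration of why presence-absence and abundance-weighted comparisons can disagree; Jaccard and Bray-Curtis distances stand in here for unweighted and weighted UniFrac, which additionally require a phylogenetic tree:

```python
import numpy as np
from scipy.spatial.distance import braycurtis, jaccard

# Two hypothetical samples: relative abundances of 6 taxa.
a = np.array([0.60, 0.25, 0.10, 0.05, 0.00, 0.00])
b = np.array([0.05, 0.20, 0.15, 0.55, 0.05, 0.00])

print("presence/absence distance:", jaccard(a > 0, b > 0))  # small: mostly same taxa
print("abundance-weighted dist. :", braycurtis(a, b))       # large: abundances shift
```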
Augmented classical least squares multivariate spectral analysis
Haaland, David M.; Melgaard, David K.
2004-02-03
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
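A hedged numerical sketch of the augmentation idea (synthetic data; not the patented algorithm in full): calibrate by classical least squares, then append the dominant residual shapes to the pure-spectra matrix before prediction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration set: 2 modeled components + 1 unmodeled interferent.
n, p = 40, 120
K_true = np.abs(rng.normal(size=(3, p)))       # pure-component spectra
C_full = np.abs(rng.normal(size=(n, 3)))       # true concentrations
A = C_full @ K_true + 0.01 * rng.normal(size=(n, p))
C = C_full[:, :2]                              # only 2 components are modeled

# Classical least squares calibration: A ~= C @ K.
K_cls = np.linalg.lstsq(C, A, rcond=None)[0]

# ACLS idea: augment K with the dominant shapes of the calibration residuals,
# so unmodeled spectral variation gets its own "component".
R = A - C @ K_cls
_, _, Vt = np.linalg.svd(R, full_matrices=False)
K_acls = np.vstack([K_cls, Vt[:2]])

# Prediction on new spectra: solve for all components, keep the modeled ones.
A_new = np.abs(rng.normal(size=(5, 3))) @ K_true
C_hat = np.linalg.lstsq(K_acls.T, A_new.T, rcond=None)[0].T[:, :2]
print(C_hat)
```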
Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei
2013-02-01
Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
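A small sketch of the bias-correction ingredient, regressing simulated read depth on GC content with a negative binomial GLM (statsmodels); the HMM segmentation that GENSENG couples to this step is omitted:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated read depth per genomic window with a GC-content bias and a
# duplicated segment (windows 400-499 at copy number 3 instead of 2).
n = 1000
gc = rng.uniform(0.3, 0.6, n)
copy = np.full(n, 2.0); copy[400:500] = 3.0
mu = 50 * (copy / 2) * np.exp(1.5 * (gc - 0.45))     # GC bias inflates depth
depth = rng.negative_binomial(10, 10 / (10 + mu))    # overdispersed counts

# Negative binomial regression of depth on GC (bias-correction step only).
X = sm.add_constant(np.column_stack([gc, gc**2]))
fit = sm.GLM(depth, X, family=sm.families.NegativeBinomial(alpha=0.1)).fit()
ratio = depth / fit.mu                               # bias-corrected depth ratio
print("mean ratio inside duplication :", ratio[400:500].mean())
print("mean ratio outside duplication:", np.delete(ratio, slice(400, 500)).mean())
```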
Scaling up functional traits for ecosystem services with remote sensing: concepts and methods.
Abelleira Martínez, Oscar J; Fremier, Alexander K; Günter, Sven; Ramos Bendaña, Zayra; Vierling, Lee; Galbraith, Sara M; Bosque-Pérez, Nilsa A; Ordoñez, Jenny C
2016-07-01
Ecosystem service-based management requires an accurate understanding of how human modification influences ecosystem processes and these relationships are most accurate when based on functional traits. Although trait variation is typically sampled at local scales, remote sensing methods can facilitate scaling up trait variation to regional scales needed for ecosystem service management. We review concepts and methods for scaling up plant and animal functional traits from local to regional spatial scales with the goal of assessing impacts of human modification on ecosystem processes and services. We focus our objectives on considerations and approaches for (1) conducting local plot-level sampling of trait variation and (2) scaling up trait variation to regional spatial scales using remotely sensed data. We show that sampling methods for scaling up traits need to account for the modification of trait variation due to land cover change and species introductions. Sampling intraspecific variation, stratification by land cover type or landscape context, or inference of traits from published sources may be necessary depending on the traits of interest. Passive and active remote sensing are useful for mapping plant phenological, chemical, and structural traits. Combining these methods can significantly improve their capacity for mapping plant trait variation. These methods can also be used to map landscape and vegetation structure in order to infer animal trait variation. Due to high context dependency, relationships between trait variation and remotely sensed data are not directly transferable across regions. We end our review with a brief synthesis of issues to consider and outlook for the development of these approaches. Research that relates typical functional trait metrics, such as the community-weighted mean, with remote sensing data and that relates variation in traits that cannot be remotely sensed to other proxies is needed. Our review narrows the gap between functional trait and remote sensing methods for ecosystem service management.
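For instance, the community-weighted mean (CWM) mentioned above is a one-line computation once abundances and trait values are in hand (the values below are assumed):

```python
import numpy as np

# Community-weighted mean (CWM): the abundance-weighted average of a trait,
# the plot-level metric most often related to remotely sensed predictors.
abundance = np.array([0.50, 0.30, 0.15, 0.05])   # relative cover per species
leaf_n = np.array([2.1, 1.4, 3.0, 1.9])          # leaf nitrogen (%, assumed)

cwm = np.sum(abundance * leaf_n) / abundance.sum()
print("CWM leaf N =", round(cwm, 3), "%")
```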
An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method
NASA Astrophysics Data System (ADS)
Tang, J.
2012-01-01
Multiple signal classification (MUSIC) algorithms are introduced for estimating the variation period of BL Lac objects. The principle of the MUSIC spectral analysis method and a theoretical analysis of its frequency resolution using analog signals are included. From the literature, we collected effective observational data of the BL Lac object S5 0716+714 in the V, R, and I bands from 1994 to 2008. The light variation periods of S5 0716+714 are obtained by means of the MUSIC spectral analysis method and the periodogram spectral analysis method. There exist two major periods: (3.33±0.08) years and (1.24±0.01) years for all bands. The period estimates from the MUSIC spectral analysis method are compared with those from the periodogram spectral analysis method. MUSIC is a super-resolution algorithm that works with small data lengths and can be used to detect the variation periods of weak signals.
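A compact MUSIC sketch on a uniformly sampled synthetic light curve with the two reported periods (the published analysis used real multi-band data; all settings below are illustrative):

```python
import numpy as np
from scipy.linalg import eigh, toeplitz
from scipy.signal import find_peaks

rng = np.random.default_rng(4)

dt = 0.05                                   # sampling step ("years")
t = np.arange(0, 15, dt)
x = (np.sin(2 * np.pi * t / 3.3) + 0.6 * np.sin(2 * np.pi * t / 1.24)
     + 0.5 * rng.normal(size=t.size))

m, n_sig = 40, 4                            # correlation order; 2 real sinusoids -> 4
r = np.correlate(x, x, "full")[x.size - 1 : x.size - 1 + m] / x.size
_, V = eigh(toeplitz(r))                    # eigenvalues in ascending order
En = V[:, : m - n_sig]                      # noise subspace

freqs = np.linspace(0.05, 1.2, 2000)        # cycles per "year"
a = np.exp(-2j * np.pi * np.outer(np.arange(m), freqs) * dt)  # steering vectors
pseudo = 1 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2     # MUSIC pseudospectrum

pk, _ = find_peaks(pseudo)
top = pk[np.argsort(pseudo[pk])[-2:]]
print("detected periods ~", np.sort(1 / freqs[top]))  # expect ~1.24 and ~3.3 years
```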
Comparison of variational real-space representations of the kinetic energy operator
NASA Astrophysics Data System (ADS)
Skylaris, Chris-Kriton; Diéguez, Oswaldo; Haynes, Peter D.; Payne, Mike C.
2002-08-01
We present a comparison of real-space methods based on regular grids for electronic structure calculations that are designed to have basis set variational properties, using as a reference the conventional method of finite differences (a real-space method that is not variational) and the reciprocal-space plane-wave method which is fully variational. We find that a definition of the finite-difference method [P. Maragakis, J. Soler, and E. Kaxiras, Phys. Rev. B 64, 193101 (2001)] satisfies one of the two properties of variational behavior at the cost of larger errors than the conventional finite-difference method. On the other hand, a technique which represents functions in a number of plane waves which is independent of system size closely follows the plane-wave method and therefore also the criteria for variational behavior. Its application is only limited by the requirement of having functions strictly localized in regions of real space, but this is a characteristic of an increasing number of modern real-space methods, as they are designed to have a computational cost that scales linearly with system size.
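The non-variational character of finite differences can be seen directly in the free-particle dispersion: the 3-point finite-difference kinetic energy never exceeds the exact plane-wave value, so total energies can fall below the true minimum. A small check:

```python
import numpy as np

# Free particle on a periodic grid: the plane-wave (exact) kinetic energy is
# k^2/2, while the 3-point finite-difference Laplacian gives (1-cos(k*h))/h^2.
N, L = 64, 10.0
h = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

t_exact = 0.5 * k**2                       # plane-wave representation
t_fd = (1 - np.cos(k * h)) / h**2          # eigenvalues of -0.5 * FD Laplacian

print("max exact T :", t_exact.max())
print("max FD T    :", t_fd.max())
print("FD <= exact everywhere:", np.all(t_fd <= t_exact + 1e-12))
```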
Song, Junqiang; Leng, Hongze; Lu, Fengshun
2014-01-01
We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303
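A minimal sketch of the plain variational iteration method (without the FLF operational matrices) on the test problem u' + u = 0, u(0) = 1, where the Lagrange multiplier is λ = -1:

```python
import sympy as sp

t, s = sp.symbols("t s")

# Correction functional: u_{n+1}(t) = u_n(t) - Integral_0^t [u_n'(s) + u_n(s)] ds
u = sp.Integer(1)                       # initial guess u_0 = u(0) = 1
for _ in range(6):
    integrand = (sp.diff(u, t) + u).subs(t, s)
    u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))

print(u)                                # truncated Taylor series of exp(-t)
print(sp.series(sp.exp(-t), t, 0, 7))   # reference expansion
```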
Total variation approach for adaptive nonuniformity correction in focal-plane arrays.
Vera, Esteban; Meza, Pablo; Torres, Sergio
2011-01-15
In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on the minimization of the total variation of the estimated irradiance, and the resulting function is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonunifomity parameters of each detector in the focal-plane array at a fast convergence rate, while also forming fewer ghosting artifacts.
FROG - Fingerprinting Genomic Variation Ontology
Bhardwaj, Anshu
2015-01-01
Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity of establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search, and analyze these data. A semantic approach, FROG: “FingeRprinting Ontology of Genomic variations”, is implemented to label variation data based on location, function, and interactions. FROG has six levels to describe the variation annotation, namely chromosome, DNA, RNA, protein, variations, and interactions. Each level is a conceptual aggregation of logically connected attributes, each of which comprises various properties for the variant. For example, at the chromosome level, one of the attributes is the location of variation, which has two properties, allosomes or autosomes. Another attribute is the variation kind, which has four properties, namely indel, deletion, insertion, and substitution. Altogether, there are 48 attributes and 278 properties to capture the variation annotation across the six levels. Each property is then assigned a bit score, which in turn leads to the generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for the purpose of labeling the entire variation data generated to date for efficient storage, search, and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog. PMID:26244889
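A toy sketch of the fingerprinting idea with an invented three-attribute schema (the real FROG schema has 48 attributes and 278 properties):

```python
# Invented schema for illustration only.
SCHEMA = [
    ("chromosome.location", ["autosome", "allosome"]),
    ("variation.kind", ["indel", "deletion", "insertion", "substitution"]),
    ("protein.effect", ["synonymous", "missense", "nonsense"]),
]

# Assign every (attribute, property) pair a fixed bit position.
BITS, n_bits = {}, 0
for attr, props in SCHEMA:
    for prop in props:
        BITS[(attr, prop)] = n_bits
        n_bits += 1

def fingerprint(annotations):
    """Turn a variant's annotations into a binary fingerprint string."""
    bits = ["0"] * n_bits
    for attr, prop in annotations.items():
        bits[BITS[(attr, prop)]] = "1"
    return "".join(bits)

variant = {"chromosome.location": "autosome",
           "variation.kind": "substitution",
           "protein.effect": "missense"}
print(fingerprint(variant))   # -> 100001010
```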
A Calibration Method for Nanowire Biosensors to Suppress Device-to-device Variation
Ishikawa, Fumiaki N.; Curreli, Marco; Chang, Hsiao-Kang; Chen, Po-Chiang; Zhang, Rui; Cote, Richard J.; Thompson, Mark E.; Zhou, Chongwu
2009-01-01
Nanowire/nanotube biosensors have stimulated significant interest; however, the inevitable device-to-device variation in biosensor performance remains a great challenge. We have developed an analytical method to calibrate nanowire biosensor responses that can significantly suppress the device-to-device variation in sensing response. The method is based on our discovery of a strong correlation between the biosensor gate dependence (dIds/dVg) and the absolute response (absolute change in current, ΔI). In2O3 nanowire-based biosensors for streptavidin detection were used as the model system. Studying the liquid gate effect and the ionic concentration dependence of streptavidin sensing indicates that electrostatic interaction is the dominant mechanism for the sensing response. Based on this sensing mechanism and transistor physics, a linear correlation between the absolute sensor response (ΔI) and the gate dependence (dIds/dVg) is predicted and confirmed experimentally. Using this correlation, a calibration method was developed in which the absolute response is divided by dIds/dVg for each device, and the calibrated responses from different devices behaved almost identically. Compared to the common normalization method (normalization of the conductance/resistance/current by the initial value), this calibration method was proved advantageous using a conventional transistor model. The method presented here substantially suppresses device-to-device variation, allowing the use of nanosensors in large arrays. PMID:19921812
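The calibration itself is a single division per device; a sketch with hypothetical measurements showing how the spread collapses:

```python
import numpy as np

# Hypothetical measurements from five nanowire devices: absolute response
# dI (A) to the same analyte, and transconductance dIds/dVg (A/V).
dI = np.array([2.0e-7, 5.1e-7, 1.1e-7, 3.4e-7, 4.2e-7])
gm = np.array([1.0e-6, 2.5e-6, 0.55e-6, 1.7e-6, 2.1e-6])

calibrated = dI / gm   # calibration: divide response by gate dependence
print("raw spread       :", dI.std() / dI.mean())
print("calibrated spread:", calibrated.std() / calibrated.mean())
```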
Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.
ERIC Educational Resources Information Center
Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.
This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…
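One of the compared approaches, Horn's parallel analysis, is easy to sketch: retain factors whose observed eigenvalues exceed those of random data (a common variant; implementations differ in detail):

```python
import numpy as np

def parallel_analysis(data, n_iter=200, seed=0):
    """Horn's parallel analysis: compare observed correlation-matrix
    eigenvalues against the mean eigenvalues of same-shaped random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        r = rng.normal(size=(n, p))
        rand += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    rand /= n_iter
    return int(np.sum(obs > rand))

rng = np.random.default_rng(1)
F = rng.normal(size=(300, 2))                        # two latent factors
X = F @ rng.normal(size=(2, 8)) + rng.normal(size=(300, 8))
print("factors to retain:", parallel_analysis(X))    # expect 2
```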
A visual tracking method based on deep learning without online model updating
NASA Astrophysics Data System (ADS)
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
Adaptive variational mode decomposition method for signal processing based on mode characteristic
NASA Astrophysics Data System (ADS)
Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng
2018-07-01
Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits its adaptability, since a large deviation in the preset mode number causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to determine the mode number automatically, based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons were also conducted to evaluate its performance against VMD, EMD, and EWT. The results indicate that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.
Image denoising by a direct variational minimization
NASA Astrophysics Data System (ADS)
Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan
2011-12-01
In this article we introduce a novel method for image denoising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is performed on every predefined patch of the image independently. By doing so, we avoid the use of an artificial-time PDE model, with its inherent problems of finding the optimal stopping time as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator while keeping degradation minimal, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results comparing the proposed method with several PDE-based methods, obtaining significantly better denoising results, especially in oscillatory regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances data fidelity and the prior-image-constrained total generalized variation of the reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses a high-quality image to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods applied to images with first-order derivatives, higher-order derivatives of the images are incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV in suppressing noise and artifacts using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error of the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT, develop an alternating optimization algorithm for it, and numerically demonstrate the merits of the approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
Traditional and modern plant breeding methods with examples in rice (Oryza sativa L.).
Breseghello, Flavio; Coelho, Alexandre Siqueira Guedes
2013-09-04
Plant breeding can be broadly defined as alterations caused in plants as a result of their use by humans, ranging from unintentional changes resulting from the advent of agriculture to the application of molecular tools for precision breeding. The vast diversity of breeding methods can be simplified into three categories: (i) plant breeding based on observed variation by selection of plants based on natural variants appearing in nature or within traditional varieties; (ii) plant breeding based on controlled mating by selection of plants presenting recombination of desirable genes from different parents; and (iii) plant breeding based on monitored recombination by selection of specific genes or marker profiles, using molecular tools for tracking within-genome variation. The continuous application of traditional breeding methods in a given species could lead to the narrowing of the gene pool from which cultivars are drawn, rendering crops vulnerable to biotic and abiotic stresses and hampering future progress. Several methods have been devised for introducing exotic variation into elite germplasm without undesirable effects. Cases in rice are given to illustrate the potential and limitations of different breeding approaches.
Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C
2011-12-01
Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and allows adjustment for covariates. Based on a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with regard to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and the presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models also had lower bias and SD than the geometrical model for data simulated to deviate from the corresponding model assumptions. This simulation study encourages the use of Poisson regression models, as opposed to the geometrical model, in estimating the peak-to-trough ratio of seasonal variation.
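A short sketch of the log-linear Poisson (cosinor) estimator on simulated monthly counts, where the peak-to-trough ratio is exp(2·amplitude) for a single harmonic (Python/statsmodels stands in here for the paper's R package):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Monthly event counts over 10 years with a true peak-to-trough ratio of 1.5.
months = np.arange(120)
theta = 2 * np.pi * months / 12
lam = 100 * np.exp(0.2027 * np.cos(theta))          # exp(2*0.2027) ~= 1.5
y = rng.poisson(lam)

# Log-linear Poisson regression with one harmonic (cosinor model).
X = sm.add_constant(np.column_stack([np.cos(theta), np.sin(theta)]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
amp = np.hypot(fit.params[1], fit.params[2])
print("peak-to-trough ratio =", np.exp(2 * amp))    # expect ~1.5
```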
NASA Astrophysics Data System (ADS)
Huang, Wei; Chen, Xiu; Wang, Yueyun
2018-03-01
Landsat data are widely used in various earth observations, but clouds interfere with the applications of the images. This paper proposes a weighted variational gradient-based fusion method (WVGBF) for high-fidelity thin cloud removal from Landsat images, which is an improvement of the variational gradient-based fusion (VGBF) method. The VGBF method integrates the gradient information from a reference band into the visible bands of the cloudy image to recover spatial details and remove thin clouds. However, VGBF applies the same gradient constraint to the entire image, which causes color distortion in cloudless areas. In our method, a weight coefficient is introduced into the gradient approximation term to ensure the fidelity of the image. The distribution of the weight coefficient is related to a cloud thickness map, which is built with Independent Component Analysis (ICA) using multi-temporal Landsat images. Quantitatively, we use the R value to evaluate fidelity in the cloudless regions and the metric Q to evaluate clarity in the cloud areas. The experimental results indicate that the proposed method better removes thin cloud while achieving high fidelity.
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.
2018-06-01
We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.
Gallego, Sergi; Márquez, Andrés; Méndez, David; Ortuño, Manuel; Neipp, Cristian; Fernández, Elena; Pascual, Inmaculada; Beléndez, Augusto
2008-05-10
One of the problems associated with photopolymers as optical recording media is thickness variation during the recording process. Different values of shrinkage or swelling are reported in the literature for photopolymers. Furthermore, these variations depend on the spatial frequencies of the gratings stored in the materials. Thickness variations can be measured using different methods: studying the deviation from the Bragg angle for nonslanted gratings, using a MicroXAM S/N 8038 interferometer, or by thermomechanical analysis experiments. In a previous paper, we began the characterization of the properties of a polyvinyl alcohol/acrylamide-based photopolymer at the lowest end of recorded spatial frequencies. In this work, we continue analyzing the thickness variations of these materials using a reflection interferometer. With this technique we are able to obtain the variations of the layer's refractive index and, therefore, a direct estimation of the polymer refractive index.
Nonperturbative calculations in the framework of variational perturbation theory in QCD
NASA Astrophysics Data System (ADS)
Solovtsova, O. P.
2017-07-01
We discuss applications of the method based on the variational perturbation theory to perform calculations down to the lowest energy scale. The variational series is different from the conventional perturbative expansion and can be used to go beyond the weak-coupling regime. We apply this method to investigate the Borel representation of the light Adler function constructed from the τ data and to determine the residual condensates. It is shown that within the method suggested the optimal values of these lower dimension condensates are close to zero.
NASA Astrophysics Data System (ADS)
Sumihara, K.
Based upon legitimate variational principles, a microscopic-macroscopic finite element formulation for linear dynamics is presented using the hybrid stress finite element method. The microscopic application of the geometric perturbation introduced by Pian and the introduction of an infinitesimal-limit core element (baby element) are consistently combined according to the flexible and inherent interpretation of the legitimate variational principles originally developed by Pian and Tong. The conceptual development based upon the hybrid finite element method is extended to linear dynamics with the introduction of physically meaningful higher modes.
Optimal filtering and Bayesian detection for friction-based diagnostics in machines.
Ray, L R; Townsend, J R; Ramasubramanian, A
2001-01-01
Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.
Selecting Magnet Laminations Recipes Using the Method of Simulated Annealing
NASA Astrophysics Data System (ADS)
Russell, A. D.; Baiod, R.; Brown, B. C.; Harding, D. J.; Martin, P. S.
1997-05-01
The Fermilab Main Injector project is building 344 dipoles using more than 7000 tons of steel. Budget and logistical constraints required that steel production, lamination stamping and magnet fabrication proceed in parallel. There were significant run-to-run variations in the magnetic properties of the steel (Martin, P.S., et al., Variations in the Steel Properties and the Excitation Characteristics of FMI Dipoles, this conference). The large lamination size (>0.5 m coil opening) resulted in variations of gap height due to differences in stress relief in the steel after stamping. To minimize magnet-to-magnet strength and field shape variations the laminations were shuffled based on the available magnetic and mechanical data and assigned to magnets using a computer program based on the method of simulated annealing. The lamination sets selected by the program have produced magnets which easily satisfy the design requirements. Variations of the average magnet gap are an order of magnitude smaller than the variations in lamination gaps. This paper discusses observed gap variations, the program structure and the strength uniformity results.
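An illustrative simulated-annealing sketch of the assignment problem (hypothetical gap data; the actual Fermilab program also used magnetic-property data and more elaborate moves):

```python
import math, random

random.seed(0)

# Hypothetical per-lamination gap deviations (um): 200 laminations, 8 magnets.
laminations = [random.gauss(0, 5) for _ in range(200)]
n_magnets, per_magnet = 8, 25
assign = list(range(200))            # slot i holds lamination assign[i]

def spread(a):
    """Max-min of the per-magnet mean gap deviation (quantity to minimize)."""
    means = [sum(laminations[j] for j in a[m * per_magnet:(m + 1) * per_magnet])
             / per_magnet for m in range(n_magnets)]
    return max(means) - min(means)

T = 1.0
for _ in range(20000):
    i, j = random.randrange(200), random.randrange(200)
    before = spread(assign)
    assign[i], assign[j] = assign[j], assign[i]          # propose a swap
    delta = spread(assign) - before
    if delta > 0 and random.random() > math.exp(-delta / T):
        assign[i], assign[j] = assign[j], assign[i]      # reject: undo the swap
    T *= 0.9997                                          # cooling schedule
print("final magnet-to-magnet spread (um):", round(spread(assign), 4))
```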
Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor
2015-01-01
We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors for defining distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns with linear transformations derived from the metric tensor yields a dataset which can be classified more efficiently. We test our method on publicly available datasets with several standard classifiers. Among these datasets, two were tested in the MAQC-II project and, even without further preprocessing, our results improve on the performance reported there.
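A minimal sketch in the spirit of this approach follows: features are weighted by a between-class versus within-class scatter ratio and the data are linearly transformed before classification. The diagonal weighting shown is an illustrative stand-in for the paper's analytically derived metric tensor, not the Suvrel formula itself.

    import numpy as np

    def metric_weights(X, y, eps=1e-12):
        """Diagonal metric: weight each feature by between-class vs.
        within-class scatter (a surrogate for the analytic metric tensor)."""
        overall = X.mean(axis=0)
        within = np.zeros(X.shape[1])
        between = np.zeros(X.shape[1])
        for c in np.unique(y):
            Xc = X[y == c]
            within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
            between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        return np.sqrt(between / (within + eps))

    def transform(X, w):
        # Linear transformation of the patterns; any Euclidean-distance
        # classifier (e.g., k-NN) can then be applied to the new dataset.
        return X * w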
Base oils and methods for making the same
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohler, Nicholas; Fisher, Karl; Tirmizi, Shakeel
Provided herein are isoparaffins derived from hydrocarbon terpenes such as myrcene, ocimene and farnesene, and methods for making the same. In certain variations, the isoparaffins have utility as lubricant base stocks.
A Decision-Based Modified Total Variation Diffusion Method for Impulse Noise Removal
Zhu, Qingxin; Song, Xiuli; Tao, Jinsong
2017-01-01
Impulse noise removal usually employs median filtering, switching median filtering, the total variation L1 method, and their variants. These approaches, however, often introduce excessive smoothing and can result in extensive blurring of visual features, and are thus suitable only for images with low-density noise. A new method to remove noise is proposed in this paper to overcome this limitation; it divides pixels into different categories based on their noise characteristics. If an image is corrupted by salt-and-pepper noise, the pixels are divided into corrupted and noise-free; if the image is corrupted by random-valued impulses, the pixels are divided into corrupted, noise-free, and possibly corrupted. Pixels falling into different categories are processed differently: if a pixel is corrupted, modified total variation diffusion is applied; if the pixel is possibly corrupted, weighted total variation diffusion is applied; otherwise, the pixel is left unchanged. Experimental results show that the proposed method is robust to different noise strengths and suitable for different images, with strong noise removal capability as indicated by PSNR/SSIM results as well as the visual quality of restored images. PMID:28536602
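The decision-based structure can be sketched for the salt-and-pepper case: classify extreme-valued pixels as corrupted and diffuse only those, leaving noise-free pixels untouched. Plain Laplacian diffusion stands in here for the paper's modified total variation term, so this is illustrative only.

    import numpy as np

    def remove_salt_pepper(img, n_iter=50, lam=0.2):
        """Diffuse only pixels detected as corrupted; keep the rest unchanged."""
        u = img.astype(float)
        corrupted = (u == 0) | (u == 255)        # salt-and-pepper detection
        for _ in range(n_iter):
            # 4-neighbour Laplacian (periodic boundaries via np.roll, for brevity).
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
            u[corrupted] += lam * lap[corrupted] # noise-free pixels stay intact
        return u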
Predictive modeling and reducing cyclic variability in autoignition engines
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
Variation and Defect Tolerance for Nano Crossbars
NASA Astrophysics Data System (ADS)
Tunc, Cihan
With the extreme shrinking of CMOS technology, quantum effects and manufacturing issues are becoming more crucial; further reduction of CMOS feature size appears increasingly challenging, difficult, and costly. On the other hand, emerging nanotechnology has attracted many researchers, since further scaling has been demonstrated by manufacturing nanowires, carbon nanotubes, and molecular switches using bottom-up manufacturing techniques. In addition to this progress in manufacturing, developments in architecture show that emerging nanoelectronic devices will be promising for future system designs. Using nano crossbars, which are composed of two sets of perpendicular nanowires with programmable intersections, it is possible to implement logic functions. Nano crossbars also present important features such as regularity, reprogrammability, and interchangeability, and by combining these features researchers have presented different effective architectures. Although bottom-up nanofabrication can greatly reduce manufacturing costs, the low controllability of the manufacturing process raises critical issues: the bottom-up process results in high variation compared to the conventional top-down lithography used in CMOS technology, and an increased failure rate is expected. Variation and defect tolerance methods used for conventional CMOS technology appear inadequate for emerging nanotechnology, because the variation and defect rates are much higher than in current CMOS technology; variation and defect tolerance methods designed for emerging nanotechnology are therefore necessary for a successful transition. In this work, in order to tolerate variations in crossbars, we introduce a framework established on the reprogrammability and interchangeability features of nano crossbars. This framework is shown to be applicable to both FET-based and diode-based nano crossbars. We present a characterization testing method which requires a minimal number of test vectors. We formulate the variation optimization problem using simulated annealing with different optimization goals. Furthermore, we extend the framework to defect tolerance. Experimental results and comparison of the proposed framework with exhaustive methods confirm its effectiveness for both variation and defect tolerance.
NASA Astrophysics Data System (ADS)
Suhartono; Lee, Muhammad Hisyam; Prastyo, Dedy Dwi
2015-12-01
The aim of this research is to develop a calendar variation model for forecasting retail sales data subject to the Eid ul-Fitr effect. The proposed model combines two methods, namely two-level ARIMAX and regression: ARIMAX is used for the first level and regression for the second level. Monthly sales of men's jeans and women's trousers in a retail company over the period January 2002 to September 2009 are used as a case study. In general, the two-level calendar variation model yields two models: the first reconstructs the sales pattern that has already occurred, and the second forecasts the increase in sales due to Eid ul-Fitr, which affects sales in the same and the previous month. The results show that the proposed two-level calendar variation model based on ARIMAX and regression yields better forecasts than the seasonal ARIMA model and neural networks.
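One common way to combine the two ingredients is sketched below: a regression stage captures the calendar (Eid ul-Fitr) effect and an ARIMA stage models the remaining autocorrelated structure. The paper stages its two levels differently (ARIMAX first, regression second), and the series and dummy names are assumptions, so this is a loose illustration rather than the authors' exact procedure.

    import statsmodels.api as sm

    def two_level_forecast(sales, eid, arima_order=(1, 1, 1)):
        """sales: pandas Series of monthly sales; eid: 0/1 dummy marking
        months affected by Eid ul-Fitr (both names are assumptions)."""
        # Stage 1: regression captures the calendar-variation effect.
        X = sm.add_constant(eid.astype(float))
        level1 = sm.OLS(sales, X).fit()
        residuals = sales - level1.fittedvalues
        # Stage 2: ARIMA models the remaining autocorrelated structure.
        level2 = sm.tsa.ARIMA(residuals, order=arima_order).fit()
        return level1, level2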
Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion
2016-07-20
PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical...combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the...strategy of modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on
Numerical realization of the variational method for generating self-trapped beams
NASA Astrophysics Data System (ADS)
Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.
2018-03-01
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
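The core idea, evaluating the variational integrals numerically instead of analytically, can be illustrated in one dimension: compute the NLS Hamiltonian of a Gaussian trial beam on a grid and minimize it over the trial width at fixed power. The ansatz, the cubic nonlinearity, and the chosen power are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    x = np.linspace(-30, 30, 4001)
    P = 4.0                                    # fixed beam power, integral |u|^2 dx

    def energy(w):
        A = np.sqrt(P / (np.sqrt(np.pi) * w))  # amplitude fixed by the power constraint
        u = A * np.exp(-x**2 / (2 * w**2))     # Gaussian trial beam of width w
        du = np.gradient(u, x)
        kinetic = np.trapz(du**2, x)           # diffraction term
        quartic = np.trapz(u**4, x)            # focusing cubic nonlinearity
        return kinetic - 0.5 * quartic         # NLS Hamiltonian, evaluated numerically

    res = minimize_scalar(energy, bounds=(0.5, 20.0), method="bounded")
    print("optimal width:", res.x)             # analytic check: 2*sqrt(2*pi)/P ~ 1.25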
NASA Astrophysics Data System (ADS)
Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy
2016-11-01
The entropy-variation of a battery is responsible for heat generation or consumption during operation, and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method, which is considered the reference; however, that method requires several days or weeks to produce a look-up table with a 5 or 10% SoC (state of charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained at several current rates to measurements made with the potentiometric method.
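A hedged sketch of the underlying heat bookkeeping, with notation ours: writing the heat rate as an irreversible Joule term plus a reversible entropic term that changes sign with the current direction, and assuming the irreversible contribution is the same on charge and discharge at a given state of charge,

    q ≈ R_int·I² + I·T·(∂U_ocv/∂T),        ∂U_ocv/∂T ≈ (q_cha − q_dis) / (2·|I|·T),

so half the difference between charging and discharging heats isolates the entropic contribution, from which the entropy-variation follows.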
LINEAR LATTICE AND TRAJECTORY RECONSTRUCTION AND CORRECTION AT FAST LINEAR ACCELERATOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, A.; Edstrom, D.; Halavanau, A.
2017-07-16
The low-energy part of the FAST linear accelerator, based on 1.3 GHz superconducting RF cavities, was successfully commissioned [1]. During commissioning, beam-based model-dependent methods were used to correct the linear lattice and trajectory. The lattice correction algorithm is based on analysis of beam shapes from profile monitors and of trajectory responses to dipole correctors. Trajectory responses to field gradient variations in quadrupoles and phase variations in superconducting RF cavities were used to correct bunch offsets in quadrupoles and accelerating cavities relative to their magnetic axes. Details of the methods used and experimental results are presented.
NASA Astrophysics Data System (ADS)
Wong, Kin-Yiu; Gao, Jiali
2007-12-01
Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good agreement with experiments. We hope that our method can be used by non-path-integral experts or experimentalists as a "black box" for any given system.
An Evaluation Method of Words Tendency Depending on Time-Series Variation and Its Improvements.
ERIC Educational Resources Information Center
Atlam, El-Sayed; Okada, Makoto; Shishibori, Masami; Aoe, Jun-ichi
2002-01-01
Discussion of word frequency and keywords in text focuses on a method to estimate automatically the stability classes that indicate a word's popularity with time-series variations based on the frequency change in past electronic text data. Compares the evaluation of decision tree stability class results with manual classification results.…
Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods
NASA Astrophysics Data System (ADS)
Liu, Qinya; Tromp, Jeroen
2008-07-01
We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫V Km δlnm d³x + ∫Σ Kd δlnd d²x + ∫ΣFS K∇d · ∇Σδlnd d²x, where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and K∇d determine the sensitivity to topographic variations δlnd and their surface gradients. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.
Microfluidic-Based Measurement Method of Red Blood Cell Aggregation under Hematocrit Variations
2017-01-01
Red blood cell (RBC) aggregation and erythrocyte sedimentation rate (ESR) are considered to be promising biomarkers for effectively monitoring blood rheology at extremely low shear rates. In this study, a microfluidic-based measurement technique is suggested to evaluate RBC aggregation under the hematocrit variations caused by the continuous ESR. After the pipette tip is tightly fitted into an inlet port, a disposable suction pump is connected to the outlet port through a polyethylene tube. After dropping blood (approximately 0.2 mL) into the pipette tip, the blood flow can be started and stopped by periodically operating a pinch valve. To evaluate variations in RBC aggregation due to the continuous ESR, a new index, the EAI (erythrocyte-sedimentation-rate aggregation index), is suggested, based on temporal variations of image intensity. To demonstrate the proposed method, the dynamic characteristics of the disposable suction pump are first quantified while varying the hematocrit levels and the cavity volume of the suction pump. Next, variations in RBC aggregation and ESR are quantified by varying the hematocrit levels. The conventional aggregation index (AI) remains constant, unrelated to the hematocrit values, whereas the EAI decreases significantly with respect to the hematocrit values; the EAI is thus more effective than the AI for monitoring variations in RBC aggregation due to the ESR. Lastly, the proposed method is employed to detect aggregated blood and thermally-induced blood. The EAI gradually increases as the concentration of a dextran solution increases, and decreases significantly for thermally-induced blood. This experimental demonstration shows that the proposed method can effectively measure variations in RBC aggregation under continuous hematocrit variations, especially by quantifying the EAI. PMID:28878199
Wang, Gang; Zhao, Zhikai; Ning, Yongjie
2018-05-28
With the application of the Internet of Things (IoT) in coal mines, mobile measurement devices, such as intelligent mine lamps, generate increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for large amounts of coal mine IoT moving measurement data based on a multi-hop network and total variation. Taking gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method, Total Variation Sparsity based on Multi-Hop (TVS-MH), is proposed. According to the simulation results, by using the proposed method the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
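A minimal sketch of the total-variation recovery step, assuming a piecewise-constant gas signal observed through random projections; the sensing matrix, sizes, and solver defaults are illustrative, and the paper's TVS-MH scheme additionally exploits the multi-hop network structure.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    n, m = 200, 60                                # signal length, measurements
    x_true = np.zeros(n)
    x_true[40:90] = 1.2                           # piecewise-constant (TV-sparse) signal
    x_true[130:150] = 0.6
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
    y = A @ x_true                                # compressed measurements

    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.tv(x)), [A @ x == y]).solve()
    print("reconstruction error:", np.linalg.norm(x.value - x_true))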
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
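For reference, the exact approach alluded to above can be sketched: since √n·x̄/s follows a noncentral t distribution with noncentrality √n/CV, a confidence interval for the CV follows from inverting that distribution. The bracketing interval and the sample below are illustrative.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import nct

    def cv_confint(x, alpha=0.05):
        """Exact-style CI for the CV of a normal sample via the noncentral t."""
        n = len(x)
        t_obs = np.sqrt(n) * np.mean(x) / np.std(x, ddof=1)
        hi = 10.0 * abs(t_obs) + 100.0          # generous bracketing interval
        f_up = lambda nc: nct.cdf(t_obs, df=n - 1, nc=nc) - (1 - alpha / 2)
        f_lo = lambda nc: nct.cdf(t_obs, df=n - 1, nc=nc) - alpha / 2
        nc_lo, nc_hi = brentq(f_up, 1e-6, hi), brentq(f_lo, 1e-6, hi)
        return np.sqrt(n) / nc_hi, np.sqrt(n) / nc_lo   # (lower, upper) CV bounds

    sample = np.random.default_rng(2).normal(10.0, 2.0, size=25)  # true CV = 0.2
    print(cv_confint(sample))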
Identification and ranking of environmental threats with ecosystem vulnerability distributions.
Zijp, Michiel C; Huijbregts, Mark A J; Schipper, Aafke M; Mulder, Christian; Posthuma, Leo
2017-08-24
Responses of ecosystems to human-induced stress vary in space and time, because both stressors and ecosystem vulnerabilities vary in space and time. Presently, ecosystem impact assessments mainly take into account variation in stressors, without considering variation in ecosystem vulnerability. We developed a method to address ecosystem vulnerability variation by quantifying ecosystem vulnerability distributions (EVDs) based on monitoring data of local species compositions and environmental conditions. The method incorporates spatial variation of both abiotic and biotic variables to quantify variation in responses among species and ecosystems. We show that EVDs can be derived based on a selection of locations, existing monitoring data and a selected impact boundary, and can be used in stressor identification and ranking for a region. A case study on Ohio's freshwater ecosystems, with freshwater fish as target species group, showed that physical habitat impairment and nutrient loads ranked highest as current stressors, with species losses higher than 5% for at least 6% of the locations. EVDs complement existing approaches of stressor assessment and management, which typically account only for variability in stressors, by accounting for variation in the vulnerability of the responding ecosystems.
NASA Technical Reports Server (NTRS)
Shiau, Jyh-Jen; Wahba, Grace; Johnson, Donald R.
1986-01-01
A new method, based on partial spline models, is developed for including specified discontinuities in otherwise smooth two- and three-dimensional objective analyses. The method is appropriate for including tropopause height information in two- and three-dimensional temperature analyses, using the O'Sullivan-Wahba physical variational method for analysis of satellite radiance data, and may in principle be used in a combined variational analysis of observed, forecast, and climate information. A numerical method for its implementation is described, and a prototype two-dimensional analysis based on simulated radiosonde and tropopause height data is shown. The method may also be appropriate for other geophysical problems, such as modeling the ocean thermocline, fronts, discontinuities, etc.
GEMINI: Integrative Exploration of Genetic Variation and Genome Annotations
Paila, Umadevi; Chapman, Brad A.; Kirchner, Rory; Quinlan, Aaron R.
2013-01-01
Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and adaptable set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate GEMINI's utility for exploring variation in personal genomes and family based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility and our goal is to provide researchers with a standard framework for medical genomics. PMID:23874191
Estimating nonrigid motion from inconsistent intensity with robust shape features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095
2013-12-15
Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.
Rotational stellar structures based on the Lagrangian variational principle
NASA Astrophysics Data System (ADS)
Yasutake, Nobutoshi; Fujisawa, Kotaro; Yamada, Shoichi
2017-06-01
A new method for computing multi-dimensional stellar structures is proposed in this study. For stellar evolution calculations, the Henyey method is the de facto standard, but it basically assumes spherical symmetry. One of the difficulties in deformed stellar-evolution calculations is tracing the potentially complex movements of each fluid element. Our new method is well suited to follow such movements, since it is based on Lagrangian coordinates. The scheme is also based on the variational principle, which has been adopted in studies of the pasta structures inside neutron stars. Our scheme could be a major breakthrough for evolution calculations of any type of deformed star: proto-planets, proto-stars, proto-neutron stars, etc.
A Continuous Variation Study of Heats of Neutralization.
ERIC Educational Resources Information Center
Mahoney, Dennis W.; And Others
1981-01-01
Suggests that students study heats of neutralization of a 1 M solution of an unknown acid by a 1 M solution of a strong base using the method of continuous variation. Reviews results using several common acids. (SK)
Zhang, Yanyan; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen; Wang, Qian; Wang, Jun; Guo, Yunzhu; Yin, Dachuan
2012-07-30
We report a real-time method for measuring solution concentration variation during the growth of lysozyme protein crystals, based on digital holographic interferometry. A series of holograms containing information on the solution concentration variation throughout the crystallization process is recorded by a CCD. Based on the principle of double-exposure holographic interferometry and the relationship between the phase difference of the reconstructed object wave and the solution concentration, the variation of concentration with time at an arbitrary point in the solution can be obtained, and the two-dimensional concentration distribution of the solution during the crystallization process can then be figured out, under the precondition that the refractive index is constant along the light propagation direction. The experimental results show that it is feasible to monitor the crystal growth process in situ, full-field, and in real time using this method.
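The relation behind the measurement can be stated compactly, assuming a known refractive-index increment dn/dC and an optical path length L through the cell (notation ours):

    Δφ(x, y) = (2πL/λ)·Δn(x, y),   Δn = (dn/dC)·ΔC   ⇒   ΔC(x, y) = λ·Δφ(x, y) / (2πL·(dn/dC)),

so each reconstructed phase-difference map converts pixel-wise into a concentration-change map.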
NASA Astrophysics Data System (ADS)
Diffey, Jenny; Berks, Michael; Hufton, Alan; Chung, Camilla; Verow, Rosanne; Morrison, Joanna; Wilson, Mary; Boggis, Caroline; Morris, Julie; Maxwell, Anthony; Astley, Susan
2010-04-01
Breast density is positively linked to the risk of developing breast cancer. We have developed a semi-automated, stepwedge-based method that has been applied to the mammograms of 1,289 women in the UK breast screening programme to measure breast density by volume and area. 116 images were analysed by three independent operators to assess inter-observer variability; 24 of these were analysed on 10 separate occasions by the same operator to determine intra-observer variability. 168 separate images were analysed using the stepwedge method and by two radiologists, who independently estimated percentage breast density by area. There was little intra-observer variability in the stepwedge method (average coefficients of variation 3.49%-5.73%). There were significant differences in the volumes of glandular tissue obtained by the three operators, attributed to variations in the operators' definition of the breast edge. For fatty and dense breasts, there was good correlation between breast density assessed by the stepwedge method and by the radiologists. This was also observed between the radiologists, despite significant inter-observer variation. Based on analysis of the thresholds used in the stepwedge method, the radiologists' definition of a dense pixel is one in which glandular tissue makes up between 10 and 20% of the total thickness of tissue.
NASA Technical Reports Server (NTRS)
Roth, Don J.
1996-01-01
This article describes a single transducer ultrasonic imaging method that eliminates the effect of plate thickness variation in the image. The method thus isolates ultrasonic variations due to material microstructure. The use of this method can result in significant cost savings because the ultrasonic image can be interpreted correctly without the need for machining to achieve precise thickness uniformity during nondestructive evaluations of material development. The method is based on measurement of ultrasonic velocity. Images obtained using the thickness-independent methodology are compared with conventional velocity and c-scan echo peak amplitude images for monolithic ceramic (silicon nitride), metal matrix composite and polymer matrix composite materials. It was found that the thickness-independent ultrasonic images reveal and quantify correctly areas of global microstructural (pore and fiber volume fraction) variation due to the elimination of thickness effects. The thickness-independent ultrasonic imaging method described in this article is currently being commercialized under a cooperative agreement between NASA Lewis Research Center and Sonix, Inc.
Scene-based nonuniformity correction using local constant statistics.
Zhang, Chao; Zhao, Wenyi
2008-06-01
In scene-based nonuniformity correction, the statistical approach assumes that all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since existing spatial variations are treated as noise. We introduce a new statistical method to reduce these ghosting artifacts. Our method proposes local-constant statistics: the temporal signal distribution is not assumed constant across all pixels, but only locally, i.e., the distribution is considered constant in a local region around each pixel while uneven at larger scales. Under the assumption that the fixed pattern noise concentrates in a higher spatial-frequency domain than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noises. We also apply the method to a challenging CCD video sequence and a LWIR sequence to show how effective it is in reducing noise and the ghosting artifacts.
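A minimal sketch of the separation step, assuming the offset pattern is estimated from the temporal mean of a frame stack and that the fixed pattern noise lives in the high spatial frequencies; the wavelet family and decomposition level are illustrative choices.

    import numpy as np
    import pywt

    def estimate_fpn(frames, wavelet="db2", level=2):
        """frames: (T, H, W) stack; returns an estimate of the offset fixed
        pattern noise as the high-spatial-frequency part of the temporal mean."""
        mean_img = frames.mean(axis=0)           # temporal mean at each pixel
        coeffs = pywt.wavedec2(mean_img, wavelet, level=level)
        coeffs[0] = np.zeros_like(coeffs[0])     # drop smooth scene variation
        return pywt.waverec2(coeffs, wavelet)    # residual attributed to FPN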
Elastic least-squares reverse time migration with velocities and density perturbation
NASA Astrophysics Data System (ADS)
Qu, Yingming; Li, Jinli; Huang, Jianping; Li, Zhenchun
2018-02-01
Elastic least-squares reverse time migration (LSRTM) based on the non-density-perturbation assumption can generate falsely migrated interfaces caused by density variations. We perform an elastic LSRTM scheme with density variations for multicomponent seismic data to produce high-quality images in the Vp, Vs and ρ components. However, the migrated images may suffer from crosstalk artefacts caused by P- and S-wave coupling in elastic LSRTM, no matter what model parametrization is used. We therefore propose an elastic LSRTM with density variations based on wave mode separation, which reduces these crosstalk artefacts by using P- and S-wave decoupled elastic velocity-stress equations to derive the demigration equations and the gradient formulae with respect to Vp, Vs and ρ. Numerical experiments with synthetic data demonstrate the capability and superiority of the proposed method. The imaging results suggest that our method yields images of higher quality and has a faster residual convergence rate. Sensitivity analysis of migration velocity, migration density and stochastic noise verifies the robustness of the proposed method for field data.
General constraints on sampling wildlife on FIA plots
Bailey, L.L.; Sauer, J.R.; Nichols, J.D.; Geissler, P.H.; McRoberts, Ronald E.; Reams, Gregory A.; Van Deusen, Paul C.; McWilliams, William H.; Cieszewski, Chris J.
2005-01-01
This paper reviews the constraints on sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species richness, abundance, and patch occupancy. All methods incorporate two essential sources of variation: detectability and spatial variation. FIA sampling imposes specific space and time criteria that may need to be adjusted to meet local wildlife objectives.
Vertical Bridgman growth of Hg 1-xMn xTe with variational withdrawal rate
NASA Astrophysics Data System (ADS)
Zhi, Gu; Wan-Qi, Jie; Guo-Qiang, Li; Long, Zhang
2004-09-01
Based on solute redistribution models, vertical Bridgman growth of Hg1-xMnxTe with a variational withdrawal rate is studied. Both theoretical analysis and experimental results show that the axial composition uniformity is improved and the crystal growth rate is also increased with the optimized variation of the withdrawal rate.
Measurement and Socio-Demographic Variation of Social Capital in a Large Population-Based Survey
ERIC Educational Resources Information Center
Nieminen, Tarja; Martelin, Tuija; Koskinen, Seppo; Simpura, Jussi; Alanen, Erkki; Harkanen, Tommi; Aromaa, Arpo
2008-01-01
Objectives: The main objective of this study was to describe the variation of individual social capital according to socio-demographic factors, and to develop a suitable way to measure social capital for this purpose. The similarity of socio-demographic variation between the genders was also assessed. Data and methods: The study applied…
Methods of determining complete sensor requirements for autonomous mobility
NASA Technical Reports Server (NTRS)
Curtis, Steven A. (Inventor)
2012-01-01
A method of determining complete sensor requirements for autonomous mobility of an autonomous system includes computing a time variation of each behavior of a set of behaviors of the autonomous system, determining mobility sensitivity to each behavior of the autonomous system, and computing a change in mobility based upon the mobility sensitivity to each behavior and the time variation of each behavior. The method further includes determining the complete sensor requirements of the autonomous system through analysis of the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior, wherein the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior are characteristic of the stability of the autonomous system.
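Read literally, the computation combines per-behavior sensitivities with per-behavior time variations; a minimal sketch under that reading follows (the names and the linear combination are our assumptions, not the patent's specification).

    import numpy as np

    def mobility_change(sensitivity, behavior_rate, dt):
        """sensitivity[i] ~ dM/db_i; behavior_rate[i] ~ db_i/dt (assumed names)."""
        return float(np.sum(sensitivity * behavior_rate) * dt)

    print(mobility_change(np.array([0.5, -0.2]), np.array([0.1, 0.3]), dt=0.01))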
Geometric constrained variational calculus. II: The second variation (Part I)
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2016-10-01
Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.
A parameter-free variational coupling approach for trimmed isogeometric thin shells
NASA Astrophysics Data System (ADS)
Guo, Yujie; Ruess, Martin; Schillinger, Dominik
2017-04-01
The non-symmetric variant of Nitsche's method was recently applied successfully for variationally enforcing boundary and interface conditions in non-boundary-fitted discretizations. In contrast to its symmetric variant, it does not require stabilization terms and therefore does not depend on the appropriate estimation of stabilization parameters. In this paper, we further consolidate the non-symmetric Nitsche approach by establishing its application in isogeometric thin shell analysis, where variational coupling techniques are of particular interest for enforcing interface conditions along trimming curves. To this end, we extend its variational formulation within Kirchhoff-Love shell theory, combine it with the finite cell method, and apply the resulting framework to a range of representative shell problems based on trimmed NURBS surfaces. We demonstrate that the non-symmetric variant applied in this context is stable and can lead to the same accuracy in terms of displacements and stresses as its symmetric counterpart. Based on our numerical evidence, the non-symmetric Nitsche method is a viable parameter-free alternative to the symmetric variant in elastostatic shell analysis.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Werner, F.; Cho, H.-M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, K.
2016-06-01
The bispectral method retrieves cloud optical thickness (τ) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near-infrared (VIS/NIR) band and the other in a shortwave infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring subpixel variations of cloud reflectances can lead to a significant bias in the retrieved τ and re. In the literature, the retrievals of τ and re are often assumed to be independent and considered separately when investigating the impact of subpixel cloud reflectance variations on the bispectral method. As a result, the impact on τ is contributed only by the subpixel variation of VIS/NIR band reflectance and the impact on re only by the subpixel variation of SWIR band reflectance. In our new framework, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of subpixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the τ and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how subpixel cloud reflectance variations impact the τ and re retrievals based on the bispectral method. In particular, our framework provides a mathematical explanation of how the subpixel variation in the VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval. We test our framework using synthetic cloud fields from a large-eddy simulation and real observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). The predicted results based on our framework agree very well with the numerical simulations. Our framework can be used to estimate the retrieval uncertainty from subpixel reflectance variations in operational satellite cloud products and to help understand the differences in τ and re retrievals between two instruments.
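In such a framework the leading-order bias takes the familiar form of a second-order Taylor correction; stated compactly for re (notation ours, and analogously for τ):

    E[re(R)] − re(E[R]) ≈ ½·(∂²re/∂Rvis²)·Var(Rvis) + (∂²re/∂Rvis∂Rswir)·Cov(Rvis, Rswir) + ½·(∂²re/∂Rswir²)·Var(Rswir),

which makes explicit how subpixel variance in either band, and their covariance, propagate into the retrieval.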
Buckley, Mike
2016-03-24
Collagen is one of the most ubiquitous proteins in the animal kingdom and the dominant protein in extracellular tissues such as bone, skin and other connective tissues, in which it acts primarily as a supporting scaffold. It has been widely investigated scientifically, not only as a biomedical material for regenerative medicine, but also for its role as a food source for both humans and livestock. Due to the long-term stability of collagen, as well as its abundance in bone, it has been proposed as a source of biomarkers for species identification, not only in heat- and pressure-rendered animal feed but also in ancient archaeological and palaeontological specimens, typically via peptide mass fingerprinting (PMF) as well as in-depth liquid chromatography (LC)-based tandem mass spectrometric methods. Through analysis of the three most common domesticated species (cow, sheep, and pig), this research investigates the advantages of each approach over the other, examining sites of sequence variation in relation to known functional properties of the collagen molecule. Results indicate that the species biomarkers previously identified through PMF analysis are not among the most variable type 1 collagen peptides present in these tissues, the latter of which can be detected by LC-based methods. However, it is clear that the highly repetitive sequence motif of collagen throughout the molecule, combined with the variability of the sites and relative abundance levels of hydroxylation, can result in high-scoring false positive peptide matches with these LC-based methods. Additionally, the greater sequence variation of the alpha 2(I) chain, in comparison to the alpha 1(I) chain, did not appear to be specific to any particular functional properties, implying that intra-chain functional constraints on sequence variation are not as great as inter-chain constraints. Although some of the most variable peptides were observed only with LC-based methods, until the range of publicly available collagen sequences improves, the simplicity of the PMF approach and the suitable range of peptide sequence variation it samples make it the ideal method for initial taxonomic identification, with further analysis by LC-based methods only when required.
Vertical profiles of wind and temperature by remote acoustical sounding
NASA Technical Reports Server (NTRS)
Fox, H. L.
1969-01-01
An acoustical method was investigated for obtaining meteorological soundings based on the refraction due to the vertical variation of wind and temperature. The method has the potential of yielding horizontally averaged measurements of the vertical variation of wind and temperature up to heights of a few kilometers; the averaging takes place over a radius of 10 to 15 km. An outline of the basic concepts and some of the results obtained with the method are presented.
NASA Astrophysics Data System (ADS)
Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin
2018-05-01
Temperature is usually considered a nuisance fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random sampling method and the proposed methods. Results from experimental studies show that prediction performance is improved by the proposed methods. The MTCS and DTCS methods are therefore alternative means of improving prediction accuracy in near-infrared spectral measurement.
Zhao, Min; Wang, Qingguo; Wang, Quan; Jia, Peilin; Zhao, Zhongming
2013-01-01
Copy number variation (CNV) is a prevalent form of critical genetic variation that leads to an abnormal number of copies of large genomic regions in a cell. Microarray-based comparative genome hybridization (arrayCGH) or genotyping arrays have been standard technologies to detect large regions subject to copy number changes in genomes until most recently high-resolution sequence data can be analyzed by next-generation sequencing (NGS). During the last several years, NGS-based analysis has been widely applied to identify CNVs in both healthy and diseased individuals. Correspondingly, the strong demand for NGS-based CNV analyses has fuelled development of numerous computational methods and tools for CNV detection. In this article, we review the recent advances in computational methods pertaining to CNV detection using whole genome and whole exome sequencing data. Additionally, we discuss their strengths and weaknesses and suggest directions for future development.
A Survey on Gas Sensing Technology
Liu, Xiao; Cheng, Sitian; Liu, Hong; Hu, Sha; Zhang, Daqiang; Ning, Huansheng
2012-01-01
Sensing technology has been widely investigated and utilized for gas detection. Due to the different applicability and inherent limitations of different gas sensing technologies, researchers have been working on different scenarios with enhanced gas sensor calibration. This paper reviews the descriptions, evaluation, comparison and recent developments of existing gas sensing technologies. A classification of sensing technologies is given, based on the variation of electrical and other properties. Sensing methods based on electrical variation are introduced in detail, further classified according to sensing materials, including metal oxide semiconductors, polymers, carbon nanotubes, and moisture-absorbing materials. Methods based on other kinds of variation, such as optical, calorimetric, acoustic and gas-chromatographic, are presented in a general way. Several suggestions related to future development are also discussed. Furthermore, this paper focuses on sensitivity and selectivity as performance indicators for comparing different sensing technologies, analyzes the factors that influence these two indicators, and lists several corresponding approaches for improvement. PMID:23012563
NASA Astrophysics Data System (ADS)
Bell, L. R.; Dowling, J. A.; Pogson, E. M.; Metcalfe, P.; Holloway, L.
2017-01-01
Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated for a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each category in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC>0.7, MASD <9.3 mm) between the auto-segmented contours and CTV volumes.
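The merging step admits a compact sketch: average the observers' binary delineations into a local agreement map and threshold it to form the auto-segmentation target. The 50% agreement threshold below is an illustrative choice, not necessarily the study's.

    import numpy as np

    def consensus_mask(observer_masks, threshold=0.5):
        """observer_masks: equally shaped 0/1 arrays, one per observer."""
        agreement = np.mean(np.stack(observer_masks), axis=0)  # local agreement map
        return agreement >= threshold

    masks = [np.random.default_rng(i).integers(0, 2, (4, 4)) for i in range(8)]
    print(consensus_mask(masks).astype(int))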
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and of the rank conditions of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
NASA Astrophysics Data System (ADS)
Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.
2015-07-01
Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amount of data and single-threaded processing, however, the query speed cannot meet application requirements. In addition, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Tests and analysis on real datasets show that this method improves the query speed significantly.
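The cube-partitioned aggregation step can be sketched with a thread pool. The grid partitioning, the count aggregate, and the worker count below are illustrative assumptions, not the paper's implementation; in CPython a process pool may be preferable unless the per-cube work releases the GIL, as NumPy largely does.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def aggregate_cube(points, cube):
    """Count moving-object records (x, y, t) falling inside one cube."""
    (x0, x1), (y0, y1), (t0, t1) = cube
    m = ((points[:, 0] >= x0) & (points[:, 0] < x1) &
         (points[:, 1] >= y0) & (points[:, 1] < y1) &
         (points[:, 2] >= t0) & (points[:, 2] < t1))
    return int(m.sum())

def parallel_aggregation(points, x_edges, y_edges, t_edges, workers=8):
    """Divide the spatiotemporal domain into cubes, aggregate each in
    parallel, then merge the per-cube results."""
    cubes = [((x_edges[i], x_edges[i + 1]), (y_edges[j], y_edges[j + 1]),
              (t_edges[k], t_edges[k + 1]))
             for i, j, k in product(range(len(x_edges) - 1),
                                    range(len(y_edges) - 1),
                                    range(len(t_edges) - 1))]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        counts = list(pool.map(lambda c: aggregate_cube(points, c), cubes))
    return dict(zip(cubes, counts))

pts = np.random.default_rng(1).uniform(0, 100, size=(1_000_000, 3))
result = parallel_aggregation(pts, [0, 50, 100], [0, 50, 100], [0, 50, 100])
```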
CNV-seq, a new method to detect copy number variation using high-throughput sequencing.
Xie, Chao; Tammi, Martti T
2009-03-06
DNA copy number variation (CNV) has been recognized as an important source of genetic variation. Array comparative genomic hybridization (aCGH) is commonly used for CNV detection, but the microarray platform has a number of inherent limitations. Here, we describe a method to detect copy number variation using shotgun sequencing, CNV-seq. The method is based on a robust statistical model that describes the complete analysis procedure and allows the computation of essential confidence values for detection of CNV. Our results show that the number of reads, not the length of the reads, is the key factor determining the resolution of detection. This favors next-generation sequencing methods that rapidly produce large amounts of short reads. Simulations of various sequencing methods with coverage between 0.1x and 8x show overall specificity between 91.7% and 99.9%, and sensitivity between 72.2% and 96.5%. We also show results for the assessment of CNV between two individual human genomes.
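The core CNV-seq statistic is a depth-normalized read-count ratio between the two genomes in sliding windows. A minimal sketch follows; the window size and pseudocounts are illustrative, and the published method additionally derives confidence values from its statistical model, which this sketch omits.

```python
import numpy as np

def log2_count_ratio(reads_test, reads_ref, genome_length, window=50_000):
    """Per-window log2 read-count ratio between a test and a reference genome.

    reads_test, reads_ref: arrays of mapped read start positions.
    Counts are normalized by total reads so differing depths cancel out."""
    bins = np.arange(0, genome_length + window, window)
    cx, _ = np.histogram(reads_test, bins=bins)
    cy, _ = np.histogram(reads_ref, bins=bins)
    px = (cx + 1) / (cx.sum() + len(cx))     # pseudocounts avoid log(0)
    py = (cy + 1) / (cy.sum() + len(cy))
    return np.log2(px / py)                   # ~0 where copy number is equal

rng = np.random.default_rng(0)
test = rng.integers(0, 1_000_000, size=150_000)
ref = rng.integers(0, 1_000_000, size=100_000)  # depth differs; normalization handles it
ratios = log2_count_ratio(test, ref, 1_000_000)
```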
A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements
NASA Astrophysics Data System (ADS)
Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.
2014-12-01
A new algorithm is developed to detect aerosols and clouds based on micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to inhibit the impact of noise, which increases with distance; a value distribution equalization (VDE) method is then introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. This method can detect clouds and aerosols with high accuracy, although the classification of aerosols and clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) site and the Taihu site in China. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and bi-modal vertical distributions with maximum frequency at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation, and the maximum frequency is at around 1 km. The annual averaged cloud frequency there is about 15% higher than that at SGP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brizard, Alain J.; Tronci, Cesare
The variational formulations of guiding-center Vlasov-Maxwell theory based on Lagrange, Euler, and Euler-Poincaré variational principles are presented. Each variational principle yields a different route by which guiding-center polarization and magnetization effects enter the guiding-center Maxwell equations. The conservation laws of energy, momentum, and angular momentum are also derived by the Noether method, where the guiding-center stress tensor is now shown to be explicitly symmetric.
A monolithic Lagrangian approach for fluid-structure interaction problems
NASA Astrophysics Data System (ADS)
Ryzhakov, P. B.; Rossi, R.; Idelsohn, S. R.; Oñate, E.
2010-11-01
The current work presents a monolithic method for the solution of fluid-structure interaction problems involving flexible structures and free-surface flows. The technique is based upon the use of a Lagrangian description for both the fluid and the structure. A linear displacement-pressure interpolation pair is used for the fluid, whereas the structure uses a standard displacement-based formulation. A slight fluid compressibility is assumed, which allows the mechanical pressure to be related to the local volume variation. The method features a global pressure condensation, which in turn enables the definition of a purely displacement-based linear system of equations. A matrix-free technique is used for the solution of this linear system, leading to an efficient implementation. The result is a robust method that can deal with FSI problems involving arbitrary variations in the shape of the fluid domain, and that is completely free of spurious added-mass effects.
Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.
2013-01-01
We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user to build a statistical model which captures the texture and shape variations of the nuclear structures in a given dataset to be segmented. Segmentation of subsequent, unlabeled, images is then performed by finding the model instance that best matches (in the normalized cross correlation sense) local neighborhoods in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the proposed template-based method is more robust: it better handles variations in illumination and in texture across imaging modalities, provides smoother and more accurate segmentation borders, and better handles cluttered nuclei. PMID:23568787
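Normalized cross correlation matching, the core operation of this approach, is available off the shelf. A minimal scikit-image sketch (the threshold is an illustrative assumption, and the published method matches instances of a learned statistical model rather than one fixed template):

```python
import numpy as np
from skimage.feature import match_template

def find_nuclei(image, template, threshold=0.6):
    """Candidate nucleus centers: positions where normalized cross
    correlation with the template exceeds a threshold. NCC is invariant
    to local brightness and contrast, hence robust to illumination change."""
    ncc = match_template(image.astype(float), template.astype(float),
                         pad_input=True)          # NCC map, same size as image
    return np.argwhere(ncc > threshold), ncc      # (row, col) candidates + map
```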
Variational approach to direct and inverse problems of atmospheric pollution studies
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2016-04-01
We present the development of a variational approach for solving interrelated problems of atmospheric hydrodynamics and chemistry concerning air pollution transport and transformations. The proposed approach allows us to carry out complex studies of different-scale physical and chemical processes using the methods of direct and inverse modeling [1-3]. We formulate the problems of risk/vulnerability and uncertainty assessment, sensitivity studies, variational data assimilation procedures [4], etc. A computational technology of constructing consistent mathematical models and methods of their numerical implementation is based on the variational principle in the weak constraint formulation, specifically designed to account for uncertainties in models and observations. Algorithms for direct and inverse modeling are designed with the use of global and local adjoint problems. Implementing the idea of adjoint integrating factors provides unconditionally monotone and stable discrete-analytic approximations for convection-diffusion-reaction problems [5,6]. The general framework is applied to the direct and inverse problems for the models of transport and transformation of pollutants in Siberian and Arctic regions. The work has been partially supported by the RFBR grant 14-01-00125 and RAS Presidium Program I.33P. References: 1. V. Penenko, A. Baklanov, E. Tsvetova and A. Mahura. Direct and inverse problems in a variational concept of environmental modeling // Pure and Applied Geophysics (2012), v. 169, pp. 447-465. 2. V.V. Penenko, E.A. Tsvetova and A.V. Penenko. Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry // Izvestiya, Atmospheric and Oceanic Physics, 2015, Vol. 51, No. 3, pp. 311-319, DOI: 10.1134/S0001433815030093. 3. V.V. Penenko, E.A. Tsvetova and A.V. Penenko. Methods based on the joint use of models and observational data in the framework of variational approach to forecasting weather and atmospheric composition quality // Russian Meteorology and Hydrology, V. 40, Issue 6, pp. 365-373, DOI: 10.3103/S1068373915060023. 4. A.V. Penenko and V.V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme // Computational Technologies, 19(4):69-83, 2014. 5. V.V. Penenko, E.A. Tsvetova and A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, 2014, V. 67, Issue 12, pp. 2240-2256, DOI: 10.1016/j.camwa.2014.04.004. 6. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220, DOI: 10.1134/S199542391303004X.
The energetic cost of walking: a comparison of predictive methods.
Kramer, Patricia Ann; Sylvester, Adam D
2011-01-01
The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species.
Nicholas C. Coops; Richard H. Waring; Todd A. Schroeder
2009-01-01
Although long-lived tree species experience considerable environmental variation over their life spans, their geographical distributions reflect sensitivity mainly to mean monthly climatic conditions. We introduce an approach that incorporates a physiologically based growth model to illustrate how a half-dozen tree species differ in their responses to monthly variation...
Compensation of flare-induced CD changes in EUVL
Bjorkholm, John E [Pleasanton, CA; Stearns, Daniel G [Los Altos, CA; Gullikson, Eric M [Oakland, CA; Tichenor, Daniel A [Castro Valley, CA; Hector, Scott D [Oakland, CA
2004-11-09
A method for compensating for flare-induced critical dimension (CD) changes in photolithography. Changes in the flare level result in undesirable CD changes. When used in extreme ultraviolet (EUV) lithography, the method essentially eliminates the unwanted CD changes. The method is based on the recognition that the intrinsic level of flare for an EUV camera (the flare level for an isolated sub-resolution opaque dot in a bright field mask) is essentially constant over the image field. The method involves calculating the flare and its variation over the area of a patterned mask that will be imaged, and then using mask biasing to largely eliminate the CD variations that the flare and its variations would otherwise cause. This method would be difficult to apply to optical or DUV lithography, since the intrinsic flare for those lithographies is not constant over the image field.
Lopez-Martin, Manuel; Carro, Belen; Sanchez-Esguevillas, Antonio; Lloret, Jaime
2017-08-26
The purpose of a Network Intrusion Detection System is to detect intrusive, malicious activities or policy violations in a host or a host's network. In current networks, such systems are becoming more important as the number and variety of attacks increase along with the volume and sensitivity of the information exchanged. This is of particular interest to Internet of Things networks, where an intrusion detection system will be critical as their economic importance continues to grow, making them the focus of future intrusion attacks. In this work, we propose a new network intrusion detection method that is appropriate for an Internet of Things network. The proposed method is based on a conditional variational autoencoder with a specific architecture that integrates the intrusion labels inside the decoder layers. The proposed method is less complex than other unsupervised methods based on a variational autoencoder, and it provides better classification results than other familiar classifiers. More importantly, the method can perform feature reconstruction; that is, it is able to recover missing features from incomplete training datasets. We demonstrate that the reconstruction accuracy is very high, even for categorical features with a high number of distinct values. This work is unique in the network intrusion detection field, presenting the first application of a conditional variational autoencoder and providing the first algorithm to perform feature recovery.
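The defining architectural choice, injecting the label into the decoder rather than only the encoder, can be sketched in PyTorch as follows. The layer sizes, single hidden layer, and MSE reconstruction loss are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Minimal conditional VAE: the intrusion label enters the decoder,
    so features can be reconstructed given (latent code, label)."""
    def __init__(self, n_features, n_labels, latent_dim=16, hidden=64):
        super().__init__()
        self.enc = nn.Linear(n_features, hidden)
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec1 = nn.Linear(latent_dim + n_labels, hidden)  # label joins here
        self.dec2 = nn.Linear(hidden, n_features)

    def forward(self, x, y_onehot):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        x_hat = self.dec2(F.relu(self.dec1(torch.cat([z, y_onehot], dim=1))))
        return x_hat, mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

model = ConditionalVAE(n_features=100, n_labels=5)
x = torch.randn(32, 100)
y = F.one_hot(torch.randint(0, 5, (32,)), 5).float()
x_hat, mu, logvar = model(x, y)
loss = vae_loss(x, x_hat, mu, logvar)
```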
Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance
NASA Astrophysics Data System (ADS)
Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao
2016-01-01
Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure, and fiber bundle sensors are widely used for tip-timing measurement. However, variation of the clearance between the sensor and the blade introduces a tip-timing error for fiber bundle sensors, because it changes the signal amplitude. This article presents software- and hardware-based methods to reduce the error caused by the tip clearance change. The software method uses both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments are conducted, and the results show that both methods can effectively reduce the impact of tip clearance variation on the blade tip-timing and improve the accuracy of measurements.
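The intuition behind the dual-edge software method is that an amplitude change moves a fixed-fraction threshold crossing on the rising and falling edges in opposite directions, so their midpoint largely cancels the effect. A minimal sketch assuming a single well-formed pulse per window (the paper's calibration for asymmetric pulses is omitted):

```python
import numpy as np

def arrival_time(t, signal, level_frac=0.5):
    """Blade arrival time as the midpoint of the rising- and falling-edge
    threshold crossings, which is less sensitive to amplitude (i.e., tip
    clearance) variation than a single-edge trigger."""
    thr = level_frac * signal.max()
    crossings = np.flatnonzero(np.diff((signal >= thr).astype(int)))
    t_rise, t_fall = t[crossings[0]], t[crossings[-1]]
    return 0.5 * (t_rise + t_fall)
```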
Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise
Liu, Gang; Huang, Ting-Zhu; Liu, Jun; Lv, Xiao-Guang
2015-01-01
The total variation (TV) regularization method is effective for preserving edges in image deblurring. However, TV-based solutions usually exhibit staircase effects. In order to alleviate the staircase effects, we propose a new model for restoring blurred images under impulse noise. The model consists of an ℓ1-fidelity term and a TV regularization term with overlapping group sparsity (OGS). Moreover, we impose a box constraint on the proposed model to obtain more accurate solutions. The model is solved within the framework of the alternating direction method of multipliers (ADMM), with an inner loop nested inside the majorization-minimization (MM) iteration for the subproblem of the proposed method. Compared with other TV-based methods, numerical results illustrate that the proposed method can significantly improve the restoration quality, both in terms of peak signal-to-noise ratio (PSNR) and relative error (ReE). PMID:25874860
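The staircase effect that motivates the OGS term is easy to reproduce: plain TV denoising turns smooth gradients into piecewise-constant steps. A short scikit-image illustration of that baseline behavior (the OGS-regularized ADMM solver itself is not part of standard libraries; the weight here is an illustrative choice):

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# A smooth ramp corrupted by noise: plain TV tends to flatten it into steps.
x = np.linspace(0, 1, 256)
ramp = np.tile(x, (64, 1))
noisy = ramp + np.random.default_rng(0).normal(0, 0.1, ramp.shape)
tv_result = denoise_tv_chambolle(noisy, weight=0.15)  # inspect for staircasing
```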
Technical variations in low-input RNA-seq methodologies.
Bhargava, Vipul; Head, Steven R; Ordoukhanian, Phillip; Mercola, Mark; Subramaniam, Shankar
2014-01-14
Recent advances in RNA-seq methodologies for limiting amounts of mRNA have facilitated the characterization of rare cell types in various biological systems. So far, however, the technical variations in these methods have not been adequately characterized, particularly with respect to sensitivity at reduced levels of input mRNA. Here, we generated sequencing libraries from limiting amounts of mRNA using three amplification-based methods, viz. Smart-seq, DP-seq and CEL-seq, and demonstrated significant technical variations in these libraries. Reduction in mRNA levels led to inefficient amplification of the majority of low to moderately expressed transcripts. Furthermore, noise in primer hybridization and/or enzyme incorporation was magnified during the amplification step, resulting in significant distortions in the fold changes of the transcripts. Consequently, the majority of the differentially expressed transcripts identified were either highly expressed and/or exhibited high fold changes. High technical variation ultimately masked subtle biological differences, mandating the development of improved amplification-based strategies for quantitative transcriptomics from limiting amounts of mRNA.
There are numerous PCR-based methods available to characterize human fecal pollution in ambient waters. Each assay employs distinct oligonucleotides, and many target different genes and microorganisms, leading to potential variations in method performance. Laboratory comparisons ...
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio value, and method noise, the proposed algorithm outperforms state-of-the-art methods.
Conditional Random Fields for Fast, Large-Scale Genome-Wide Association Studies
Huang, Jim C.; Meek, Christopher; Kadie, Carl; Heckerman, David
2011-01-01
Understanding the role of genetic variation in human diseases remains an important problem to be solved in genomics. An important component of such variation consists of variations at single sites in DNA, or single nucleotide polymorphisms (SNPs). Typically, the problem of associating particular SNPs to phenotypes has been confounded by hidden factors such as the presence of population structure, family structure or cryptic relatedness in the sample of individuals being analyzed. Such confounding factors lead to a large number of spurious associations and missed associations. Various statistical methods have been proposed to account for such confounding factors, such as linear mixed-effect models (LMMs) or methods that adjust data based on a principal components analysis (PCA), but these methods either suffer from low power or cease to be tractable for larger numbers of individuals in the sample. Here we present a statistical model for conducting genome-wide association studies (GWAS) that accounts for such confounding factors. Our method's runtime scales quadratically in the number of individuals being studied, with only a modest loss in statistical power as compared to LMM-based and PCA-based methods when testing on synthetic data generated from a generalized LMM. Applying our method to both real and synthetic human genotype/phenotype data, we demonstrate the ability of our model to correct for confounding factors while requiring significantly less runtime than LMMs. We have implemented methods for fitting these models, which are available at http://www.microsoft.com/science. PMID:21765897
Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.
2014-01-01
Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771
Investigation of IRT-Based Equating Methods in the Presence of Outlier Common Items
ERIC Educational Resources Information Center
Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko
2008-01-01
Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)--based equating results. To find a better way to deal with the outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…
Schmieder, Daniela A.; Benítez, Hugo A.; Borissov, Ivailo M.; Fruciano, Carmelo
2015-01-01
External morphology is commonly used to identify bats as well as to investigate flight and foraging behavior, typically relying on simple length and area measures or ratios. However, geometric morphometrics is increasingly used in the biological sciences to analyse variation in shape and discriminate among species and populations. Here we compare the ability of traditional versus geometric morphometric methods to discriminate between closely related bat species – in this case European horseshoe bats (Rhinolophidae, Chiroptera) – based on morphology of the wing, body and tail. In addition to comparing morphometric methods, we used geometric morphometrics to detect interspecies differences as shape changes. Geometric morphometrics yielded improved species discrimination relative to traditional methods. The predicted shape for the variation along the between-group principal components revealed that the largest differences between species lay in the extent to which the wing reaches in the direction of the head. This strong trend in interspecific shape variation is associated with size, which we interpret as an evolutionary allometry pattern. PMID:25965335
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D; Kang, S; Kim, T
2014-06-01
Purpose: In this work, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and a total-variation minimization method, in order to compensate for the undersampled projection data and improve image quality. Methods: The projection data were acquired by Monte Carlo simulation and an in-house 4D digital phantom generation program, assuming a cone-beam computed tomography system on a linear accelerator. We performed 4D DTS based upon the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, combined with the total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM gives better results than the existing method, filtered backprojection. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful for validating intra-fraction motion during radiation therapy. In addition, it may enable real-time imaging for adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP).
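The SART-plus-TVMM combination alternates an algebraic update enforcing consistency with the measured projections and a few descent steps on the image's total variation. A toy sketch with a dense system matrix (step sizes and iteration counts are illustrative assumptions; practical implementations use ray-driven projectors rather than dense matrices):

```python
import numpy as np

def sart_tv(A, b, shape, n_iter=20, tv_steps=10, tv_alpha=0.2):
    """Alternate a SART update with gradient descent on isotropic TV.

    A: (n_rays, n_pixels) system matrix; b: measured projections."""
    x = np.zeros(A.shape[1])
    row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0
    col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0
    for _ in range(n_iter):
        x += (A.T @ ((b - A @ x) / row_sum)) / col_sum   # SART update
        img = x.reshape(shape)
        for _ in range(tv_steps):                        # TV minimization
            gx = np.diff(img, axis=0, append=img[-1:, :])
            gy = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(gx**2 + gy**2) + 1e-8
            # descending TV means stepping along div(grad u / |grad u|)
            div = (gx / mag - np.roll(gx / mag, 1, axis=0)
                   + gy / mag - np.roll(gy / mag, 1, axis=1))
            img += tv_alpha * div
        x = np.maximum(img.ravel(), 0.0)                 # nonnegativity
    return x.reshape(shape)
```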
NASA Astrophysics Data System (ADS)
Ren, W. X.; Lin, Y. Q.; Fang, S. E.
2011-11-01
One of the key issues in vibration-based structural health monitoring is to extract damage-sensitive but environment-insensitive features from sampled dynamic response measurements and to carry out statistical analysis of these features for structural damage detection. A new damage feature is proposed in this paper, using the system matrices of the forward innovation model based on covariance-driven stochastic subspace identification of a vibrating system. To overcome variations of the system matrices, a non-singularity transposition matrix is introduced so that the system matrices are normalized to their standard forms. To reduce the effects of modeling errors, noise and environmental variations on measured structural responses, a statistical pattern recognition paradigm is incorporated into the proposed method. The Mahalanobis and Euclidean distance decision functions of the damage feature vector are adopted by defining a statistics-based damage index. The proposed structural damage detection method is verified against one numerical signal and two numerical beams. It is demonstrated that the proposed statistics-based damage index is sensitive to damage and shows some robustness to noise and to false estimation of the system ranks. The method is capable of locating damage in the beam structures under different types of excitation. The robustness of the proposed damage detection method to variations in environmental temperature is further validated in a companion paper by a reinforced concrete beam tested in the laboratory and a full-scale arch bridge tested in the field.
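The Mahalanobis-distance damage index at the heart of such statistics-based detection is compact to compute. A minimal sketch, with a randomly generated baseline standing in for features identified from the undamaged structure:

```python
import numpy as np

def damage_index(baseline_features, test_feature):
    """Mahalanobis distance of a feature vector from the healthy baseline.

    baseline_features: (n_samples, n_dims) features from the undamaged state.
    A large distance relative to the baseline population suggests damage."""
    mu = baseline_features.mean(axis=0)
    cov = np.cov(baseline_features, rowvar=False)
    d = test_feature - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

rng = np.random.default_rng(0)
healthy = rng.normal(0, 1, size=(200, 6))        # baseline feature vectors
print(damage_index(healthy, healthy[0]))         # small: consistent with baseline
print(damage_index(healthy, healthy[0] + 4.0))   # large: flagged as damage
```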
A new cloud and aerosol layer detection method based on micropulse lidar measurements
NASA Astrophysics Data System (ADS)
Zhao, Chuanfeng; Wang, Yuzhao; Wang, Qianqian; Li, Zhanqing; Wang, Zhien; Liu, Dong
2014-06-01
This paper introduces a new algorithm to detect aerosols and clouds based on micropulse lidar measurements. A semidiscretization processing technique is first used to inhibit the impact of noise, which increases with distance. A value distribution equalization method, which reduces the magnitude of signal variations with distance, is then introduced. Combined with empirical threshold values, we determine whether the signal waves indicate clouds or aerosols. This method can separate clouds and aerosols with high accuracy, although the differentiation between aerosols and clouds is subject to more uncertainty depending on the thresholds selected. Compared with the existing Atmospheric Radiation Measurement program lidar-based cloud product, the new method appears more reliable and detects more clouds with high bases. The algorithm is applied to a year of observations at both the U.S. Southern Great Plains (SGP) and China Taihu sites. At the SGP site, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and shows bimodal vertical distributions with maximum occurrences at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. The dominant clouds are stratiform in winter and convective in summer. By contrast, the cloud frequency at the Taihu site shows no clear seasonal variation, and the maximum occurrence is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at the SGP site. A seasonal analysis of cloud base occurrence frequency suggests that stratiform clouds dominate at the Taihu site.
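Both renditions of this algorithm reduce to suppressing range-dependent noise, equalizing the signal, and applying empirical thresholds. A heavily simplified sketch of the thresholding stage (the SDP/VDE preprocessing is replaced here by a plain far-range noise estimate, and the 3-sigma factor is an illustrative assumption):

```python
import numpy as np

def detect_layers(range_m, signal, noise_bins=50, k_sigma=3.0):
    """Return (base, top) ranges of candidate cloud/aerosol layers.

    The range-corrected signal is compared against a noise threshold
    estimated from the farthest bins, where backscatter is assumed small."""
    rcs = signal * range_m**2                     # range-corrected signal
    noise_mu = np.median(rcs[-noise_bins:])
    noise_sd = rcs[-noise_bins:].std(ddof=1)
    mask = rcs > noise_mu + k_sigma * noise_sd
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return []
    # contiguous runs of flagged bins form candidate layers
    groups = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
    return [(range_m[g[0]], range_m[g[-1]]) for g in groups]
```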
Zhang, Bitao; Pi, YouGuo
2013-07-01
The traditional integer order proportional-integral-differential (IO-PID) controller is sensitive to parameter variation and/or external load disturbance of the permanent magnet synchronous motor (PMSM). A fractional order proportional-integral-differential (FO-PID) control scheme based on a robustness tuning method has been proposed to enhance robustness, but that robustness addresses only the open-loop gain variation of the controlled plant. In this paper, an enhanced robust fractional order proportional-plus-integral (ERFOPI) controller based on a neural network is proposed. The control law of the ERFOPI controller acts on a fractional order implement function (FOIF) of the tracking error rather than on the tracking error directly, which, according to theoretical analysis, can enhance the robust performance of the system. Tuning rules and approaches, based on phase margin, crossover frequency specification and robust rejection of gain variation, are introduced to obtain the parameters of the ERFOPI controller, and a neural network algorithm is used to adjust the parameter of the FOIF. Simulation and experimental results show that the proposed method not only achieves favorable tracking performance but is also robust with regard to external load disturbance and parameter variation. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Floré, Katelijne M J; Fiers, Tom; Delanghe, Joris R
2008-01-01
In recent years a number of point of care testing (POCT) glucometers were introduced on the market. We investigated the analytical variability (lot-to-lot variation, calibration error, inter-instrument and inter-operator variability) of glucose POCT systems in a university hospital environment and compared the results with the analytical needs required for tight glucose monitoring. The reference hexokinase method was compared to different POCT systems based on glucose oxidase (blood gas instruments) or glucose dehydrogenase (handheld glucometers). Based upon daily internal quality control data, total errors were calculated for the various glucose methods and the analytical variability of the glucometers was estimated. The total error of the glucometers exceeded by far the desirable analytical specifications (based on a biological variability model). Lot-to-lot variation, inter-instrument variation and inter-operator variability contributed approximately equally to the total variance. Because the distribution of hematocrit values in a hospital environment is broad, converting blood glucose into plasma values using a fixed factor further increases the variance. The percentage of outliers exceeded the ISO 15197 criteria over a broad glucose concentration range. The total analytical variation of handheld glucometers is larger than expected. Clinicians should be aware that the variability of glucose measurements obtained by blood gas instruments is lower than that of results obtained with handheld glucometers on capillary blood.
Structural Organization and Strain Variation in the Genome of Varicella Zoster Virus
1984-10-23
[Extraction residue: table of contents fragment. Recoverable headings: Zoster; Growth of VZV in tissue culture; Structure and proteins of VZV; Structure of HSV DNA; Classification of herpesviruses based on DNA structure; Strain variation in herpesvirus DNA; VZV DNA; Specific aims; Materials and Methods (Cells and viruses; Isolation of virus; Identification of restriction endonuclease fragments by colony hybridization; Selected methods of restriction endonuclease mapping).]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalashilin, Dmitrii V.; Burghardt, Irene
2008-08-28
In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.
Ultrasonic wave based pressure measurement in small diameter pipeline.
Wang, Dan; Song, Zhengxiang; Wu, Yuan; Jiang, Yuan
2015-12-01
An effective non-intrusive ultrasound-based technique for monitoring liquid pressure in small-diameter pipelines (less than 10 mm) is presented in this paper. Ultrasonic waves penetrate the medium, and properties of the medium can be inferred by acquiring representative information from the echoes. This pressure measurement is difficult in small-diameter pipelines because echo information is not easy to obtain. The proposed method, studied on a pipeline carrying a Kneser liquid, is based on the principle that the transmission speed of an ultrasonic wave in the pipeline liquid correlates with the liquid pressure, and that this transmission speed is reflected in the ultrasonic propagation time, provided the acoustic distance is fixed. Therefore, variation of the ultrasonic propagation time can reflect variation of the pressure in the pipeline. The ultrasonic propagation time is obtained by an electronic processing approach and is accurately measured to nanosecond precision through a high resolution time measurement module. We use the ultrasonic propagation time difference to reflect the actual pressure, which reduces environmental influences. The corresponding pressure values are finally obtained by learning the relationship between the variation of the ultrasonic propagation time difference and the pressure with a neural network analysis method; the results show that this method is accurate and can be used in practice. Copyright © 2015 Elsevier B.V. All rights reserved.
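The final mapping from propagation-time difference to pressure is a small regression problem. A minimal scikit-learn sketch with entirely hypothetical calibration pairs (the paper's actual network architecture and data are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical calibration pairs: propagation-time difference (ns) vs. pressure (MPa)
dt_ns = np.array([0.0, 2.1, 4.0, 6.2, 8.1, 10.3, 12.2])
p_mpa = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(dt_ns.reshape(-1, 1), p_mpa)          # learn the dt -> pressure mapping
print(net.predict([[5.0]]))                   # estimate pressure for a new dt
```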
Tabassum, Shawana; Dong, Liang; Kumar, Ratnesh
2018-03-05
We present an effective yet simple approach to studying the dynamic variations in optical properties (such as the refractive index (RI)) of graphene oxide (GO) when exposed to gases in the visible spectral region, using the thin-film interference method. The dynamic variation in the complex refractive index of GO in response to gas exposure is an important factor affecting the performance of GO-based gas sensors. In contrast to conventional ellipsometry, this method alleviates the need to select a dispersion model from among a list of model choices, which is limiting if an applicable model is not known a priori. In addition, the method is computationally simpler and does not need to employ any functional approximations. A further advantage over ellipsometry is that no bulky optics are required; as a result, the method can be easily integrated into the sensing system, allowing reliable, simple, and dynamic evaluation of the optical performance of any GO-based gas sensor. In addition, the dynamically changing RI values of the GO layer derived with this method are corroborated by comparison with values obtained from ellipsometry.
Passarge, Michelle; Fix, Michael K; Manser, Peter; Stampanoni, Marco F M; Siebers, Jeffrey V
2017-04-01
To develop a robust and efficient process that detects relevant dose errors (dose errors of ≥5%) in external beam radiation therapy and directly indicates the origin of the error. The process is illustrated in the context of electronic portal imaging device (EPID)-based angle-resolved volumetric-modulated arc therapy (VMAT) quality assurance (QA), particularly as would be implemented in a real-time monitoring program. A Swiss cheese error detection (SCED) method was created as a paradigm for a cine EPID-based during-treatment QA. For VMAT, the method compares a treatment plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The process utilizes a sequence of independent consecutively executed error detection tests: an aperture check that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment check to examine if rotation, scaling, and translation are within tolerances; pixel intensity check containing the standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each check were determined. To test the SCED method, 12 different types of errors were selected to modify the original plan. A series of angle-resolved predicted EPID images were artificially generated for each test case, resulting in a sequence of precalculated frames for each modified treatment plan. The SCED method was applied multiple times for each test case to assess the ability to detect introduced plan variations. To compare the performance of the SCED process with that of a standard gamma analysis, both error detection methods were applied to the generated test cases with realistic noise variations. Averaged over ten test runs, 95.1% of all plan variations that resulted in relevant patient dose errors were detected within 2° and 100% within 14° (<4% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 89.1% were detected by the SCED method within 2°. Based on the type of check that detected the error, determination of error sources was achieved. With noise ranging from no random noise to four times the established noise value, the averaged relevant dose error detection rate of the SCED method was between 94.0% and 95.8% and that of gamma between 82.8% and 89.8%. An EPID-frame-based error detection process for VMAT deliveries was successfully designed and tested via simulations. The SCED method was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of relevant dose errors. Compared to a typical (3%, 3 mm) gamma analysis, the SCED method produced a higher detection rate for all introduced dose errors, identified errors in an earlier stage, displayed a higher robustness to noise variations, and indicated the error source. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Nirawati, R.
2018-04-01
This research was conducted to determine whether the variations of the solution are acceptable and easy to understand for students with different ability levels, so that differences in the ability of students in the upper, middle and lower groups to work with quadratic forms could be observed. This research used an experimental method with a factorial design. Based on the analysis of the final test, there were differences in the ability of students in the upper, middle and lower groups to express quadratic forms in squared form when certain variations of the solution were used.
Global model of zenith tropospheric delay proposed based on EOF analysis
NASA Astrophysics Data System (ADS)
Sun, Langlang; Chen, Peng; Wei, Erhu; Li, Qinzheng
2017-07-01
Tropospheric delay is one of the main error budgets in Global Navigation Satellite System (GNSS) measurements. Many empirical correction models have been developed to compensate for this delay, and models which do not require meteorological parameters have received the most attention. This study established a global troposphere zenith total delay (ZTD) model, called Global Empirical Orthogonal Function Troposphere (GEOFT), based on the empirical orthogonal function (EOF, also known as geographically weighted PCA) analysis method and the Global Geodetic Observing System (GGOS) Atmosphere data from 2012 to 2015. The results showed that ZTD variation could be well represented by the EOF base functions Ek and associated coefficients Pk. Here, E1 mainly signifies the equatorial anomaly; E2 represents north-south asymmetry; and E3 and E4 reflect regional variation. Moreover, P1 mainly reflects annual and semiannual variation components; P2 and P3 mainly contain annual variation components; and P4 displays semiannual variation components. We validated the proposed GEOFT model using GGOS ZTD grid data and the tropospheric product of the International GNSS Service (IGS) over the year 2016. The results showed that the GEOFT model has high accuracy, with bias and RMS of -0.3 and 3.9 cm, respectively, with respect to the GGOS ZTD data, and of -0.8 and 4.1 cm, respectively, with respect to the global IGS tropospheric product. The accuracy of GEOFT demonstrates that the use of the EOF analysis method to characterize ZTD variation is reasonable.
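EOF analysis of a gridded ZTD record amounts to an SVD of the anomaly matrix: the right singular vectors give the spatial base functions Ek, and the scaled left singular vectors give the coefficient series Pk. A minimal sketch:

```python
import numpy as np

def eof_decompose(ztd, n_modes=4):
    """EOF analysis of a ZTD field: rows are epochs, columns are grid points.

    Returns spatial base functions E_k, time coefficients P_k, and the
    fraction of variance explained by each mode."""
    anomaly = ztd - ztd.mean(axis=0)              # remove the temporal mean field
    U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
    E = Vt[:n_modes]                              # spatial patterns E_1..E_k
    P = U[:, :n_modes] * s[:n_modes]              # associated coefficients P_1..P_k
    var_frac = s[:n_modes]**2 / (s**2).sum()      # variance explained per mode
    return E, P, var_frac
```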
Mahsoub, Hassan M; Evans, Nicholas P; Beach, Nathan M; Yuan, Lijuan; Zimmerman, Kurt; Pierson, Frank W
2017-01-01
The current in vitro titration method for turkey hemorrhagic enteritis virus (THEV) is the end-point dilution assay (EPD) in suspension cell culture (CC). This assay is subjective and results in high variability among vaccine lots. In this study, a new in vitro infectivity method combining a SYBR Green I-based qPCR assay and CC was developed for the titration of live hemorrhagic enteritis (HE) CC vaccines. The qPCR was used to determine the virus genome copy number (vGCN) of the internalized virus particles following inoculation of susceptible RP19 cells with one vaccine label dose. The measured vGCN represents the number of infectious viral particles (IVP) per dose. This method was used to compare nine vaccine lots from three companies in the United States. Significant lot-to-lot variations within the same company and among the various companies were found in genomic and qPCR-based infectious titer per label dose. A positive linear relationship was found between the qPCR infectious titer and the genomic titer. Further, considerable variations in CCID50 titers were found among the tested vaccine lots, indicating the high variability of the current titration methods. The new method provides an alternative to classical titration assays and can help reduce variation among HE vaccine products. Copyright © 2016 Elsevier B.V. All rights reserved.
Fouad, Anthony; Pfefer, T. Joshua; Chen, Chao-Wei; Gong, Wei; Agrawal, Anant; Tomlins, Peter H.; Woolliams, Peter D.; Drezek, Rebekah A.; Chen, Yu
2014-01-01
Point spread function (PSF) phantoms based on unstructured distributions of sub-resolution particles in a transparent matrix have been demonstrated as a useful tool for evaluating resolution and its spatial variation across image volumes in optical coherence tomography (OCT) systems. Measurements based on PSF phantoms have the potential to become a standard test method for consistent, objective and quantitative inter-comparison of OCT system performance. Towards this end, we have evaluated three PSF phantoms and investigated their ability to compare the performance of four OCT systems. The phantoms are based on 260-nm-diameter gold nanoshells, 400-nm-diameter iron oxide particles and 1.5-micron-diameter silica particles. The OCT systems included spectral-domain and swept source systems in free-beam geometries as well as a time-domain system in both free-beam and fiberoptic probe geometries. Results indicated that iron oxide particles and gold nanoshells were most effective for measuring spatial variations in the magnitude and shape of PSFs across the image volume. The intensity of individual particles was also used to evaluate spatial variations in signal intensity uniformity. Significant system-to-system differences in resolution and signal intensity and their spatial variation were readily quantified. The phantoms proved useful for identification and characterization of irregularities such as astigmatism. Our multi-system results provide evidence of the practical utility of PSF-phantom-based test methods for quantitative inter-comparison of OCT system resolution and signal uniformity. PMID:25071949
Gupta, Munish; Kaplan, Heather C
2017-09-01
Quality improvement (QI) is based on measuring performance over time, and variation in data measured over time must be understood to guide change and make optimal improvements. Common cause variation is natural variation owing to factors inherent to any process; special cause variation is unnatural variation owing to external factors. Statistical process control methods, and particularly control charts, are robust tools for understanding data over time and identifying common and special cause variation. This review provides a practical introduction to the use of control charts in health care QI, with a focus on neonatology. Copyright © 2017 Elsevier Inc. All rights reserved.
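For individual measurements, the XmR (individuals and moving range) chart is the usual starting point: limits are set from the average moving range, and points outside them signal special cause variation. A minimal sketch of the limit calculation:

```python
import numpy as np

def xmr_limits(x):
    """Individuals (XmR) control chart limits from the average moving range.

    Points outside mean +/- 2.66 * mean moving range signal special cause
    variation; points within the limits reflect common cause variation.
    (2.66 = 3 / d2 with d2 = 1.128 for subgroups of size 2.)"""
    x = np.asarray(x, float)
    mr_bar = np.abs(np.diff(x)).mean()      # average moving range
    center = x.mean()
    ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar
    special = (x > ucl) | (x < lcl)         # flagged special cause points
    return center, lcl, ucl, special
```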
Fast magnetic resonance imaging based on high degree total variation
NASA Astrophysics Data System (ADS)
Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng
2018-04-01
In order to eliminate the artifacts and the "staircase effect" of total variation in compressive sensing MRI, a high degree total variation model is proposed for dynamic MRI reconstruction. The high degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and an iterative weighted MM algorithm is proposed to solve the convex optimization problem of the reconstructed MR image model. In addition, a set of cardiac magnetic resonance data is used to verify the proposed algorithm. The results show that the high degree total variation method achieves a better reconstruction than total variation and total generalized variation, obtaining a higher reconstruction SNR and better structural similarity.
Kashyap, Kanchan L; Bajpai, Manish K; Khanna, Pritee; Giakos, George
2018-01-01
Automatic segmentation of abnormal regions is a crucial task in computer-aided detection systems using mammograms. In this work, an automatic abnormality detection algorithm for mammographic images is proposed. In the preprocessing step, a partial differential equation-based variational level set method is used for breast region extraction. The level set is evolved by applying a mesh-free radial basis function (RBF) method, which removes the limitation of mesh-based approaches; for comparison, the variational level set function is also evolved by a mesh-based finite difference method. Unsharp masking and median filtering are used for mammogram enhancement. Suspicious abnormal regions are segmented by applying fuzzy c-means clustering. Texture features are extracted from the segmented suspicious regions by computing the local binary pattern and the dominated rotated local binary pattern (DRLBP). Finally, suspicious regions are classified as normal or abnormal by means of a support vector machine with linear, multilayer perceptron, radial basis, and polynomial kernel functions. The algorithm is validated on 322 sample mammograms from the Mammographic Image Analysis Society (MIAS) dataset and 500 mammograms from the Digital Database for Screening Mammography (DDSM). Proficiency of the algorithm is quantified using sensitivity, specificity, and accuracy. The highest sensitivity, specificity, and accuracy of 93.96%, 95.01%, and 94.48%, respectively, are obtained on the MIAS dataset using the DRLBP feature with the RBF kernel function, whereas the highest sensitivity, specificity, and accuracy of 92.31%, 98.45%, and 96.21%, respectively, are achieved on the DDSM dataset using the DRLBP feature with the RBF kernel function. Copyright © 2017 John Wiley & Sons, Ltd.
Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment
NASA Astrophysics Data System (ADS)
David, S.; Visvikis, D.; Roux, C.; Hatt, M.
2011-09-01
In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters, often restricted to the maximum SUV measured in PET scans during treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion; on clinical datasets, it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist in extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on the biological tumor volume definition for radiotherapy applications.
Illumination Invariant Change Detection (iicd): from Earth to Mars
NASA Astrophysics Data System (ADS)
Wan, X.; Liu, J.; Qin, M.; Li, S. Y.
2018-04-01
Multi-temporal Earth observation and Mars orbital imagery data with frequent repeat coverage provide great capability for planetary surface change detection. When comparing two images taken at different times of day or in different seasons for change detection, the variation of topographic shades and shadows caused by the change of sunlight angle can be so significant that it overwhelms the real object and environmental changes, making automatic detection unreliable. An effective change detection algorithm therefore has to be robust to illumination variation. This paper presents our research on developing and testing an Illumination Invariant Change Detection (IICD) method that relies on the robustness of phase correlation (PC) to variation in solar illumination for image matching. The IICD is based on two key functions: i) initial change detection based on a saliency map derived from pixel-wise dense PC matching, and ii) change quantization, which combines change type identification, motion estimation and precise appearance change identification. Experiments using multi-temporal Landsat 7 ETM+ satellite images, RapidEye satellite images and Mars HiRISE images demonstrate that our frequency-based image matching method can reach sub-pixel accuracy, and thus that the proposed IICD method can effectively detect and precisely segment large scale changes such as landslides as well as small object changes such as a Mars rover, under daily and seasonal sunlight changes.
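Phase correlation owes its illumination robustness to normalizing away the Fourier magnitude so that only phase, i.e., geometry, contributes to the match. A minimal integer-pixel translation estimator (subpixel refinement, needed for the accuracy reported above, is omitted; the peak height can double as a saliency score for the dense matching stage):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer-pixel translation of image b relative to image a.

    The cross-power spectrum is normalized to unit magnitude, so
    illumination-driven amplitude differences largely drop out."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    surface = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    shift = np.array(np.unravel_index(np.argmax(surface), surface.shape))
    for axis, n in enumerate(a.shape):            # unwrap negative shifts
        if shift[axis] > n // 2:
            shift[axis] -= n
    return shift, surface.max()                   # peak height ~ match confidence
```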
NASA Astrophysics Data System (ADS)
Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.
2009-10-01
Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We therefore exploit this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for comparisons of protein expression values.
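The coupled segment-and-correct loop can be sketched as follows. The low-order polynomial shading surface, the piecewise-constant ideal image, and the midpoint threshold update are simplifying assumptions standing in for the paper's fast minimization of intra-class intensity variation; the initial mask is assumed to contain both classes.

```python
import numpy as np

def correct_shading(img, mask, degree=2, n_iter=3):
    """Iteratively estimate and divide out a smooth multiplicative shading field.

    mask: initial foreground (cell) segmentation. Within each class the true
    intensity is assumed near-constant, so a low-order polynomial surface is
    fit to the observed/ideal ratio and divided out; the mask is then refined."""
    rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    terms = [rows**i * cols**j for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1).astype(float)
    corrected = img.astype(float)
    for _ in range(n_iter):
        # piecewise-constant ideal image: per-class mean intensity
        ideal = np.where(mask, corrected[mask].mean(), corrected[~mask].mean())
        coef, *_ = np.linalg.lstsq(A, (corrected / (ideal + 1e-9)).ravel(),
                                   rcond=None)
        shading = (A @ coef).reshape(img.shape)   # smooth multiplicative field
        corrected = corrected / np.clip(shading, 1e-3, None)
        thr = 0.5 * (corrected[mask].mean() + corrected[~mask].mean())
        mask = corrected > thr                    # refine the segmentation
    return corrected, mask
```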
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
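The two-point (Rosenblueth-type) estimate is simple to sketch for an arbitrary model function; the toy head function below is hypothetical, and uncorrelated, symmetric input distributions are assumed. Because the full method uses 2**n model runs for n uncertain variables, its cost advantage over Monte Carlo erodes as n grows, consistent with the fewer-than-eight-variables remark.

```python
import numpy as np
from itertools import product

def two_point_estimate(model, means, cvs):
    """Two-point estimate of the mean and CV of a model output.

    Evaluates the model at every +/- one-standard-deviation corner
    (2**n runs for n uncertain inputs, all equally weighted for
    uncorrelated, symmetric input distributions).
    """
    means = np.asarray(means, float)
    sigmas = means * np.asarray(cvs, float)
    outputs = [model(means + np.array(signs) * sigmas)
               for signs in product((-1.0, 1.0), repeat=len(means))]
    out_mean = np.mean(outputs)
    return out_mean, np.std(outputs) / out_mean

# Hypothetical head as a toy nonlinear function of storage coeff. and K
head = lambda p: 10.0 + 2.0 / p[1] + 0.5 * p[0]
print(two_point_estimate(head, means=[0.1, 5.0], cvs=[0.1, 0.2]))
```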
Correction of rotational distortion for catheter-based en face OCT and OCT angiography
Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.
2015-01-01
We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of the human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133
NASA Astrophysics Data System (ADS)
Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing
2018-02-01
In general, sound waves cause vibration of objects encountered along their traveling path. If a laser beam illuminates the rough surface of such an object, it is scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. The method selects, from a small region of the speckle patterns, the pixels whose gray-value variations over time have large variances. The gray-value variations of these pixels are summed according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, the method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified on a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and requires more than an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal of 1.876 s duration was recovered from various objects with a time consumption of only 5.38 s.
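The pixel-selection-and-summing scheme can be sketched as follows; the abstract does not spell out the "simple model" used for summing, so the sign-alignment step and array shapes below are assumptions.

```python
import numpy as np

def recover_sound(frames, n_pixels=100):
    """Recover a sound waveform from a stack of speckle frames.

    frames: array of shape (T, H, W) from a high-speed camera.
    Selects the pixels whose gray values vary most over time and sums
    their (sign-aligned) temporal variations into one waveform.
    """
    T = frames.shape[0]
    flat = frames.reshape(T, -1).astype(float)
    var = flat.var(axis=0)
    idx = np.argsort(var)[-n_pixels:]            # largest-variance pixels
    sel = flat[:, idx] - flat[:, idx].mean(axis=0)
    # Align signs so pixels vibrating in antiphase do not cancel (assumption)
    ref = sel[:, -1]
    signs = np.sign((sel * ref[:, None]).sum(axis=0))
    signal = (sel * signs).sum(axis=1)
    return signal / np.abs(signal).max()
```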
Deviation diagnosis and analysis of hull flat block assembly based on a state space model
NASA Astrophysics Data System (ADS)
Zhang, Zhiying; Dai, Yinfang; Li, Zhen
2012-09-01
Dimensional control is one of the most important challenges in the shipbuilding industry. In order to predict assembly dimensional variation in hull flat block construction, a state-space variation stream model was presented in this paper, which can be further applied to accuracy control in shipbuilding. Part accumulative error, locating error, and welding deformation were taken into consideration in this model, and the variation propagation mechanisms and accumulation rules in the assembly process were analyzed. A model was then developed to describe the variation propagation throughout the assembly process. Finally, an example of flat block construction from an actual shipyard was given. The result shows that this method is effective and useful.
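A generic state-space variation propagation sketch in the spirit of the model described; the station matrices and error covariances are hypothetical placeholders for the part accumulative, locating and welding error sources named in the paper.

```python
import numpy as np

def propagate_variation(A_list, covs_w, cov0):
    """Propagate assembly variation x_{k+1} = A_k x_k + w_k through stations.

    cov0   : covariance of the incoming part deviations
    covs_w : per-station covariance of locating/welding errors w_k
    Returns the deviation covariance after each assembly station.
    """
    cov = np.asarray(cov0, float)
    history = []
    for A, Q in zip(A_list, covs_w):
        A = np.asarray(A, float)
        cov = A @ cov @ A.T + np.asarray(Q, float)   # linear propagation + new error
        history.append(cov)
    return history
```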
The fluid dynamic approach to equidistribution methods for grid generation and adaptation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delzanno, Gian Luca; Finn, John M
2009-01-01
The equidistribution methods based on L_p Monge-Kantorovich optimization [Finn and Delzanno, submitted to SISC, 2009] and on the deformation method [Moser, 1965; Dacorogna and Moser, 1990; Liao and Anderson, 1992] are analyzed primarily in the context of grid generation. It is shown that the first class of methods can be obtained from a fluid dynamic formulation based on time-dependent equations for the mass density and the momentum density, arising from a variational principle. In this context, deformation methods arise from a fluid formulation by making a specific assumption on the time evolution of the density (but with some degree of freedom for the momentum density). In general, deformation methods do not arise from a variational principle. However, it is possible to prescribe an optimal deformation method, related to L_1 Monge-Kantorovich optimization, by making a further assumption on the momentum density. Some applications of the L_p fluid dynamic formulation to imaging are also explored.
Path-space variational inference for non-equilibrium coarse-grained systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics; Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr
In this paper we discuss information-theoretic tools for obtaining optimized coarse-grained molecular models for both equilibrium and non-equilibrium molecular simulations. The latter are ubiquitous in physicochemical and biological applications, where they are typically associated with coupling mechanisms, multi-physics and/or boundary conditions. In general the non-equilibrium steady states are not known explicitly as they do not necessarily have a Gibbs structure. The presented approach can compare microscopic behavior of molecular systems to parametric and non-parametric coarse-grained models using the relative entropy between distributions on the path space and setting up a corresponding path-space variational inference problem. The methods can become entirely data-driven when the microscopic dynamics are replaced with corresponding correlated data in the form of time series. Furthermore, we present connections and generalizations of force matching methods in coarse-graining with path-space information methods. We demonstrate the enhanced transferability of information-based parameterizations to different observables, at a specific thermodynamic point, due to information inequalities. We discuss methodological connections between information-based coarse-graining of molecular systems and variational inference methods primarily developed in the machine learning community. However, we note that the work presented here addresses variational inference for correlated time series due to the focus on dynamics. The applicability of the proposed methods is demonstrated on high-dimensional stochastic processes given by overdamped and driven Langevin dynamics of interacting particles.
NASA Astrophysics Data System (ADS)
Ponte Castañeda, Pedro
2016-11-01
This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composite materials, whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.
Efficient genotype compression and analysis of large genetic variation datasets
Layer, Ryan M.; Kindlon, Neil; Karczewski, Konrad J.; Quinlan, Aaron R.
2015-01-01
Genotype Query Tools (GQT) is a new indexing strategy that expedites analyses of genome variation datasets in VCF format based on sample genotypes, phenotypes and relationships. GQT’s compressed genotype index minimizes decompression for analysis, and performance relative to existing methods improves with cohort size. We show substantial (up to 443-fold) performance gains over existing methods and demonstrate GQT’s utility for exploring massive datasets involving thousands to millions of genomes. PMID:26550772
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Jung, Hyung-Jo
2014-03-01
It is well known that the dynamic properties of a structure, such as its natural frequencies, depend not only on damage but also on environmental conditions (e.g., temperature). The variation in the dynamic characteristics of a structure due to environmental conditions may mask damage. Without taking the change of environmental conditions into account, false-positive or false-negative damage diagnoses may occur, making structural health monitoring unreliable. To address this problem, many researchers have constructed regression models of structural responses that include environmental factors. The key to the success of this approach is formulating the relationship between the input and output variables of the regression model so as to account for the environmental variations. However, it is quite challenging to determine in advance proper environmental variables and measurement locations that fully represent the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method depends strongly on the accuracy of the description of the normal condition. Generally, no prior information on the normal condition is available during data acquisition, so the normal condition is determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for structural damage detection under environmental change. One advantage of this method is the ability of generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise levels under environmental variation. A comparative study with conventional methods (i.e., a fixed reference scheme) demonstrates the superior performance of the proposed method for structural damage detection.
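The conventional PCA-based novelty detection that the proposed adaptive method improves on can be sketched as follows; the two-frequency feature data, the temperature dependence and the component count are all synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Baseline natural frequencies measured under varying (unknown) temperature
rng = np.random.default_rng(1)
temp = rng.uniform(-10, 30, size=(500, 1))
baseline = np.hstack([5.0 - 0.01 * temp, 12.0 - 0.02 * temp]) \
           + 0.01 * rng.standard_normal((500, 2))

# Retained components absorb the environmental variation; damage shows
# up as growth of the reconstruction residual (novelty index).
pca = PCA(n_components=1).fit(baseline)

def novelty_index(x):
    return np.linalg.norm(x - pca.inverse_transform(pca.transform(x)), axis=1)

healthy = novelty_index(baseline)
threshold = healthy.mean() + 3 * healthy.std()   # fixed-reference alarm level
```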
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stemkens, B; Glitzner, M; Kontaxis, C
Purpose: To assess the dose deposition in simulated single-fraction MR-Linac treatments of renal cell carcinoma, when inter-cycle respiratory motion variation is taken into account using online MRI. Methods: Three motion characterization methods, with increasing complexity, were compared to evaluate the effect of inter-cycle motion variation and drifts on the accumulated dose for an SBRT kidney MR-Linac treatment: 1) STATIC, in which static anatomy was assumed, 2) AVG-RESP, in which 4D-MRI phase-volumes were time-weighted, based on the respiratory phase and 3) PCA, in which 3D volumes were generated using a PCA-model, enabling the detection of inter-cycle variations and drifts. An experimental ITV-based kidney treatment was simulated in a 1.5T magnetic field on three volunteer datasets. For each volunteer a retrospectively sorted 4D-MRI (ten respiratory phases) and fast 2D cine-MR images (temporal resolution = 476ms) were acquired to simulate MR-imaging during radiation. For each method, the high spatio-temporal resolution 3D volumes were non-rigidly registered to obtain deformation vector fields (DVFs). Using the DVFs, pseudo-CTs (generated from the 4D-MRI) were deformed and the dose was accumulated for the entire treatment. The accuracies of all methods were independently determined using an additional, orthogonal 2D-MRI slice. Results: Motion was most accurately estimated using the PCA method, which correctly estimated drifts and inter-cycle variations (RMSE=3.2, 2.2, 1.1mm on average for STATIC, AVG-RESP and PCA, compared to the 2D-MRI slice). Dose-volume parameters on the ITV showed moderate changes (D99=35.2, 32.5, 33.8Gy for STATIC, AVG-RESP and PCA). AVG-RESP showed distinct hot/cold spots outside the ITV margin, which were more distributed for the PCA scenario, since inter-cycle variations were not modeled by the AVG-RESP method. Conclusion: Dose differences were observed when inter-cycle variations were taken into account. The increased inter-cycle randomness in motion as captured by the PCA model mitigates the local (erroneous) hotspots estimated by the AVG-RESP method.
NASA Astrophysics Data System (ADS)
Almasganj, Mohammad; Adabi, Saba; Fatemizadeh, Emad; Xu, Qiuyun; Sadeghi, Hamid; Daveluy, Steven; Nasiriavanaki, Mohammadreza
2017-03-01
Optical coherence tomography (OCT) has great potential to elicit clinically useful information from tissues due to its high axial and transversal resolution. In practice, an OCT setup cannot reach its theoretical resolution due to imperfections of its components, which make its images blurry. The blur differs across regions of the image; thus, it cannot be modeled by a single point spread function (PSF). In this paper, we investigate the use of solid phantoms to estimate the PSF of each sub-region of the imaging system. We then utilize Lucy-Richardson, Hybr and total variation (TV) based iterative deconvolution methods to mitigate the resulting spatially variant blur. It is shown that the TV-based method suppresses the so-called speckle noise in OCT images better than the two other approaches. The performance of the proposed algorithm is tested on various samples, including several skin tissues as well as a test image blurred with a synthetic PSF map, demonstrating qualitatively and quantitatively the advantage of TV-based deconvolution with a spatially variant PSF for enhancing image quality.
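A minimal sketch of spatially variant deblurring by tiling the image and deconvolving each sub-region with its phantom-measured PSF; Richardson-Lucy stands in for the compared methods, the region-map format is hypothetical, and neither blending at tile borders nor the TV regularization favoured by the paper is included.

```python
import numpy as np
from skimage import restoration

def deblur_by_region(img, psf_map, n_iter=30):
    """Apply Richardson-Lucy deconvolution with a region-specific PSF.

    img     : 2-D image with intensities scaled to [0, 1]
    psf_map : list of (row_slice, col_slice, psf) giving the PSF measured
              (e.g. from a solid phantom) for each sub-region
    """
    out = img.astype(float).copy()
    for rows, cols, psf in psf_map:
        out[rows, cols] = restoration.richardson_lucy(
            img[rows, cols].astype(float), psf, n_iter)
    return out
```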
Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad
2018-04-21
In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) was performed based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model had to be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used for modeling the growth of the microorganism. In both cases, the model predictions matched well with the experimental results during the exponential growth phase, indicating that the solid-medium weight-variation method is well suited to modeling volatile product formation in solid-state fermentation. In addition, the logistic model gave better predictions.
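Fitting the logistic kinetics mentioned above is straightforward with standard tools; the (time, biomass) points below are hypothetical placeholders for values derived from the dry-weight variation of the medium.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, x0, mu, xmax):
    """Logistic biomass growth, dX/dt = mu * X * (1 - X / xmax)."""
    return xmax / (1 + (xmax / x0 - 1) * np.exp(-mu * t))

# Hypothetical (hours, biomass-proxy) data derived from dry-weight loss
t = np.array([0, 12, 24, 36, 48, 60], float)
x = np.array([0.5, 1.1, 2.4, 4.0, 4.8, 5.0])
(x0, mu, xmax), _ = curve_fit(logistic, t, x, p0=[0.5, 0.1, 5.0])
```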
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Furst, A.
1978-01-01
The toxicity screening test method developed at the University of San Francisco was evaluated for reproducibility. The variables addressed were strain of mouse, lot of animals, and operator. There was a significant difference in response between Swiss Webster mice and ICR mice, with the latter exhibiting greater resistance; these two strains of mice are not interchangeable in this procedure. Variation between individual animals was significant and unavoidable. In view of this variation, between-lot and between-operator variations appear to have no practical significance. The significant variation between individual animals stresses the need for average values based on at least four animals, and preferably values based on at least two experiments and eight animals. Efforts to compare materials should be based on the evaluation of relatively simple responses using substantial numbers of animals, rather than on elaborate evaluation of single animals.
Höfener, Sebastian; Gomes, André Severo Pereira; Visscher, Lucas
2012-01-28
In this article, we present a consistent derivation of a density functional theory (DFT) based embedding method which encompasses wave-function theory-in-DFT (WFT-in-DFT) and the DFT-based subsystem formulation of response theory (DFT-in-DFT) by Neugebauer [J. Neugebauer, J. Chem. Phys. 131, 084104 (2009)] as special cases. This formulation, which is based on the time-averaged quasi-energy formalism, makes use of variational Lagrangian techniques to allow the use of non-variational (in particular: coupled cluster) wave-function-based methods. We show how, in the time-independent limit, we naturally obtain expressions for the ground-state DFT-in-DFT and WFT-in-DFT embedding via a local potential. We furthermore provide working equations for the special case in which coupled cluster theory is used to obtain the density and excitation energies of the active subsystem. A sample application is given to demonstrate the method. © 2012 American Institute of Physics
A dynamic unilateral contact problem with adhesion and friction in viscoelasticity
NASA Astrophysics Data System (ADS)
Cocou, Marius; Schryve, Mathieu; Raous, Michel
2010-08-01
The aim of this paper is to study an interaction law coupling recoverable adhesion, friction and unilateral contact between two viscoelastic bodies of Kelvin-Voigt type. A dynamic contact problem with adhesion and nonlocal friction is considered and its variational formulation is written as the coupling between an implicit variational inequality and a parabolic variational inequality describing the evolution of the intensity of adhesion. The existence and approximation of variational solutions are analysed, based on a penalty method, some abstract results and compactness properties. Finally, some numerical examples are presented.
Salimon, Jumat; Omar, Talal A.; Salih, Nadia
2014-01-01
Two different procedures for the methylation of fatty acids (FAs) and trans fatty acids (TFAs) in food fats were compared using gas chromatography (GC-FID). The base-catalyzed followed by an acid-catalyzed method (KOCH3/HCl) and the base-catalyzed followed by (trimethylsilyl)diazomethane (TMS-DM) method were used to prepare FA methyl esters (FAMEs) from lipids extracted from food products. In general, both methods were suitable for the determination of cis/trans FAs. The correlation coefficients (r) between the methods were relatively small (ranging from 0.86 to 0.99) and had a high level of agreement for the most abundant FAs. Significant differences (P = 0.05) were observed for unsaturated FAs (UFAs), specifically for TFAs. The results from the KOCH3/HCl method showed the lowest recovery values (%R) and higher variation (from 84% to 112%), especially for UFAs. The TMS-DM method had higher %R values, less variation (from 90% to 106%), and better balance between variation and %RSD values in intraday and interday measurements (less than 4% and 6%, respectively) than the KOCH3/HCl method, except for C12:0, C14:0, and C18:0. Nevertheless, the KOCH3/HCl method required less time and was less expensive than the TMS-DM method, which is more convenient for an accurate and thorough analysis of samples rich in cis/trans UFAs. PMID:24719581
Geometry of quantum Hall states: Gravitational anomaly and transport coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Can, Tankut, E-mail: tcan@scgp.stonybrook.edu; Laskin, Michael; Wiegmann, Paul B.
2015-11-15
We show that universal transport coefficients of the fractional quantum Hall effect (FQHE) can be understood as a response to variations of spatial geometry. Some transport properties are essentially governed by the gravitational anomaly. We develop a general method to compute correlation functions of FQH states in a curved space, where local transformation properties of these states are examined through local geometric variations. We introduce the notion of a generating functional and relate it to geometric invariant functionals recently studied in geometry. We develop two complementary methods to study the geometry of the FQHE. One method is based on iterating a Ward identity, while the other is based on a field theoretical formulation of the FQHE through a path integral formalism.
Denoising Medical Images using Calculus of Variations
Kohan, Mahdi Nakhaie; Behnam, Hamid
2011-01-01
We propose a method for medical image denoising using the calculus of variations and local variance estimation by shaped windows. This method reduces additive noise while preserving small patterns and edges. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method gives visual improvement as well as better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results in denoising a sample magnetic resonance image show that SNR, PSNR and RMSE were improved by 19, 9 and 21 percent, respectively. PMID:22606674
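A sketch using the standard Chambolle total-variation denoiser rather than the authors' exact variational functional; the image is synthetic, and the comment marks where the paper's shaped-window local variance estimate would enter.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                     # toy "anatomy"
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# weight trades noise removal against edge/texture preservation; a local
# variance estimate (shaped windows) could set it region by region.
denoised = denoise_tv_chambolle(noisy, weight=0.15)
```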
Forum for discussion and debate
NASA Technical Reports Server (NTRS)
1981-01-01
The application of statistical methods to meteorological data for which long, compatible series exist and where known trend changes took place is suggested. The effects of optical wedge deterioration, atmospheric aerosol variation, solar irradiance variations, etc., are evaluated. It is recommended that a coupled satellite and ground-based observational system be used to determine global long-term trends.
Li, Zhao; Dosso, Stan E; Sun, Dajun
2016-07-01
This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.
Interpretable functional principal component analysis.
Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo
2016-09-01
Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi
2018-05-01
A novel damage identification method for concrete continuous girder bridges based on spatially distributed long-gauge strain sensing is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges, which changes with the location of vehicles on the bridge, is studied. Based on this variation regularity, a calculation method for the distribution regularity of the area of the long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on this distribution regularity is conducted, and the results indicate that the method is effective for identifying damage and is not affected by the speed, axle number or weight of vehicles. Finally, a test on a real highway bridge is conducted, and the experimental results also show that the method is very effective for identifying damage in continuous girder bridges, while the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for the maintenance of continuous girder bridges on highways.
2013-01-01
Background SNPs&GO is a method for the prediction of deleterious Single Amino acid Polymorphisms (SAPs) using protein functional annotation. In this work, we present the web server implementation of SNPs&GO (WS-SNPs&GO). The server is based on Support Vector Machines (SVM) and for a given protein, its input comprises: the sequence and/or its three-dimensional structure (when available), a set of target variations and its functional Gene Ontology (GO) terms. The output of the server provides, for each protein variation, the probabilities to be associated to human diseases. Results The server consists of two main components, including updated versions of the sequence-based SNPs&GO (recently scored as one of the best algorithms for predicting deleterious SAPs) and of the structure-based SNPs&GO3d programs. Sequence and structure based algorithms are extensively tested on a large set of annotated variations extracted from the SwissVar database. Selecting a balanced dataset with more than 38,000 SAPs, the sequence-based approach achieves 81% overall accuracy, 0.61 correlation coefficient and an Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve of 0.88. For the subset of ~6,600 variations mapped on protein structures available at the Protein Data Bank (PDB), the structure-based method scores with 84% overall accuracy, 0.68 correlation coefficient, and 0.91 AUC. When tested on a new blind set of variations, the results of the server are 79% and 83% overall accuracy for the sequence-based and structure-based inputs, respectively. Conclusions WS-SNPs&GO is a valuable tool that includes in a unique framework information derived from protein sequence, structure, evolutionary profile, and protein function. WS-SNPs&GO is freely available at http://snps.biofold.org/snps-and-go. PMID:23819482
Defect detection of castings in radiography images using a robust statistical feature.
Zhao, Xinyue; He, Zaixing; Zhang, Shuyou
2014-01-01
One of the most commonly used optical methods for defect detection is radiographic inspection. Compared with methods that extract defects directly from the radiography image, model-based methods handle objects with complex structure well. However, detection of small low-contrast defects in nonuniformly illuminated images is still a major challenge for them. In this paper, we present a new method based on the grayscale arranging pairs (GAP) feature to detect casting defects in radiography images automatically. First, a model is built using pixel pairs with a stable intensity relationship, based on the GAP feature, from previously acquired images. Second, defects are extracted by statistically comparing the signs of intensity differences between the input image and the model. The robustness of the proposed method to noise and illumination variations has been verified on casting radioscopic images with defects. The experimental results showed that the average computation time of the proposed method in the testing stage is 28 ms per image on a computer with a Pentium Core 2 Duo 3.00 GHz processor. For comparison, we also evaluated the performance of the proposed method as well as that of the mixture-of-Gaussians-based and crossing line profile methods. The proposed method achieved 2.7% and 2.0% false negative rates in the noise and illumination variation experiments, respectively.
Still-to-video face recognition in unconstrained environments
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing
2015-02-01
Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limit of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are enrolled to avoid overfitting. In order to deal with the single image per person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approach and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.
Digital Signal Processing Methods for Ultrasonic Echoes.
Sinding, Kyle; Drapaca, Corina; Tittmann, Bernhard
2016-04-28
Digital signal processing has become an important component of data analysis in industrial applications. In particular, for ultrasonic thickness measurements the signal-to-noise ratio plays a major role in the accurate calculation of the arrival time. For this application a band-pass filter is not sufficient, since the noise level cannot be decreased enough for a reliable thickness measurement. This paper demonstrates the abilities of two regularization methods, total variation and Tikhonov, to filter acoustic and ultrasonic signals. Both methods are compared to frequency-based filtering for digitally produced signals as well as signals produced by ultrasonic transducers. This paper demonstrates the ability of the total variation and Tikhonov filters to recover signals from noisy acoustic data accurately and faster than a band-pass filter. Furthermore, the total variation filter has been shown to reduce the noise of a signal significantly for signals with clear ultrasonic echoes. Signal-to-noise ratios have been increased by over 400% using a simple parameter optimization. While frequency-based filtering is efficient for specific applications, this paper shows that noise reduction in ultrasonic systems can be much more efficient with regularization methods.
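Tikhonov-regularized smoothing has a convenient closed form worth sketching; the echo envelope below is synthetic, and the regularization weight is an assumed trade-off between noise suppression and signal bandwidth.

```python
import numpy as np

def tikhonov_filter(y, lam=10.0):
    """Tikhonov smoothing: argmin_x ||x - y||^2 + lam * ||D x||^2.

    D is the first-difference operator, so the closed-form solution
    (I + lam * D^T D) x = y penalizes rapid variation in x.
    """
    n = y.size
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n difference matrix
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

# e.g. smooth a noisy echo envelope before locating the arrival-time peak
t = np.linspace(0, 1e-5, 500)
echo = np.exp(-((t - 4e-6) / 5e-7) ** 2)      # synthetic echo envelope
noisy = echo + 0.3 * np.random.default_rng(0).standard_normal(t.size)
smoothed = tikhonov_filter(noisy, lam=5.0)
```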
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1991-01-01
New methods were developed for efficient aeroservoelastic analysis and optimization. The main target was to develop a method for investigating large structural variations using a single set of modal coordinates. This task was accomplished by basing the structural modal coordinates on normal modes calculated with a set of fictitious masses loading the locations of anticipated structural changes. The following subject areas are covered: (1) modal coordinates for aeroelastic analysis with large local structural variations; and (2) time simulation of flutter with large stiffness changes.
Self-optimizing Pitch Control for Large Scale Wind Turbine Based on ADRC
NASA Astrophysics Data System (ADS)
Xia, Anjun; Hu, Guoqing; Li, Zheng; Huang, Dongxiao; Wang, Fengxiang
2018-01-01
Since a wind turbine is a complex, nonlinear, strongly coupled system, traditional PI control methods can hardly achieve good control performance. A self-optimizing pitch control method based on active-disturbance-rejection control theory is proposed in this paper. A linear model of the wind turbine is derived by linearizing the aerodynamic torque equation, and the dynamic response of the wind turbine is transformed into a first-order linear system. An expert system is designed to optimize the amplification coefficient according to the pitch rate and the speed deviation. The purpose of the proposed control method is to regulate the amplification coefficient automatically and keep the variations of pitch rate and rotor speed in proper ranges. Simulation results show that the proposed pitch control method can modify the amplification coefficient effectively when it is not suitable and keep the variations of pitch rate and rotor speed in proper ranges.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2014-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is assessed through the variation in the resulting cone angle; the acceptable cone angle variation would depend on the aerodynamic requirements.
Sign Language Translator Application Using OpenCV
NASA Astrophysics Data System (ADS)
Triyono, L.; Pratisto, E. H.; Bawono, S. A. T.; Purnomo, F. A.; Yudhanto, Y.; Raharjo, B.
2018-03-01
This research focuses on the development of a sign language translator application based on OpenCV for Android; the application relies on color differences for segmentation. The authors also use a support vector machine to predict the gesture label. The results showed that a fingertip-coordinate search method can recognize hand gestures with open fingers, while gestures made with a clenched hand are recognized using Hu moment values. The fingertip method was more resilient in gesture recognition, with a success rate of 95% at distance variations of 35 cm and 55 cm, light intensity variations of approximately 90 lux and 100 lux, and a plain green background, compared with a success rate of 40% for the Hu moments method under the same parameters. Against outdoor backgrounds the application still cannot be used reliably, with only 6 gestures recognized successfully and the rest failing.
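A sketch of the Hu-moments branch under stated assumptions: a binary hand mask (as produced by the color-difference segmentation) is reduced to log-scaled Hu invariants and matched against stored templates. Nearest-neighbour matching here stands in for the SVM label prediction mentioned in the abstract.

```python
import cv2
import numpy as np

def hu_signature(mask):
    """Log-scaled Hu moment invariants of the largest contour in a
    uint8 binary hand mask (e.g. from color segmentation)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(c)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def classify(mask, templates):
    """Nearest template by Euclidean distance; templates is a list of
    (label, signature) pairs built from reference gestures."""
    s = hu_signature(mask)
    return min(templates, key=lambda t: np.linalg.norm(s - t[1]))[0]
```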
Zhao, Jiang Yan; Xie, Ping; Sang, Yan Fang; Xui, Qiang Qiang; Wu, Zi Yi
2018-04-01
Under the influence of both global climate change and frequent human activities, variability of the second moment in hydrological time series has become evident, indicating changes in the consistency of hydrological data samples. Therefore, traditional hydrological series analysis methods, which only consider the variability of mean values, are not suitable for handling all hydrological non-consistency problems. Traditional synthetic duration curve methods for the design of the lowest navigable water level, based on the consistency of samples, would add risk to navigation, especially under low water levels in dry seasons. Here, we detected both mean variation and variance variation using a hydrological variation diagnosis system. Furthermore, combining the principle of decomposition and composition of time series, we proposed a synthetic duration curve method for designing the lowest navigable water level with inconsistent characteristics in dry seasons. With the Yunjinghong Station in the Lancang River Basin as an example, we analyzed its designed water levels in the present, the distant past and the recent past, as well as the differences among three situations (i.e., considering second-moment variation, only considering mean variation, not considering any variation). Results showed that variability of the second moment changed the trend of designed water level alteration at the Yunjinghong Station. Between considering the first two moments and considering only the mean variation, the difference in designed water levels reached -1.11 m; between considering the first two moments and considering no variation, the difference reached -1.01 m. Our results indicate the strong effect of variance variation on designed water levels and highlight the importance of second-moment variation analysis for channel planning and design.
The Energetic Cost of Walking: A Comparison of Predictive Methods
Kramer, Patricia Ann; Sylvester, Adam D.
2011-01-01
Background The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is “best”, but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. Methodology/Principal Findings We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Conclusion Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species. PMID:21731693
Evaluation of redundancy analysis to identify signatures of local adaptation.
Capblancq, Thibaut; Luu, Keurcien; Blum, Michael G B; Bazin, Eric
2018-05-26
Ordination is a common tool in ecology that aims at representing complex biological information in a reduced space. In landscape genetics, ordination methods such as principal component analysis (PCA) have been used to detect adaptive variation based on genomic data. Taking advantage of environmental data in addition to genotype data, redundancy analysis (RDA) is another ordination approach that is useful to detect adaptive variation. This paper proposes a test statistic based on RDA to search for loci under selection. We compare redundancy analysis to pcadapt, which is a nonconstrained ordination method, and to a latent factor mixed model (LFMM), which is a univariate genotype-environment association method. Individual-based simulations identify evolutionary scenarios where RDA genome scans have greater statistical power than genome scans based on PCA. By constraining the analysis with environmental variables, RDA performs better than PCA in identifying adaptive variation when selection gradients are weakly correlated with population structure. Additionally, we show that while RDA and LFMM have similar power to identify genetic markers associated with environmental variables, the RDA-based procedure has the advantage of identifying the main selective gradients as a combination of environmental variables. To give a concrete illustration of RDA in population genomics, we apply this method to the detection of outliers and selective gradients in an SNP data set of Populus trichocarpa (Geraldes et al., 2013). The RDA-based approach identifies the main selective gradient contrasting southern and coastal populations with northern and continental populations on the northwestern American coast.
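A stripped-down RDA genome scan under simplifying assumptions: regress the genotype matrix on environmental predictors, take an SVD of the fitted values, and inspect per-locus loadings on the constrained axes; the calibration needed to turn loadings into a formal test statistic is omitted.

```python
import numpy as np

def rda_loadings(G, E, n_axes=2):
    """Redundancy analysis: constrained ordination of genotypes by environment.

    G : (individuals x loci) genotype matrix
    E : (individuals x variables) environmental matrix
    Returns per-locus loadings on the constrained axes.
    """
    G = G - G.mean(axis=0)
    E = E - E.mean(axis=0)
    # Fitted values of the multivariate regression of G on E
    B, *_ = np.linalg.lstsq(E, G, rcond=None)
    G_hat = E @ B
    # PCA (via SVD) of the environmentally explained part
    _, _, Vt = np.linalg.svd(G_hat, full_matrices=False)
    return Vt[:n_axes].T                      # loci x axes loadings
```

Loci whose standardized loadings fall beyond, say, three standard deviations on any constrained axis would then be the outlier candidates.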
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CVA): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CVA based on log-Gaussian distributions of CVI expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CVI and CVA, is generally useful.
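The two calculations can be made concrete with the widely used reference change value formulas; whether these match the authors' exact model is an assumption, but they illustrate the symmetric Gaussian limits versus the asymmetric log-Gaussian ones.

```python
import numpy as np

def rcv(cv_a, cv_i, z=1.96):
    """Reference change values (as fractions) under two distributional models.

    cv_a, cv_i : analytical and within-subject CVs as fractions (e.g. 0.05)
    Returns (symmetric Gaussian RCV, (downward, upward) log-Gaussian RCVs).
    """
    cv2 = cv_a ** 2 + cv_i ** 2
    rcv_gauss = z * np.sqrt(2.0 * cv2)
    sigma = np.sqrt(np.log(1.0 + cv2))        # SD on the natural-log scale
    rcv_log = (np.exp(-z * np.sqrt(2.0) * sigma) - 1.0,
               np.exp(+z * np.sqrt(2.0) * sigma) - 1.0)
    return rcv_gauss, rcv_log

print(rcv(0.05, 0.20))  # hypothetical prolactin-like CVs
```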
Macro-architectured cellular materials: Properties, characteristic modes, and prediction methods
NASA Astrophysics Data System (ADS)
Ma, Zheng-Dong
2017-12-01
Macro-architectured cellular (MAC) material is defined as a class of engineered materials having configurable cells of relatively large (i.e., visible) size that can be architecturally designed to achieve various desired material properties. Two types of novel MAC materials, negative Poisson's ratio material and biomimetic tendon reinforced material, were introduced in this study. To estimate the effective material properties for structural analyses and to optimally design such materials, a set of suitable homogenization methods was developed that provided an effective means for the multiscale modeling of MAC materials. First, a strain-based homogenization method was developed using an approach that separated the strain field into a homogenized strain field and a strain variation field in the local cellular domain superposed on the homogenized strain field. The principle of virtual displacements for the relationship between the strain variation field and the homogenized strain field was then used to condense the strain variation field onto the homogenized strain field. The new method was then extended to a stress-based homogenization process based on the principle of virtual forces and further applied to address the discrete systems represented by the beam or frame structures of the aforementioned MAC materials. The characteristic modes and the stress recovery process used to predict the stress distribution inside the cellular domain and thus determine the material strengths and failures at the local level are also discussed.
NASA Astrophysics Data System (ADS)
Ma, Qing; Chiras, S.; Clarke, D. R.; Suo, Z.
1995-08-01
Large tensile stresses usually exist in metallic interconnect lines on silicon substrates as a result of thermal mismatch. When a current is subsequently passed, any divergence of atomic flux can create superimposed stress variations along the line. Together, these stresses can significantly influence the growth of voids and therefore affect interconnect reliability. In this work, a high-resolution (~2 μm) optical spectroscopy method has been used to measure the localized stresses around passivated aluminum lines on a silicon wafer, both as-fabricated and after electromigration testing. The method is based on the piezospectroscopic properties of silicon, specifically the frequency shift of the silicon Raman line at ~520 cm-1. By focusing a laser beam at points adjacent to the aluminum lines, the Raman signal was excited and collected. The stresses in the aluminum lines can then be derived from the stresses in the silicon using finite element methods. Large variations of stress along an electromigration-tested line were observed and compared to a theoretical model based on differences in effective diffusivities from grain to grain in a polycrystalline interconnect line.
NASA Astrophysics Data System (ADS)
Singh, K.; Sandu, A.; Bowman, K. W.; Parrington, M.; Jones, D. B. A.; Lee, M.
2011-08-01
Chemistry transport models determine the evolving chemical state of the atmosphere by solving the fundamental equations that govern physical and chemical transformations, subject to initial conditions of the atmospheric state and surface boundary conditions, e.g., surface emissions. Data assimilation techniques synthesize model predictions and measurements in a rigorous mathematical framework that provides observational constraints on these conditions. Two families of data assimilation methods are currently widely used: variational and Kalman filter (KF). The variational approach is based on control theory and formulates data assimilation as the minimization of a cost functional that measures the model-observation mismatch. The Kalman filter approach is rooted in statistical estimation theory and provides the analysis covariance together with the best state estimate. Suboptimal Kalman filters employ different approximations of the covariances in order to make the computations feasible with large models. Each family of methods has both merits and drawbacks. This paper compares several data assimilation methods used for global chemical data assimilation. Specifically, we evaluate data assimilation approaches for improving estimates of the summertime global tropospheric ozone distribution in August 2006, based on ozone observations from the NASA Tropospheric Emission Spectrometer and the GEOS-Chem chemistry transport model. The resulting analyses are compared against independent ozonesonde measurements to assess the effectiveness of each assimilation method. All assimilation methods provide notable improvements over the free model simulations, which differ from the ozonesonde measurements by about 20 % (below 200 hPa). Four-dimensional variational data assimilation with window lengths between five days and two weeks is the most accurate method, with mean differences between analysis profiles and ozonesonde measurements of 1-5 %. Two sequential assimilation approaches (three-dimensional variational and suboptimal KF), although derived from different theoretical considerations, provide similar ozone estimates, with relative differences of 5-10 % between the analyses and ozonesonde measurements. Adjoint sensitivity analysis techniques are used to explore the role of uncertainties in ozone precursors and their emissions on the distribution of tropospheric ozone. A novel technique is introduced that projects 3-D variational increments back to an equivalent initial condition, which facilitates comparison with 4-D variational techniques.
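The variational side is easy to make concrete for a linear observation operator; the sketch below is the quadratic 3D-Var cost and gradient with placeholder matrices. 4D-Var differs by observing the model trajectory over a window and propagating the gradient with the adjoint model.

```python
import numpy as np

def threedvar_cost_grad(x, xb, B_inv, y, H, R_inv):
    """3D-Var cost J(x) and gradient for a linear observation operator H:
    J = 0.5*(x-xb)' B^-1 (x-xb) + 0.5*(y-Hx)' R^-1 (y-Hx)."""
    dxb = x - xb
    innov = y - H @ x
    J = 0.5 * dxb @ B_inv @ dxb + 0.5 * innov @ R_inv @ innov
    grad = B_inv @ dxb - H.T @ R_inv @ innov
    return J, grad

# The analysis is the minimizer, e.g. with a gradient-based optimizer:
# from scipy.optimize import minimize
# xa = minimize(threedvar_cost_grad, xb,
#               args=(xb, B_inv, y, H, R_inv), jac=True).x
```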
A coupled mode formulation by reciprocity and a variational principle
NASA Technical Reports Server (NTRS)
Chuang, Shun-Lien
1987-01-01
A coupled mode formulation for parallel dielectric waveguides is presented via two methods: a reciprocity theorem and a variational principle. In the first method, a generalized reciprocity relation for two sets of field solutions satisfying Maxwell's equations and the boundary conditions in two different media, respectively, is derived. Based on the generalized reciprocity theorem, the coupled mode equations can then be formulated. The second method, using a variational principle, is also presented for a general waveguide system which can be lossy. The results of the variational principle can be shown to be identical to those from the reciprocity theorem. The exact relations governing the 'conventional' and the new coupling coefficients are derived. It is shown analytically that the present formulation satisfies the reciprocity theorem and power conservation exactly, while the conventional theory violates power conservation and the reciprocity theorem by as much as 55 percent and the Hardy-Streifer (1985, 1986) theory by 0.033 percent, for example.
ERIC Educational Resources Information Center
Kimball, Steven M.; Milanowski, Anthony
2009-01-01
Purpose: The article reports on a study of school leader decision making that examined variation in the validity of teacher evaluation ratings in a school district that has implemented a standards-based teacher evaluation system. Research Methods: Applying mixed methods, the study used teacher evaluation ratings and value-added student achievement…
Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik
2009-11-14
Efficient optimization of the basis set is key to achieving a very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in the variational calculations of H(3) where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm(-1)) obtained in the calculation with 1000 Gaussians are the most accurate results to date.
USDA-ARS?s Scientific Manuscript database
This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...
NASA Astrophysics Data System (ADS)
Leijala, Ulpu; Björkqvist, Jan-Victor; Johansson, Milla M.; Pellikka, Havu
2017-04-01
Future coastal management continuously strives for more location-specific and precise methods to investigate possible extreme sea level events and to face flooding hazards in the most appropriate way. Evaluating future flooding risks by understanding the joint effect of sea level variations and wind waves is one way to make flooding hazard analysis more comprehensive, and may at first seem a straightforward task. Nevertheless, challenges and limitations such as the availability of time series of the sea level and wave height components, the quality of data, significant locational variability of coastal wave height, and the assumptions to be made depending on the study location make the task more complicated. In this study, we present a statistical method for combining location-specific probability distributions of water level variations (including local sea level observations and global mean sea level rise) and wave run-up (based on wave buoy measurements). The goal of our method is to account for waves in coastal flooding hazard analysis more accurately than the common approach of adding a fixed wave action height on top of sea-level-based flood risk estimates. As a result of our new method, we obtain maximum elevation heights, with different return periods, of the continuous water mass caused by the combination of both phenomena, "the green water". We also introduce a sensitivity analysis to evaluate the properties and functioning of our method. The sensitivity test is based on theoretical wave distributions representing different alternatives of wave behaviour in relation to sea level variations. As these wave distributions are merged with the sea level distribution, we learn how different wave height conditions and the shape of the wave height distribution influence the joint results. The method presented here can be used as an advanced tool to minimize over- and underestimation of the combined effect of sea level variations and wind waves, and to help coastal infrastructure planning and support the smooth and safe operation of coastal cities in a changing climate.
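If the two components are taken as independent on a common height grid, a strongly simplifying assumption that the paper's location-specific treatment refines, the distribution of their sum is a discrete convolution:

```python
import numpy as np

def sum_distribution(pdf_a, pdf_b, dz, z0_a=0.0, z0_b=0.0):
    """PDF of the sum of two independent random heights given on uniform
    grids with spacing dz starting at z0_a and z0_b; returns (grid, pdf)."""
    pdf_sum = np.convolve(pdf_a, pdf_b) * dz
    z = z0_a + z0_b + np.arange(pdf_sum.size) * dz
    return z, pdf_sum

# Exceedance probability of the combined "green water" level, from which
# levels for chosen return periods can be read off:
# z, pdf = sum_distribution(pdf_sea, pdf_runup, dz)
# exceed = 1.0 - np.cumsum(pdf) * dz
```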
Appraising the reliability of visual impact assessment methods
Nickolaus R. Feimer; Kenneth H. Craik; Richard C. Smardon; Stephen R.J. Sheppard
1979-01-01
This paper presents the research approach and selected results of an empirical investigation aimed at the evaluation of selected observer-based visual impact assessment (VIA) methods. The VIA methods under examination were chosen to cover a range of VIA methods currently in use in both applied and research settings. Variation in three facets of VIA methods were...
Support vector machine-based facial-expression recognition method combining shape and appearance
NASA Astrophysics Data System (ADS)
Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun
2010-11-01
Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, and face recognition robust to expression variation. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information based on the support vector machine (SVM). This research is novel in the following three ways compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, which is trained to recognize same- and different-expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image as the class whose SVM output is minimal, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
Use of Emergency Contraception among Women Aged 15-44: United States, 2006-2010
... had ever used emergency contraception reported fear of method failure as a reason for use, but variation ... had ever had sexual intercourse. Data source and methods This report is based primarily on data from ...
Hatch, Christine E; Fisher, Andrew T.; Revenaugh, Justin S.; Constantz, Jim; Ruehl, Chris
2006-01-01
We present a method for determining streambed seepage rates using time series thermal data. The new method is based on quantifying changes in phase and amplitude of temperature variations between pairs of subsurface sensors. For a reasonable range of streambed thermal properties and sensor spacings, the time series method should allow reliable estimation of seepage rates over a range of at least ±10 m d⁻¹ (±1.2 × 10⁻⁴ m s⁻¹), with amplitude variations being most sensitive at low flow rates and phase variations retaining sensitivity out to much higher rates. Compared to forward modeling, the new method requires less observational data and less setup and data handling, and is faster, particularly when interpreting many long data sets. The time series method is insensitive to streambed scour and sedimentation, which allows for application under a wide range of flow conditions and allows time series estimation of variable streambed hydraulic conductivity. This new approach should facilitate wider use of thermal methods and improve understanding of the complex spatial and temporal dynamics of surface water–groundwater interactions.
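A minimal sketch of the amplitude-ratio and phase-lag extraction on synthetic sensor data (ours; converting the ratio and lag into a seepage rate requires the thermal transport model, which is omitted here):

```python
import numpy as np

dt = 900.0                                       # 15-min sampling interval (s)
t = np.arange(0.0, 10 * 86400.0, dt)             # 10 days of data
f = 1.0 / 86400.0                                # diel frequency (Hz)
T_shallow = 15.0 + 3.0 * np.cos(2 * np.pi * f * t)        # upper sensor
T_deep = 15.0 + 1.2 * np.cos(2 * np.pi * f * t - 1.0)     # damped and lagged

def diel_component(x):
    """Complex amplitude of the diel harmonic by projection onto exp(-i 2 pi f t)."""
    return 2.0 * np.mean((x - x.mean()) * np.exp(-2j * np.pi * f * t))

c_sh, c_dp = diel_component(T_shallow), diel_component(T_deep)
print("amplitude ratio:", abs(c_dp) / abs(c_sh))                  # ~0.4
print("phase lag (s):", np.angle(c_sh / c_dp) / (2 * np.pi * f))  # ~13750 s
```

The amplitude ratio and phase lag between sensor pairs are exactly the two observables the method interprets: the ratio constrains low seepage rates, while the lag retains sensitivity at higher rates.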
Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain
2017-12-01
To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal-based, and gadolinium-concentration-based with either fixed or variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1%, respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different from, and more heterogeneous than, those in normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for comparison of inter-patient perfusion parameters. • Technical variation is higher in permeability than in semiquantitative perfusion parameters.
Temperature compensated and self-calibrated current sensor
Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane
2007-09-25
A method is described to provide temperature compensation and reduction of drift due to aging for a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. The offset voltage signal generated by each magnetic field sensor is used to correct variations in the output signal due to temperature variations and aging.
ERIC Educational Resources Information Center
Glover, Saundra; Bellinger, Jessica D.; Bae, Sejong; Rivers, Patrick A.; Singh, Karan P.
2010-01-01
Objective: The objective of this study is to determine racial and ethnic variations in specialty care utilization based on (a) perceived health status and (b) chronic disease status. Methods: Variations in specialty care utilization, by perceived health and chronic disease status, were examined using the Commonwealth Fund Health Care Quality…
Video-based face recognition via convolutional neural networks
NASA Astrophysics Data System (ADS)
Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming
2017-06-01
Face recognition has been widely studied, yet video-based face recognition remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: (1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and (2) Video-to-Still (V2S) face recognition, the converse of the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that are grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
ERIC Educational Resources Information Center
Aagaard, Jesper
2017-01-01
In time, phenomenology has become a viable approach to conducting qualitative studies in education. Popular and well-established methods include descriptive and hermeneutic phenomenology. Based on critiques of the essentialism and receptivity of these two methods, however, this article offers a third variation of empirical phenomenology:…
Spectra of variations and anisotropy of cosmic rays during GLE of May 17, 2012
NASA Astrophysics Data System (ADS)
Kravtsova, Marina; Sdobnov, Valery
Using ground-based observations of cosmic rays (CRs) from the World Network of Neutron Monitor Stations and a method of spectrographic global survey, we have examined variations in the rigidity spectrum and anisotropy of CRs during the ground level enhancement (GLE) of May 17, 2012. We showed the rigidity spectrum of amplitudes of CR variations, the behavior of pitch-angle anisotropy amplitudes, and the relative variations in intensity of CRs with rigidities of 2, 4, and 10 GV in the solar-ecliptic geocentric coordinate system in some periods of the event under study.
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Unlike existing normalization methods that either address only part of the cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In the evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method in terms of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness with respect to histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution should be useful for mitigating the effects of color variation in pathology images on subsequent quantitative analysis.
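As a rough illustration of the illuminant-normalization idea (ours, not the authors' exact module), one can estimate the illuminant from the brightest background pixels and divide it out per channel; the top_fraction parameter below is an illustrative assumption:

```python
import numpy as np

# White-point style illuminant normalization: assume the brightest pixels are
# unstained background (glass) and rescale channels so they become neutral.
def normalize_illuminant(img, top_fraction=0.05):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    brightness = img.sum(axis=2)
    thresh = np.quantile(brightness, 1.0 - top_fraction)
    background = img[brightness >= thresh]       # assumed unstained background pixels
    illuminant = background.mean(axis=0)         # per-channel illuminant estimate
    out = img / np.maximum(illuminant, 1e-6)     # divide out the illuminant
    return np.clip(out, 0.0, 1.0)
```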
Blazhko modulation in the infrared
NASA Astrophysics Data System (ADS)
Jurcsik, J.; Hajdu, G.; Dékány, I.; Nuspl, J.; Catelan, M.; Grebel, E. K.
2018-04-01
We present the first direct evidence of modulation in the K band of Blazhko-type RR Lyrae stars that are identified by their secular modulations in the I-band data of the Optical Gravitational Lensing Experiment-IV. A method has been developed to decompose the K-band light variation into two parts, originating from the temperature and the radius changes, using synthetic data from atmosphere-model grids. The amplitudes of the temperature and radius variations derived from the method for non-Blazhko RRab stars are in very good agreement with the results of the Baade-Wesselink analysis of RRab stars in the M3 globular cluster, confirming the applicability and correctness of the method. It has been found that the Blazhko modulation is primarily driven by the change in the temperature variation. The radius variation plays a marginal part; moreover, its sign is opposite to what would be expected if the Blazhko effect were caused by radius variations. This result reinforces the previous finding, based on the Baade-Wesselink analysis of M3 (NGC 5272) RR Lyrae stars, that significant modulation of the radius variations can only be detected in radial-velocity measurements, which rely on spectral lines that form in the uppermost atmospheric layers. Our result gives the first insight into the energetics and dynamics of the Blazhko phenomenon, and hence puts strong constraints on its possible physical explanations.
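The decomposition rests on the fact that the K-band flux scales with the stellar radius and the band-integrated surface brightness. A schematic first-order relation of the kind such a decomposition exploits (our notation, not the paper's atmosphere-model fit) is:

```latex
% With K-band flux F_K \propto R^2 B_K(T_\mathrm{eff}):
\frac{\Delta F_K}{F_K} \;\approx\; 2\,\frac{\Delta R}{R}
  \;+\; \left.\frac{\partial \ln B_K}{\partial T}\right|_{T_{\mathrm{eff}}}\,\Delta T ,
\qquad
\Delta m_K \;\approx\; -\frac{2.5}{\ln 10}\,\frac{\Delta F_K}{F_K}.
```

Fitting the observed light variation against model grids then separates the temperature term from the radius term.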
Predictive Array Design. A method for sampling combinatorial chemistry library space.
Lipkin, M J; Rose, V S; Wood, J
2002-01-01
A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a sub-array for synthesis based on the experimental design method of Latin squares. The method is appropriate for libraries with three sites of variation; libraries with four sites of variation can be designed using the Graeco-Latin square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to make predictions of the activity of compounds in the all-combinations array if we assume that each monomer makes a relatively constant contribution to activity and that the activity of a compound is the sum of the activities of its constituent monomers.
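A minimal sketch of the Latin-square selection for a three-site library (placeholder monomer names; the published method additionally optimizes the sub-array by simulated annealing, which is omitted here):

```python
import itertools

# With n monomers per site, the full library has n^3 members; a cyclic Latin
# square picks n^2 of them so that every monomer pair from any two sites
# occurs exactly once.
site_a = ["A1", "A2", "A3", "A4"]
site_b = ["B1", "B2", "B3", "B4"]
site_c = ["C1", "C2", "C3", "C4"]
n = len(site_a)

# Cyclic Latin square: cell (i, j) receives symbol (i + j) mod n
sub_array = [(site_a[i], site_b[j], site_c[(i + j) % n])
             for i, j in itertools.product(range(n), range(n))]

print(len(sub_array), "of", n ** 3, "compounds selected")
for compound in sub_array[:4]:
    print(compound)
```

Under the additivity assumption stated in the abstract, the n² measured activities suffice to estimate every monomer contribution, and hence to predict all n³ compounds.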
Optical measurements of absorption changes in two-layered diffusive media
NASA Astrophysics Data System (ADS)
Fabbri, Francesco; Sassaroli, Angelo; Henry, Michael E.; Fantini, Sergio
2004-04-01
We have used Monte Carlo simulations for a two-layered diffusive medium to investigate the effect of a superficial layer on the measurement of absorption variations from optical diffuse reflectance data processed by using: (a) a multidistance, frequency-domain method based on diffusion theory for a semi-infinite homogeneous medium; (b) a differential-pathlength-factor method based on a modified Lambert-Beer law for a homogeneous medium and (c) a two-distance, partial-pathlength method based on a modified Lambert-Beer law for a two-layered medium. Methods (a) and (b) lead to a single value for the absorption variation, whereas method (c) yields absorption variations for each layer. In the simulations, the optical coefficients of the medium were representative of those of biological tissue in the near-infrared. The thickness of the first layer was in the range 0.3-1.4 cm, and the source-detector distances were in the range 1-5 cm, which is typical of near-infrared diffuse reflectance measurements in tissue. The simulations have shown that (1) method (a) is mostly sensitive to absorption changes in the underlying layer, provided that the thickness of the superficial layer is ~0.6 cm or less; (2) method (b) is significantly affected by absorption changes in the superficial layer and (3) method (c) yields the absorption changes for both layers with a relatively good accuracy of ~4% for the superficial layer and ~10% for the underlying layer (provided that the absorption changes are less than 20-30% of the baseline value). We have applied all three methods of data analysis to near-infrared data collected on the forehead of a human subject during electroconvulsive therapy. Our results suggest that the multidistance method (a) and the two-distance partial-pathlength method (c) may better decouple the contributions to the optical signals that originate in deeper tissue (brain) from those that originate in more superficial tissue layers.
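For reference, methods (b) and (c) rest on modified Lambert-Beer relations of the following standard form (our notation; DPF is the differential pathlength factor and L_kl the partial pathlength at source-detector distance ρ_k in layer l):

```latex
\Delta OD(\rho) = \Delta\mu_a\,\mathrm{DPF}(\rho)\,\rho
\;\;\Rightarrow\;\;
\Delta\mu_a = \frac{\Delta OD(\rho)}{\mathrm{DPF}(\rho)\,\rho},
\qquad
\Delta OD(\rho_k) = \sum_{l=1}^{2} L_{kl}\,\Delta\mu_{a,l},\quad k = 1,2 .
```

Inverting the 2 × 2 system of partial pathlengths yields the layer-specific absorption changes of method (c).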
AFLP Variation in Populations of Podisus maculiventris
USDA-ARS's Scientific Manuscript database
We are developing methods to reduce costs of mass producing beneficial insect species for biological control programs. One of our methods entails selecting beneficials for optimal production traits. Currently we are selecting for increased fecundity. Selection protocols, whether based on phenotyp...
Dudik, Joshua M.; Kurosu, Atsuko; Coyle, James L
2015-01-01
Background: Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. Methods: In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively, and the results were compared to a gold standard measure of swallowing duration. Data were collected from 23 subjects actively suffering from swallowing difficulties. Results: Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits, including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. Conclusions: In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. PMID:25658505
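A minimal sketch of DBSCAN-based activity segmentation on a synthetic vibration signal (window length, eps, and min_samples are illustrative placeholders, not the paper's tuned values): quiet baseline windows form the dominant dense cluster, and windows falling outside it are candidate swallowing activity.

```python
import numpy as np
from sklearn.cluster import DBSCAN

fs = 4000                                        # sampling rate (Hz)
rng = np.random.default_rng(0)
sig = 0.02 * rng.standard_normal(20 * fs)        # quiet baseline
sig[5 * fs:6 * fs] += 0.5 * rng.standard_normal(fs)   # synthetic "swallow" burst

win = fs // 10                                   # 100-ms analysis windows
feats = np.array([[np.log(np.var(sig[i:i + win]) + 1e-12)]
                  for i in range(0, sig.size - win, win)])

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(feats)
baseline = np.bincount(labels[labels >= 0]).argmax()   # dominant (quiet) cluster
events = np.where(labels != baseline)[0]
print("candidate activity near t =", events * win / fs, "s")
```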
The effect of individually-induced processes on image-based overlay and diffraction-based overlay
NASA Astrophysics Data System (ADS)
Oh, SeungHwa; Lee, Jeongjin; Lee, Seungyoon; Hwang, Chan; Choi, Gilheyun; Kang, Ho-Kyu; Jung, EunSeung
2014-04-01
In this paper, a set of wafers with separated processes was prepared and overlay measurement results were compared between the two methods, IBO and DBO. Based on the experimental results, a theoretical approach to the relationship between overlay mark deformation and overlay variation is presented. Moreover, overlay reading simulation was used to verify and predict overlay variation due to deformation of the overlay mark caused by the induced processes. This study provides an understanding of the effects of individual processes on overlay measurement error. Additionally, a guideline for selecting the proper overlay measurement scheme for a specific layer is presented.
Wu, Shuang; Chen, Jie; Li, Chen; Kong, Delei; Yu, Kai; Liu, Shuwei; Zou, Jianwen
2018-02-07
Agricultural nitrate leaching and runoff incurs high nitrogen loads in agricultural irrigation watersheds, constituting one of the important sources of atmospheric nitrous oxide (N₂O). Two independent sampling campaigns, of N₂O flux measurements over diel cycles and of N₂O flux measurements once a week over annual cycles, were carried out in an agricultural irrigation watershed in southeast China using floating chamber (chamber-based) and gas transfer equation (model-based) methods. The diel and seasonal patterns of N₂O fluxes did not differ between the two measurement methods. The diel variation in N₂O fluxes was characterized by greater fluxes during nighttime than daytime periods, with a single flux peak at midnight. The diel variation in N₂O fluxes was closely associated with water environment and chemistry. The time interval of 9:00-11:00 a.m. was identified as the sampling time that best represents daily N₂O flux measurements in agricultural irrigation watersheds. Seasonal N₂O fluxes showed large variation, with some flux peaks corresponding to agricultural irrigation and drainage episodes and to heavy rainfall during the crop-growing period of May to November. On average, N₂O fluxes calculated by the model-based method were 27% lower than those determined by the chamber-based technique over diel or annual cycles. Overall, more measurement campaigns are needed to assess regional agricultural N₂O budgets with low uncertainties.
Pineda, Angel R; Barrett, Harrison H
2004-02-01
The current paradigm for evaluating detectors in digital radiography relies on Fourier methods, which assume a shift-invariant and statistically stationary description of the imaging system. The theoretical justification for the use of Fourier methods is based on a uniform background fluence and an infinite detector. In practice, the background fluence is not uniform and the detector size is finite. We study the effect of stochastic blurring and structured backgrounds on the correlation between Fourier-based figures of merit and Hotelling detectability. A stochastic model of the blurring leads to behavior similar to what is observed by adding electronic noise to the deterministic blurring model. Background structure does away with the shift invariance. Anatomical variation makes the covariance matrix of the data less amenable to Fourier methods by introducing long-range correlations. It is desirable to have figures of merit that can account for all the sources of variation, some of which are not stationary. For such cases, we show that the commonly used figures of merit based on the discrete Fourier transform can provide an inaccurate estimate of Hotelling detectability.
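For reference, the Hotelling detectability against which the Fourier figures of merit are compared is the standard observer SNR (our notation):

```latex
% Hotelling observer SNR for a known signal \Delta\bar{s} in noise with covariance K:
\mathrm{SNR}_{\mathrm{Hot}}^2 \;=\; \Delta\bar{s}^{\,T}\, K^{-1}\, \Delta\bar{s} .
```

Fourier-based figures of merit effectively assume that K is diagonalized by the DFT (stationarity); the long-range correlations introduced by anatomical variation break that assumption.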
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
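The flavor of a CIR estimator can be seen in the classic two-subclass special case (a textbook form, not the paper's generalized model):

```python
# Classic two-subclass change-in-ratio estimator: subclass proportions are
# estimated before (p1) and after (p2) a known removal of r_x animals of
# subclass x out of r_total removals.
def cir_estimate(p1, p2, r_x, r_total):
    if abs(p1 - p2) < 1e-12:
        raise ValueError("subclass proportions must change between surveys")
    return (r_x - p2 * r_total) / (p1 - p2)   # pre-removal population size

# Example: the male proportion drops from 0.40 to 0.25 after removing
# 300 males out of 400 animals total.
print(cir_estimate(0.40, 0.25, 300, 400))     # -> 1333.3...
```

The paper's generalization replaces the implicit equal-encounter-probability assumption behind this formula with explicit models of how encounter probabilities vary among subclasses and over time, using effort data.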
Effect of costing methods on unit cost of hospital medical services.
Riewpaiboon, Arthorn; Malaroje, Saranya; Kongsawatt, Sukalaya
2007-04-01
To explore the variation in unit costs of hospital medical services due to the different costing methods employed in the analysis. Retrospective and descriptive study at Kaengkhoi District Hospital, Saraburi Province, Thailand, in the fiscal year 2002. The process started with a calculation of unit costs of medical services as a base case. After that, the unit costs were re-calculated with various methods, and the variations of the results obtained from these methods relative to the base case were computed and compared. The total annualized capital cost of buildings and capital items calculated by the accounting-based approach (averaging the capital purchase prices throughout their useful life) was 13.02% lower than that calculated by the economic-based approach (a combination of depreciation cost and interest on the undepreciated portion over the useful life). A change of discount rate from 3% to 6% resulted in a 4.76% increase in the hospital's total annualized capital cost. When the useful life of durable goods was changed from 5 to 10 years, the total annualized capital cost of the hospital decreased by 17.28% from that of the base case. Regarding alternative criteria for indirect cost allocation, unit costs of medical services changed by a range of -6.99% to +4.05%. We also explored the effect on the unit cost of medical services in one department: across the various costing methods, including departmental allocation methods, unit costs ranged between -85% and +32% relative to the base case. Based on the variation analysis, the economic-based approach was suitable for capital cost calculation. For the useful life of capital items, an appropriate duration should be studied and standardized. Regarding allocation criteria, single-output criteria might be more efficient than combined-output and more complicated ones. For the departmental allocation methods, the micro-costing method was the most suitable method at the time of study. These costing methods should be standardized and developed as guidelines, since they can affect implementation of the national health insurance scheme and health financing management.
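The two capital-costing approaches contrasted above can be sketched as follows (illustrative numbers; "economic-based" is rendered here as the standard equivalent annual cost, matching the description of depreciation plus interest on the undepreciated portion):

```python
# Accounting-based: average the purchase price over the useful life.
def accounting_annual_cost(price, useful_life_years):
    return price / useful_life_years

# Economic-based: equivalent annual cost (annuity) at a given discount rate.
def economic_annual_cost(price, useful_life_years, discount_rate):
    r, n = discount_rate, useful_life_years
    return price * r / (1.0 - (1.0 + r) ** -n)

price, life = 1_000_000, 5
print(accounting_annual_cost(price, life))        # 200000.0
print(economic_annual_cost(price, life, 0.03))    # ~218355
print(economic_annual_cost(price, life, 0.06))    # ~237396 (discount-rate sensitivity)
```

The gap between the two outputs, and its growth with the discount rate, is exactly the kind of variation the study quantifies.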
Stadler, David; Sulyok, Michael; Schuhmacher, Rainer; Berthiller, Franz; Krska, Rudolf
2018-05-01
Multi-mycotoxin determination by LC-MS is commonly based on external solvent-based or matrix-matched calibration and, if necessary, correction for the method bias. In everyday practice, the method bias (expressed as apparent recovery R_A), which may be caused by losses during the recovery process and/or signal suppression/enhancement, is evaluated by replicate analysis of a single spiked lot of a matrix. However, R_A may vary for different lots of the same matrix, i.e., lot-to-lot variation, which can result in a higher relative expanded measurement uncertainty (U_r). We applied a straightforward procedure for the calculation of U_r from the within-laboratory reproducibility, also called intermediate precision, and the uncertainty of R_A (u_r,RA). To estimate the contribution of the lot-to-lot variation to U_r, the measurement results of one replicate of seven different lots of figs and maize and of seven replicates of a single lot of these matrices, respectively, were used to calculate U_r. The lot-to-lot variation contributed to u_r,RA, and thus to U_r, for the majority of the 66 evaluated analytes in both figs and maize. The major contributions of the lot-to-lot variation to u_r,RA were differences in analyte recovery in figs and relative matrix effects in maize. U_r estimated from long-term participation in proficiency test schemes was 58%. Provided proper validation, a fit-for-purpose U_r of 50% was proposed for measurement results obtained by an LC-MS-based multi-mycotoxin assay, independent of the concentration of the analytes.
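The combination procedure described is commonly written as follows (our notation, with u_r,IP the relative within-laboratory reproducibility and k the coverage factor):

```latex
U_r \;=\; k\,\sqrt{u_{r,\mathrm{IP}}^{2} + u_{r,\mathrm{RA}}^{2}}\,,\qquad k = 2 ,
```

so any lot-to-lot variation that inflates u_r,RA propagates directly into the expanded uncertainty.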
NASA Astrophysics Data System (ADS)
Zhao, Xia; Wang, Guang-xin
2008-12-01
Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, and speckle is its inherent defect, which badly affects the interpretation and recognition of SAR targets. Conventional speckle-removal methods usually operate on the real-valued SAR image and smooth the edges of the image while suppressing the speckle; moreover, they lose the image phase information. Removing the speckle while simultaneously enhancing the targets and edges remains a challenge. To suppress the speckle and enhance the targets and the edges simultaneously, a half-quadratic variational regularization method for complex SAR images is presented, which is based on prior knowledge of the targets and the edges. Because the cost function is non-quadratic, non-convex, and complex, a half-quadratic variational regularization is used to construct a new cost function, which is solved by alternating optimization. In the proposed scheme, the construction of the model, the solution of the model, and the selection of the model parameters are studied carefully. Finally, we validate the method using real SAR data. Theoretical analysis and the experimental results illustrate the feasibility of the proposed method. Furthermore, the proposed method preserves the image phase information.
Effects of temperature variations on guided waves propagating in composite structures
NASA Astrophysics Data System (ADS)
Shoja, Siavash; Berbyuk, Viktor; Boström, Anders
2016-04-01
The effect of temperature on guided waves propagating in composite materials is a well-known problem which has been investigated in many studies. The majority of these studies are focused on the effects of high temperature. Understanding the effects of low temperature is of major importance for composite structures and components operating in cold climate conditions, such as wind turbines in cold climate regions. In this study, the effects of temperature variations on guided waves propagating in a composite plate are first investigated experimentally in a cold climate chamber. The material is a common material used to manufacture rotor blades of wind turbines. The temperature range is 25°C to -25°C, and the effects of temperature variations on the amplitude and phase shift of the received signal are investigated. To compensate for the effects of lowering the temperature on the received signal, the Baseline Signal Stretch (BSS) method is modified and used. The modification is based on decomposing the signal into symmetric and antisymmetric modes and applying a different stretch factor to each of them. Finally, the results obtained with the new method are compared with the results of applying BSS with a single stretch factor and with experimental measurements. The comparisons show that an improvement is obtained using BSS with the mode decomposition method for temperature variations of more than 25°C.
NASA Technical Reports Server (NTRS)
Barranger, John P.
1990-01-01
A novel optical method of measuring 2-D surface strain is proposed. Two linear strains along orthogonal axes and the shear strain between those axes are determined by a variation of Yamaguchi's laser-speckle strain gage technique. It offers the advantages of shorter data acquisition times, less stringent alignment requirements, and reduced decorrelation effects when compared to a previously implemented optical strain rosette technique. The method automatically cancels the translational and rotational components of rigid body motion while simplifying the optical system and improving the speed of response.
Investigation of priorities in water quality management based on correlations and variations.
Boyacıoğlu, Hülya; Gündogdu, Vildan; Boyacıoğlu, Hayal
2013-04-15
The development of water quality assessment strategies investigating spatial and temporal changes caused by natural and anthropogenic phenomena is an important tool in management practices. This paper used cluster analysis, the water quality index method, sensitivity analysis, and canonical correlation analysis to investigate priorities in pollution control activities. Data sets representing 22 surface water quality parameters were subject to analysis. Results revealed that organic pollution was a serious threat to overall water quality in the region. In addition, oil and grease, lead, and mercury were the critical variables violating the standard. In contrast to inorganic variables, organic and physical-inorganic chemical parameters were influenced by variations in physical conditions (discharge, temperature). This study showed that information produced from the variations and correlations in water quality data sets can be helpful for investigating priorities in water management activities. Moreover, statistical techniques and index methods are useful tools in the data-to-information transformation process.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1990-01-01
The level of skill in predicting the size of the sunspot cycle is investigated for the two types of precursor techniques, single variate and bivariate fits, both applied to cycle 22. The present level of growth in solar activity is compared to the mean level of growth (cycles 10-21) and to the predictions based on the precursor techniques. It is shown that, for cycle 22, both single variate methods (based on geomagnetic data) and bivariate methods suggest a maximum amplitude smaller than that observed for cycle 19, and possibly for cycle 21. Compared to the mean cycle, cycle 22 is presently behaving as if it were a +2.6 sigma cycle (maximum amplitude of about 225), which means that either it will be the first cycle not to be reliably predicted by the combined precursor techniques or its deviation relative to the mean cycle will substantially decrease over the next 18 months.
HormoneBase, a population-level database of steroid hormone levels across vertebrates
Vitousek, Maren N.; Johnson, Michele A.; Donald, Jeremy W.; Francis, Clinton D.; Fuxjager, Matthew J.; Goymann, Wolfgang; Hau, Michaela; Husak, Jerry F.; Kircher, Bonnie K.; Knapp, Rosemary; Martin, Lynn B.; Miller, Eliot T.; Schoenle, Laura A.; Uehling, Jennifer J.; Williams, Tony D.
2018-01-01
Hormones are central regulators of organismal function and flexibility that mediate a diversity of phenotypic traits from early development through senescence. Yet despite these important roles, basic questions about how and why hormone systems vary within and across species remain unanswered. Here we describe HormoneBase, a database of circulating steroid hormone levels and their variation across vertebrates. This database aims to provide all available data on the mean, variation, and range of plasma glucocorticoids (both baseline and stress-induced) and androgens in free-living and un-manipulated adult vertebrates. HormoneBase (www.HormoneBase.org) currently includes >6,580 entries from 476 species, reported in 648 publications from 1967 to 2015, and unpublished datasets. Entries are associated with data on the species and population, sex, year and month of study, geographic coordinates, life history stage, method and latency of hormone sampling, and analysis technique. This novel resource could be used for analyses of the function and evolution of hormone systems, and the relationships between hormonal variation and a variety of processes including phenotypic variation, fitness, and species distributions. PMID:29786693
Variational Approach to Enhanced Sampling and Free Energy Calculations
NASA Astrophysics Data System (ADS)
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, including the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
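For reference, the published functional of the bias potential V(s) over the collective variables s takes the form [Phys. Rev. Lett. 113, 090601 (2014)]:

```latex
% F(s) is the free energy surface and p(s) a chosen target distribution:
\Omega[V] \;=\; \frac{1}{\beta}\,
  \ln \frac{\int \mathrm{d}\mathbf{s}\; e^{-\beta\,[\,F(\mathbf{s}) + V(\mathbf{s})\,]}}
           {\int \mathrm{d}\mathbf{s}\; e^{-\beta F(\mathbf{s})}}
 \;+\; \int \mathrm{d}\mathbf{s}\; p(\mathbf{s})\, V(\mathbf{s}),
```

At the minimum, V(s) = −F(s) − (1/β) ln p(s) up to a constant, so the free energy surface can be read off directly from the optimal bias.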
Xie, Shan Juan; Lu, Yu; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2015-01-01
Finger vein recognition has been considered one of the most promising biometrics for personal authentication. However, the capacities and percentages of finger tissues (e.g., bone, muscle, ligament, water, fat, etc.) vary from person to person. This usually causes poor quality of finger vein images, thereby degrading the performance of finger vein recognition systems (FVRSs). In this paper, the intrinsic factors of finger tissue causing poor quality of finger vein images are analyzed, and an intensity variation (IV) normalization method using guided filter based single scale retinex (GFSSR) is proposed for finger vein image enhancement. The experimental results on two public datasets demonstrate the effectiveness of the proposed method in enhancing the image quality and finger vein recognition accuracy. PMID:26184226
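A compact sketch of the guided-filter single-scale-retinex idea (ours; the radius and eps values are illustrative, and cv2.ximgproc ships with the opencv-contrib-python package):

```python
import cv2
import numpy as np

# Single-scale retinex with a guided filter as the edge-preserving surround estimate.
def guided_ssr(gray_uint8, radius=15, eps=1e-2):
    img = gray_uint8.astype(np.float32) / 255.0 + 1e-6
    # Estimate the slowly varying illumination while preserving vein edges
    surround = cv2.ximgproc.guidedFilter(guide=img, src=img, radius=radius, eps=eps)
    retinex = np.log(img) - np.log(surround + 1e-6)   # remove illumination component
    out = (retinex - retinex.min()) / (np.ptp(retinex) + 1e-12)
    return (out * 255.0).astype(np.uint8)
```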
Seidu, Issaka; Zhekova, Hristina R; Seth, Michael; Ziegler, Tom
2012-03-08
The performance of the second-order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) for the calculation of the exchange coupling constant (J) is assessed by application to a series of triply bridged Cu(II) dinuclear complexes. A comparison of the J values based on SF-CV(2)-DFT with those obtained by the broken symmetry (BS) DFT method and by experiment is provided. It is demonstrated that our methodology constitutes a viable alternative to the BS-DFT method. The strong dependence of the calculated exchange coupling constants on the applied functionals is demonstrated. Both SF-CV(2)-DFT and BS-DFT afford the best agreement with experiment for hybrid functionals.
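For context, broken-symmetry J values are commonly obtained from high-spin (HS) and broken-symmetry (BS) energies via the Yamaguchi expression (one standard BS-DFT prescription; whether this exact variant was used here is not stated in the abstract):

```latex
% With the Heisenberg Hamiltonian \hat{H} = -2J\,\hat{S}_1\!\cdot\!\hat{S}_2:
J \;=\; \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}
             {\langle \hat{S}^2\rangle_{\mathrm{HS}} - \langle \hat{S}^2\rangle_{\mathrm{BS}}} .
```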
Temperature compensated and self-calibrated current sensor using reference current
Yakymyshyn, Christopher Paul [Seminole, FL; Brubaker, Michael Allen [Loveland, CO; Yakymyshyn, Pamela Jane [Seminole, FL
2008-01-22
A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference electrical current carried by a conductor positioned within the sensing window of the current sensor is used to correct variations in the output signal due to temperature variations and aging.
NASA Astrophysics Data System (ADS)
Rödenbeck, Christian; Bakker, Dorothee; Gruber, Nicolas; Iida, Yosuke; Jacobson, Andy; Jones, Steve; Landschützer, Peter; Metzl, Nicolas; Nakaoka, Shin-ichiro; Olsen, Are; Park, Geun-Ha; Peylin, Philippe; Rodgers, Keith; Sasse, Tristan; Schuster, Ute; Shutler, James; Valsala, Vinu; Wanninkhof, Rik; Zeng, Jiye
2016-04-01
Using measurements of the surface-ocean CO₂ partial pressure (pCO₂) from the SOCAT and LDEO databases and 14 different pCO₂ mapping methods recently collated by the Surface Ocean pCO₂ Mapping intercomparison (SOCOM) initiative, variations in regional and global sea-air CO₂ fluxes are investigated. Though the available mapping methods use widely different approaches, we find relatively consistent estimates of regional pCO₂ seasonality, in line with previous estimates. In terms of interannual variability (IAV), all mapping methods estimate the largest variations to occur in the Eastern equatorial Pacific. Despite considerable spread in the detailed variations, mapping methods that fit the data more closely also tend to agree more closely with each other in regional averages. Encouragingly, this includes mapping methods belonging to complementary types - taking variability either directly from the pCO₂ data or indirectly from driver data via regression. From a weighted ensemble average, we find an IAV amplitude of the global sea-air CO₂ flux of IAVampl (standard deviation over the analysis period), which is larger than simulated by biogeochemical process models. On a decadal perspective, the global ocean CO₂ uptake is estimated to have gradually increased since about 2000, with little decadal change prior to that. The weighted mean net global ocean CO₂ sink estimated by the SOCOM ensemble is -1.75 PgC yr⁻¹ over the analysis period, consistent within uncertainties with estimates from ocean-interior carbon data or atmospheric oxygen trends. Using data-based sea-air CO₂ fluxes in atmospheric CO₂ inversions also helps to better constrain land-atmosphere CO₂ fluxes.
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. First, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Second, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Third, we use relative calibration and information fusion among star sensors to unify the datum and obtain high precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model in this paper describes the law of the low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is clearly improved after the step-wise calibration.
NASA Astrophysics Data System (ADS)
Burger, Martin; Dirks, Hendrik; Frerking, Lena; Hauptmann, Andreas; Helin, Tapio; Siltanen, Samuli
2017-12-01
In this paper we study the reconstruction of moving object densities from undersampled dynamic x-ray tomography in two dimensions. A particular motivation of this study is to use realistic measurement protocols for practical applications, i.e. we do not assume to have a full Radon transform in each time step, but only projections in a few angular directions. This restriction enforces a space-time reconstruction, which we perform by incorporating physical motion models and regularization of motion vectors in a variational framework. The methodology of optical flow, which is one of the most common methods to estimate motion between two images, is utilized to formulate a joint variational model for reconstruction and motion estimation. We provide a basic mathematical analysis of the forward model and the variational model for the image reconstruction. Moreover, we discuss efficient numerical minimization based on alternating minimization between images and motion vectors. A variety of results are presented for simulated and real measurement data with different sampling strategies. A key observation is that random sampling combined with our model allows reconstructions from a similar number of measurements, and of similar quality, as a single static reconstruction.
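A schematic of such a joint model (our paraphrase in generic notation, not the paper's exact functional) couples a per-time-step data term with total-variation regularization and an optical-flow constraint:

```latex
% u = image sequence, v = motion field:
\min_{u,\,v}\;\sum_{t} \tfrac{1}{2}\big\lVert \mathcal{R}_{\theta(t)} u_t - f_t \big\rVert_2^2
 \;+\; \alpha\,\mathrm{TV}(u)
 \;+\; \beta \sum_{t} \big\lVert \partial_t u + \nabla u_t \cdot v_t \big\rVert_1
 \;+\; \gamma\,\mathrm{TV}(v),
```

where R_{θ(t)} denotes the Radon transform restricted to the few angles measured at time step t, and the β-term penalizes deviations from the optical-flow (brightness constancy) constraint; alternating minimization updates u with v fixed, then v with u fixed.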
NASA Astrophysics Data System (ADS)
Khan, Sabeel M.; Sunny, D. A.; Aqeel, M.
2017-09-01
Nonlinear dynamical systems and their solutions are very sensitive to initial conditions and therefore need to be approximated carefully. In this article, we present and analyze the nonlinear solution characteristics of the periodically forced Chen system with the application of a variational method based on the concept of finite time-elements. Our approach is based on the discretization of physical time space into finite elements, where each time-element is mapped to a natural time space. The solution of the system is then determined in natural time space using a set of suitable basis functions. The numerical algorithm is presented and implemented to compute and analyze nonlinear behavior at different time-step sizes. The obtained results show excellent agreement with the classical RK-4 and RK-5 methods. The accuracy and convergence of the method are shown by comparing numerically computed results with the exact solution for a test problem. The presented method shows great potential for dealing with the solutions of nonlinear dynamical systems and can thus be utilized in delineating different features and characteristics of their solutions.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, using a median filter to estimate background illumination, showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques - the dividing method using a median filter to estimate background, the quotient-based method, and homomorphic filtering - were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
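A minimal sketch of the dividing method as described (the kernel size is an illustrative placeholder; input is an 8-bit single-channel image, e.g. the red or green component):

```python
import cv2
import numpy as np

# Estimate the background illumination with a large median filter and divide it out.
def divide_correction(channel_uint8, ksize=51):
    img = channel_uint8.astype(np.float32) + 1.0
    background = cv2.medianBlur(channel_uint8, ksize).astype(np.float32) + 1.0
    corrected = img / background                 # flat-field style correction
    out = cv2.normalize(corrected, None, 0, 255, cv2.NORM_MINMAX)
    return out.astype(np.uint8)

# Typical use on a fundus image component:
# green = cv2.imread("fundus.png")[:, :, 1]; flat = divide_correction(green)
```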
The Contribution of Qualitative Methods for Identifying the Educational Needs of Adults
ERIC Educational Resources Information Center
Boz, Hayat; Dagli, Yakup
2017-01-01
This study addresses the contribution of applying qualitative research methods for identifying the educational activities planned for adults. The paper is based on the experience gained during in-depth interviews with 39 elderly and 33 middle-aged participants, by purposive sampling method and maximum variation technique, within a needs analysis…
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Wiscombe, W. J.
1994-01-01
A method for detecting cirrus clouds in terms of brightness temperature differences between narrowbands at 8, 11, and 12 microns has been proposed by Ackerman et al. In this method, the variation of emissivity with wavelength for different surface targets was not taken into consideration. Based on state-of-the-art laboratory measurements of reflectance spectra of terrestrial materials by Salisbury and D'Aria, it is found that the brightness temperature differences between the 8- and 11-microns bands for soils, rocks, and minerals, and dry vegetation can vary between approximately -8 and +8 K due solely to surface emissivity variations. The large brightness temperature differences are sufficient to cause false detection of cirrus clouds from remote sensing data acquired over certain surface targets using the 8-11-12-microns method directly. It is suggested that the 8-11-12-microns method should be improved to include the surface emissivity effects. In addition, it is recommended that in the future the variation of surface emissivity with wavelength should be taken into account in algorithms for retrieving surface temperatures and low-level atmospheric temperature and water vapor profiles.
Unconventional Hamilton-type variational principle in phase space and symplectic algorithm
NASA Astrophysics Data System (ADS)
Luo, En; Huang, Weijiang; Zhang, Hexin
2003-06-01
By a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of multi-degree-of-freedom systems is established in this paper. It not only fully characterizes the initial-value problem of these dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm, called the symplectic time-subdomain method, is proposed. A non-difference scheme is constructed by applying Lagrange interpolation polynomials to the time subdomain. Furthermore, it is proved that the presented symplectic algorithm is unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, this new algorithm is a highly efficient one with better computational performance.
A Variational Method in Out-of-Equilibrium Physical Systems
Pinheiro, Mario J.
2013-01-01
We propose a new variational principle for out-of-equilibrium dynamic systems that is fundamentally based on the method of Lagrange multipliers applied to the total entropy of an ensemble of particles. However, we use the fundamental equation of thermodynamics in differential-form language, considering U and S as 0-forms. We obtain a set of two first-order differential equations that reveal the same formal symplectic structure shared by classical mechanics, fluid mechanics, and thermodynamics. From this approach, a topological torsion current emerges, built from the components A_j of the vector potential (gravitational and/or electromagnetic) and the components ω_k of the angular velocity ω of the accelerated frame. We derive a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems. The variational method is then applied to clarify the working mechanism of particular devices. PMID:24316718
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
Kim, Dongchul; Kang, Mingon; Biswas, Ashis; Liu, Chunyu; Gao, Jean
2016-08-10
Inferring gene regulatory networks is one of the most interesting research areas in systems biology. Many inference methods have been developed using a variety of computational models and approaches. However, there are two issues to solve. First, results tend to be inconsistent across methods because each structural or computational model has innately different advantages and limitations; the combination of dissimilar approaches is therefore demanded as an alternative way to overcome the limitations of standalone methods through complementary integration. Second, sparse linear regression penalized by a regularization parameter (lasso) and bootstrapping-based sparse linear regression methods have been suggested in state-of-the-art network inference, but they are not effective for small-sample data, and a true regulator can be missed if the target gene is strongly affected by an indirect regulator with high correlation or by another true regulator. We present two novel network inference methods based on the integration of three different criteria: (i) z-score, to measure the variation of gene expression from knockout data; (ii) mutual information, for the dependency between two genes; and (iii) linear regression-based feature selection. Based on these criteria, we propose a lasso-based random feature selection algorithm (LARF) to achieve better performance, overcoming the limitations of bootstrapping mentioned above. In this work, there are three main contributions. First, our z-score-based method to measure gene expression variations from knockout data is more effective than similar criteria in related works. Second, we confirm that true regulator selection can be effectively improved by LARF. Lastly, we verify that an integrative approach can clearly outperform a single method when two different methods are effectively joined. In the experiments, our methods were validated by outperforming the state-of-the-art methods on DREAM challenge data, and LARF was then applied to the inference of gene regulatory networks associated with psychiatric disorders.
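A simplified sketch of the lasso-with-random-feature-subsets idea (our reading of LARF, with illustrative hyperparameters, not the authors' exact algorithm):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Repeatedly fit a lasso on random subsets of candidate regulators and score
# each regulator by how often it is selected when it is available.
def random_lasso_scores(X, y, n_rounds=200, subset_frac=0.5, alpha=0.05, seed=0):
    """X: samples x candidate regulators, y: target-gene expression profile."""
    rng = np.random.default_rng(seed)
    n_genes = X.shape[1]
    hits, draws = np.zeros(n_genes), np.zeros(n_genes)
    for _ in range(n_rounds):
        subset = rng.choice(n_genes, size=int(subset_frac * n_genes), replace=False)
        coef = Lasso(alpha=alpha, max_iter=5000).fit(X[:, subset], y).coef_
        draws[subset] += 1
        hits[subset[np.abs(coef) > 1e-8]] += 1
    return hits / np.maximum(draws, 1)           # per-regulator selection frequency
```

Randomizing the feature subsets lets a true regulator be selected in rounds where a dominating correlated regulator is absent, which is the failure mode of plain lasso noted above.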
Rodríguez, Roberto A; Love, David C; Stewart, Jill R; Tajuba, Julianne; Knee, Jacqueline; Dickerson, Jerold W; Webster, Laura F; Sobsey, Mark D
2012-04-01
Methods for detection of two fecal indicator viruses, F+ and somatic coliphages, were evaluated for application to recreational marine water. Marine water samples were collected during the summer of 2007 in Southern California, United States, from transects along Avalon Beach (n=186 samples) and Doheny Beach (n=101 samples). Coliphage detection methods included EPA method 1601 - two-step enrichment (ENR), EPA method 1602 - single agar layer (SAL), and variations of ENR. Variations included comparison of two incubation times (overnight and 5-h incubation) and two final detection steps (lysis zone assay and a rapid latex agglutination assay). A greater number of samples were positive for somatic and F+ coliphages by ENR than by SAL (p<0.01). The standard ENR with overnight incubation and detection by lysis zone assay was the most sensitive method for the detection of F+ and somatic coliphages from marine water, although the method takes up to three days to obtain results. A rapid 5-h enrichment version of ENR also performed well, with more positive samples than SAL, and could be performed in roughly 24 h. Latex agglutination-based detection methods require the least amount of time to perform, although their sensitivity was lower than that of lysis zone-based detection methods. Rapid culture-based enrichment of coliphages in marine water may be possible by further optimizing culture-based methods for saline water conditions to generate higher viral titers than currently achievable, as well as by increasing the sensitivity of latex agglutination detection methods.
Wang, Xun; Sun, Beibei; Liu, Boyang; Fu, Yaping; Zheng, Pan
2017-01-01
Experimental design focuses on describing or explaining the multifactorial interactions that are hypothesized to reflect the variation. The design introduces conditions that may directly affect the variation, where particular conditions are purposely selected for observation. Combinatorial design theory deals with the existence, construction, and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. In this work, borrowing the concept of "balance" from combinatorial design theory, a novel method for designing multifactorial bio-chemical experiments is proposed, where balanced templates from combinatorial design are used to select the conditions for observation. Balanced experimental data that cover all the influencing factors of the experiments can be obtained for further processing, for example as training sets for machine learning models. Finally, software based on the proposed method is developed for designing experiments that cover the influencing factors a specified number of times.
NASA Technical Reports Server (NTRS)
Farmer, F. H.; Jarrett, O., Jr.; Brown, C. A., Jr.
1983-01-01
The concentration and composition of phytoplankton populations are measured by an optical method which can be used either in situ or remotely. This method is based upon the in vivo light absorption characteristics of phytoplankton. To provide a data base for testing assumptions relative to the proposed method, visible absorbance spectra of pure cultures of 20 marine phytoplankton were obtained under laboratory conditions. Descriptive and analytical statistics were computed for the absorbance spectra and were used to make comparisons between members of major taxonomic groups and between groups. Spectral variation between the members of the major taxonomic groups was observed to be considerably less than the spectral variation between these groups. In several cases the differences between the mean absorbance spectra of major taxonomic groups are significant enough to be detected with passive remote sensing techniques.
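The within-group versus between-group comparison can be illustrated with a short numerical sketch on simulated spectra; all numbers below are placeholders, not the measured data:

```python
# Sketch: mean absorbance spectrum per taxonomic group, then within-group
# variance (scatter around each group mean) versus between-group variance
# (scatter of the group means themselves). Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_species, n_wavelengths = 4, 5, 200
group_means = rng.normal(1.0, 0.3, (n_groups, 1, n_wavelengths))
spectra = group_means + rng.normal(0.0, 0.05, (n_groups, n_species, n_wavelengths))

within = spectra.var(axis=1).mean()                 # around each group mean
between = spectra.mean(axis=1).var(axis=0).mean()   # spread of group means
print(f"within-group {within:.4f} < between-group {between:.4f}")
```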
On prediction of crack in different orientations in pipe using frequency based approach
NASA Astrophysics Data System (ADS)
Naniwadekar, M. R.; Naik, S. S.; Maiti, S. K.
2008-04-01
A technique based on measurement of change in natural frequencies and modeling of crack by rotational spring is employed to detect a crack with straight front in different orientations in a section of straight horizontal steel hollow pipe (outer diameter 0.0378 m and inner diameter 0.0278 m). Crack orientations in the range 0-60° with the vertical have been examined and sizes/depths in the range 1-4 mm through the wall of thickness 5 mm have been studied. Variation of rotational spring stiffness with crack size and orientation has been obtained experimentally by deflection and vibration methods. The spring stiffness reduces as expected, with an increase in crack size; it increases with an increase in the crack orientation angle. The crack location has been predicted with a maximum error of 7.29%. The sensitivity of the method for prediction of crack location on variations in experimental data has been examined by changing the difference between the frequencies of pipes with and without crack by ±10%. The method is found to be very robust; the maximum variation in location is 2.68%, which is much less than the change in frequency difference introduced.
Entropy based quantification of Ki-67 positive cell images and its evaluation by a reader study
NASA Astrophysics Data System (ADS)
Niazi, M. Khalid Khan; Pennell, Michael; Elkins, Camille; Hemminger, Jessica; Jin, Ming; Kirby, Sean; Kurt, Habibe; Miller, Barrie; Plocharczyk, Elizabeth; Roth, Rachel; Ziegler, Rebecca; Shana'ah, Arwa; Racke, Fred; Lozanski, Gerard; Gurcan, Metin N.
2013-03-01
Presence of Ki-67, a nuclear protein, is typically used to measure cell proliferation. The quantification of the Ki-67 proliferation index is performed visually by the pathologist; however, this is subject to inter- and intra-reader variability. Automated techniques utilizing digital image analysis by computers have emerged. The large variations in specimen preparation, staining, and imaging, as well as true biological heterogeneity of tumor tissue, often result in variable intensities in Ki-67 stained images. These variations affect the performance of currently developed methods. To optimize the segmentation of Ki-67 stained cells, one should define a data-dependent transformation that accounts for these color variations instead of a fixed linear transformation to separate different hues. To address these issues in images of tissue stained with Ki-67, we propose a methodology that exploits the intrinsic properties of CIE L∗a∗b∗ color space to translate this complex problem into an automatic entropy based thresholding problem. The developed method was evaluated through two reader studies with pathology residents and expert hematopathologists. Agreement between the proposed method and the expert pathologists was good (CCC = 0.80).
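A minimal sketch of the color-space idea, assuming scikit-image is available: the a* channel of CIE L*a*b* is thresholded with Yen's criterion, an entropy-related data-dependent threshold. This illustrates the general approach, not the authors' exact pipeline:

```python
# Data-dependent thresholding of a Ki-67 image in CIE L*a*b* space.
import numpy as np
from skimage import color, filters

def segment_positive_nuclei(rgb_image):
    lab = color.rgb2lab(rgb_image)        # L* in [0,100]; a*, b* roughly [-128,127]
    a_channel = lab[..., 1]               # red-green axis separates brown DAB
                                          # stain from blue hematoxylin
    t = filters.threshold_yen(a_channel)  # entropy-related automatic threshold
    return a_channel > t                  # mask of Ki-67 positive pixels
```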
Problem-Based Learning: Instructor Characteristics, Competencies, and Professional Development
2011-01-01
cognitive learning objectives addressed by student-centered instruction. For instance, experiential learning, a variation of which is used at the...based learning in grade school science or mathematics. However, the measures could be modified to focus on adult PBL (or student-centered learning... student-centered learning methods, the findings should generalize across instructional methods of interest to the Army. Further research is required
Design of Intelligent Hydraulic Excavator Control System Based on PID Method
NASA Astrophysics Data System (ADS)
Zhang, Jun; Jiao, Shengjie; Liao, Xiaoming; Yin, Penglong; Wang, Yulin; Si, Kuimao; Zhang, Yi; Gu, Hairong
Most domestically designed hydraulic excavators adopt the constant-power design method and set 85%~90% of engine power as the power absorbed by the hydraulic system, which causes high energy loss due to the mismatch of power between the engine and the pump. Since variation in the engine's rotational speed reflects shifts in load power, engine speed provides a new way to adjust the power matching between engine and pump. Based on a negative-flux hydraulic system, an intelligent hydraulic excavator control system was designed using the rotational speed sensing method to improve energy efficiency. The control system consisted of an engine control module, a pump power adjustment module, an engine idle module, and a system fault diagnosis module. A dedicated PLC with a CAN bus was used to acquire the sensor signals and adjust the pump absorption power according to load variation. Four energy-saving control strategies combined with the constant-power method were employed to improve fuel utilization. Three power modes (H, S, and L) were designed to meet different working conditions; an auto-idle function was employed to save energy using two pressure switches that detect the working status, with 1300 rpm set as the idle speed according to the engine fuel consumption curve. A transient overload function was designed for deep digging within a short time without extra fuel consumption. An incremental PID method was employed to realize power matching between engine and pump: the variation of rotational speed was taken as the PID algorithm's input, and the current of the proportional valve of the variable displacement pump was its output. The results indicated that auto idle could decrease fuel consumption by 33.33% compared to working at the maximum speed of H mode, and that the PID control method made full use of the maximum engine power in each power mode while keeping the engine speed in a stable range. Application of the rotational speed sensing method provides a reliable way to improve the excavator's energy efficiency and realize power matching between pump and engine.
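The incremental (velocity-form) PID loop described above can be sketched as follows; gains, current limits, and variable names are illustrative assumptions, not the published controller parameters:

```python
# Incremental PID: input is the engine-speed deviation, output is the pump
# proportional-valve current, updated by a bounded increment each cycle.
class IncrementalPID:
    def __init__(self, kp, ki, kd, i_min, i_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_min, self.i_max = i_min, i_max   # valve current limits (mA)
        self.e1 = self.e2 = 0.0                 # last two errors
        self.current = (i_min + i_max) / 2.0

    def update(self, speed_setpoint, speed_measured):
        e = speed_setpoint - speed_measured     # rpm deviation senses load power
        delta = (self.kp * (e - self.e1)        # proportional on error change
                 + self.ki * e                  # integral on current error
                 + self.kd * (e - 2 * self.e1 + self.e2))  # derivative term
        self.e2, self.e1 = self.e1, e
        self.current = min(self.i_max, max(self.i_min, self.current + delta))
        return self.current
```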
Zhu, X Q; Gasser, R B
1998-06-01
In this study, we assessed single-strand conformation polymorphism (SSCP)-based approaches for their capacity to fingerprint sequence variation in ribosomal DNA (rDNA) of ascaridoid nematodes of veterinary and/or human health significance. The second internal transcribed spacer region (ITS-2) of rDNA was utilised as the target region because it is known to provide species-specific markers for this group of parasites. ITS-2 was amplified by PCR from genomic DNA derived from individual parasites and subjected to analysis. Direct SSCP analysis of amplicons from seven taxa (Toxocara vitulorum, Toxocara cati, Toxocara canis, Toxascaris leonina, Baylisascaris procyonis, Ascaris suum and Parascaris equorum) showed that the single-strand (ss) ITS-2 patterns produced allowed their unequivocal identification to species. While no variation in SSCP patterns was detected in the ITS-2 within four species for which multiple samples were available, the method allowed the direct display of four distinct sequence types of ITS-2 among individual worms of T. cati. Comparison of SSCP/sequencing with the methods of dideoxy fingerprinting (ddF) and restriction endonuclease fingerprinting (REF) revealed that ddF also allowed the definition of the four sequence types, whereas REF displayed three of four. The findings indicate the usefulness of the SSCP-based approaches for the identification of ascaridoid nematodes to species, the direct display of sequence variation in rDNA and the detection of population variation. The ability to fingerprint microheterogeneity in ITS-2 rDNA using such approaches also has implications for studying fundamental aspects relating to mutational change in rDNA.
Spatial resolution enhancement of satellite image data using fusion approach
NASA Astrophysics Data System (ADS)
Lestiana, H.; Sukristiyanti
2018-02-01
Object identification using remote sensing data is problematic when the spatial resolution does not match the object. The fusion approach is one method to solve this problem, improving object recognition and enriching object information by combining data from multiple sensors. Image fusion can be used to estimate environmental components that need to be monitored from multiple views, such as evapotranspiration estimation, 3D ground-based characterisation, smart city applications, urban environments, terrestrial mapping, and water vegetation. With fusion methods, visible objects on land are easily recognized, and the added detail increases the variety of environmental components that can be estimated. The difficulty of recognizing invisible objects such as Submarine Groundwater Discharge (SGD), especially in tropical areas, might likewise be reduced by fusion. The small variation of such objects in sea surface temperature remains a challenge to be solved.
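The abstract does not specify an algorithm, but a simple Brovey-style pan-sharpen conveys the basic fusion idea of injecting high-resolution detail into upsampled multispectral bands; this is a generic sketch, not the authors' method:

```python
# Brovey-style fusion: upsample multispectral bands to the panchromatic
# grid, then rescale each band by the ratio of pan detail to mean intensity.
import numpy as np
from scipy.ndimage import zoom

def brovey_fusion(ms, pan):
    """ms: (3, h, w) low-resolution bands; pan: (H, W) high-resolution band."""
    scale = pan.shape[0] / ms.shape[1]
    ms_up = np.stack([zoom(band, scale, order=1) for band in ms])
    intensity = ms_up.mean(axis=0) + 1e-6   # avoid division by zero
    return ms_up * (pan / intensity)        # inject high-frequency pan detail
```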
Secular Variation and Physical Characteristics Determination of the HADS Star EH Lib
NASA Astrophysics Data System (ADS)
Pena, J. H.; Villarreal, C.; Pina, D. S.; Renteria, A.; Soni, A.; Guillen, J.; Calderon, J.
2017-12-01
Physical parameters of EH Lib have been determined from photometric observations carried out in 2015. These observations, along with samples from 1969 and 1986, also served to analyse the frequency content of EH Lib with Fourier transforms. Recent CCD observations added twelve new times of maximum light, which helped us study the secular variation of the period with a method based on minimizing the standard deviation of the O-C residuals. It is concluded that there may be a long-term period change.
Li, Lixin; Zhou, Xiaolu; Kalo, Marc; Piltner, Reinhard
2016-07-25
Appropriate spatiotemporal interpolation is critical to the assessment of relationships between environmental exposures and health outcomes. A powerful assessment of human exposure to environmental agents would incorporate spatial and temporal dimensions simultaneously. This paper compares shape function (SF)-based and inverse distance weighting (IDW)-based spatiotemporal interpolation methods on a data set of PM2.5 data in the contiguous U.S. Particle pollution, also known as particulate matter (PM), is composed of microscopic solids or liquid droplets that are so small that they can get deep into the lungs and cause serious health problems. PM2.5 refers to particles with a mean aerodynamic diameter less than or equal to 2.5 micrometers. Based on the error statistics results of k-fold cross validation, the SF-based method performed better overall than the IDW-based method. The interpolation results generated by the SF-based method are combined with population data to estimate the population exposure to PM2.5 in the contiguous U.S. We investigated the seasonal variations, identified areas where annual and daily PM2.5 were above the standards, and calculated the population size in these areas. Finally, a web application is developed to interpolate and visualize in real time the spatiotemporal variation of ambient air pollution across the contiguous U.S. using air pollution data from the U.S. Environmental Protection Agency (EPA)'s AirNow program.
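A minimal sketch of IDW-based spatiotemporal interpolation, treating time as a third, scaled dimension; the space-time scaling factor `c` is an assumption that must be tuned, not a value from the paper:

```python
# Inverse distance weighting over (x, y, t): observations nearer in space
# and time get larger weights; time is stretched into space-like units.
import numpy as np

def idw_st(obs_xyt, obs_val, query_xyt, power=2.0, c=1.0, eps=1e-12):
    """obs_xyt: (n, 3) columns x, y, t; obs_val: (n,); query_xyt: (m, 3)."""
    pts = obs_xyt * np.array([1.0, 1.0, c])   # scale the time axis
    qrs = query_xyt * np.array([1.0, 1.0, c])
    d = np.linalg.norm(qrs[:, None, :] - pts[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)              # inverse-distance weights
    return (w @ obs_val) / w.sum(axis=1)
```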
Reddy, Michael M.; Schuster, Paul; Kendall, Carol; Reddy, Micaela B.
2006-01-01
18O is an ideal tracer for characterizing hydrological processes because it can be reliably measured in several watershed hydrological compartments. Here, we present multiyear isotopic data, i.e. 18O variations (δ18O), for precipitation inputs, surface water and groundwater in the Shingobee River Headwaters Area (SRHA), a well-instrumented research catchment in north-central Minnesota. SRHA surface waters exhibit δ18O seasonal variations similar to those of groundwaters, and seasonal δ18O variations plotted versus time fit seasonal sine functions. These seasonal δ18O variations were interpreted to estimate surface water and groundwater mean residence times (MRTs) at sampling locations near topographically closed-basin lakes. MRT variations of about 1 to 16 years have been estimated over an area covering about 9 km2 from the basin boundary to the most downgradient well. Estimated MRT error (±0·3 to ±0·7 years) is small for short MRTs and is much larger (±10 years) for a well with an MRT (16 years) near the limit of the method. Groundwater transit time estimates based on Darcy's law, tritium content, and the seasonal δ18O amplitude approach appear to be consistent within the limits of each method. The results from this study suggest that use of the δ18O seasonal variation method to determine MRTs can help assess groundwater recharge areas in small headwaters catchments.
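Under the standard exponential-mixing assumption, the MRT follows from the damping of the seasonal sine amplitude between precipitation input and groundwater output; a short sketch of that calculation (variable names are placeholders):

```python
# Seasonal-amplitude approach to mean residence time (MRT): fit a sine of
# period 1 year to each d18O series, then
#   MRT = (1/omega) * sqrt((A_in / A_out)**2 - 1),  omega = 2*pi per year.
import numpy as np
from scipy.optimize import curve_fit

def seasonal_sine(t_years, amplitude, phase, mean):
    return amplitude * np.sin(2 * np.pi * t_years + phase) + mean

def fit_amplitude(t_years, d18o):
    (amp, _, _), _ = curve_fit(seasonal_sine, t_years, d18o,
                               p0=[1.0, 0.0, float(np.mean(d18o))])
    return abs(amp)

def mean_residence_time(a_precip, a_groundwater):
    omega = 2 * np.pi  # radians per year
    return np.sqrt((a_precip / a_groundwater) ** 2 - 1) / omega
```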
Empirical correction for earth sensor horizon radiance variation
NASA Technical Reports Server (NTRS)
Hashmall, Joseph A.; Sedlak, Joseph; Andrews, Daniel; Luquette, Richard
1998-01-01
A major limitation on the use of infrared horizon sensors for attitude determination is the variability of the height of the infrared Earth horizon. This variation includes a climatological component and a stochastic component of approximately equal importance. The climatological component shows regular variation with season and latitude. Models based on historical measurements have been used to compensate for these systematic changes. The stochastic component is analogous to tropospheric weather. It can cause extreme, localized changes that, for a period of days, overwhelm the climatological variation. An algorithm has been developed to compensate partially for the climatological variation of horizon height and at least to mitigate the stochastic variation. This method uses attitude and horizon sensor data from spacecraft to update a horizon height history as a function of latitude. For spacecraft that depend on horizon sensors for their attitudes (such as the Total Ozone Mapping Spectrometer-Earth Probe, TOMS-EP), a batch least squares attitude determination system is used. It is assumed that minimizing the average sensor residual throughout a full orbit of data results in attitudes that are nearly independent of local horizon height variations. The method depends on the additional assumption that the mean horizon height over all latitudes is approximately independent of season. Using these assumptions, the method yields the latitude-dependent portion of local horizon height variations. This paper describes the algorithm used to generate an empirical horizon height. Ideally, an international horizon height database could be established that would rapidly merge data from various spacecraft to provide timely corrections that could be used by all.
Colour based fire detection method with temporal intensity variation filtration
NASA Astrophysics Data System (ADS)
Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.
2015-02-01
The development of video and computing technologies and of computer vision makes automatic fire detection from video possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not enough to detect fire reliably, mainly because the scene may contain many objects whose colour is similar to that of fire. The temporal intensity variation of pixels is therefore used to separate such objects from fire; these variations are averaged over a series of several frames. The algorithm works robustly and was implemented as a computer program using the OpenCV library.
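A hedged reconstruction of the two-stage idea, not the authors' code: a colour rule marks fire-like pixels, and pixels whose intensity varies little over recent frames are filtered out, since real flames flicker. Thresholds are illustrative; frames would typically be captured with OpenCV's VideoCapture:

```python
# Colour rule plus temporal-variation gate for candidate fire pixels.
import numpy as np

def fire_mask(frames_bgr, r_min=180, std_min=12.0):
    """frames_bgr: recent frames as array of shape (n, h, w, 3), uint8."""
    frames = np.asarray(frames_bgr, dtype=np.float32)
    b, g, r = frames[-1, ..., 0], frames[-1, ..., 1], frames[-1, ..., 2]
    colour_rule = (r > r_min) & (r > g) & (g > b)   # classic flame colour test
    intensity = frames.mean(axis=3)                 # per-pixel grey level
    flicker = intensity.std(axis=0) > std_min       # temporal variation gate
    return colour_rule & flicker                    # fire-coloured AND flickering
```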
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Reeves, Anthony P.; Barr, R. Graham; Yankelevitz, David F.; Henschke, Claudia I.
2010-03-01
CT scans allow for the quantitative evaluation of the anatomical bases of emphysema. Recently, a non-density-based geometric measurement of lung diaphragm curvature has been proposed as a method for the quantification of emphysema from CT. This work analyzes the variability of diaphragm curvature and evaluates the effectiveness of a compensation methodology for the reduction of this variability as compared to the emphysema index. Using a dataset of 43 scan-pairs with less than a 100-day time-interval between scans, we find that diaphragm curvature had a trend towards lower overall variability than the emphysema index (95% CI: -9.7 to +14.7 vs. -15.8 to +12.0), and that the variation of both measures was reduced after compensation. We conclude that the variation of the new measure can be considered comparable to the established measure and that the compensation can successfully reduce the apparent variation of quantitative measures.
Zhao, Guangju; Mu, Xingmin; Jiao, Juying; Gao, Peng; Sun, Wenyi; Li, Erhui; Wei, Yanhong; Huang, Jiacong
2018-05-23
Understanding the relative contributions of climate change and human activities to variations in sediment load is of great importance for regional soil and river basin management. Considerable studies have investigated the spatial-temporal variation of sediment load within the Loess Plateau; however, contradictory findings exist among the methods used. This study systematically reviewed six quantitative methods: simple linear regression, double mass curve, sediment identity factor analysis, dam-sedimentation based method, the Sediment Delivery Distributed (SEDD) model, and the Soil and Water Assessment Tool (SWAT) model. The calculation procedures and merits of each method were systematically explained. A case study in the Huangfuchuan watershed on the northern Loess Plateau was undertaken. The results showed that sediment load was reduced by 70.5% during the changing period from 1990 to 2012 compared to the baseline period from 1955 to 1989. Human activities accounted for an average of 93.6 ± 4.1% of the total decline in sediment load, whereas climate change contributed 6.4 ± 4.1%. Five methods produced similar estimates, but linear regression yielded relatively different results. The results of this study provide a good reference for assessing the effects of climate change and human activities on sediment load variation using different methods.
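As an illustration of the simplest of the reviewed approaches, a regression-baseline attribution can be sketched as follows; variable names and the single-predictor choice are assumptions, not the study's exact procedure:

```python
# Regression-baseline attribution: calibrate load vs. precipitation in the
# baseline period, predict the changing period, and attribute the residual
# reduction to human activities.
import numpy as np

def attribute_change(precip_base, load_base, precip_change, load_change):
    slope, intercept = np.polyfit(precip_base, load_base, 1)
    predicted = slope * precip_change + intercept   # load expected from climate alone
    total_reduction = load_base.mean() - load_change.mean()
    climate_part = load_base.mean() - predicted.mean()
    human_part = total_reduction - climate_part
    return (100 * climate_part / total_reduction,   # % attributed to climate
            100 * human_part / total_reduction)     # % attributed to humans
```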
NASA Astrophysics Data System (ADS)
Jochimsen, Thies H.; Schulz, Jessica; Busse, Harald; Werner, Peter; Schaudinn, Alexander; Zeisig, Vilia; Kurch, Lars; Seese, Anita; Barthel, Henryk; Sattler, Bernhard; Sabri, Osama
2015-06-01
This study explores the possibility of using simultaneous positron emission tomography—magnetic resonance imaging (PET-MRI) to estimate the lean body mass (LBM) in order to obtain a standardized uptake value (SUV) which is less dependent on the patients' adiposity. This approach is compared to (1) the commonly-used method based on a predictive equation for LBM, and (2) to using an LBM derived from PET-CT data. It is hypothesized that an MRI-based correction of SUV provides a robust method due to the high soft-tissue contrast of MRI. A straightforward approach to calculate an MRI-derived LBM is presented. It is based on the fat and water images computed from the two-point Dixon MRI primarily used for attenuation correction in PET-MRI. From these images, a water fraction was obtained for each voxel. Averaging over the whole body yielded the weight-normalized LBM. Performance of the new approach in terms of reducing variations of 18F-Fludeoxyglucose SUVs in brain and liver across 19 subjects was compared with results using predictive methods and PET-CT data to estimate the LBM. The MRI-based method reduced the coefficient of variation of SUVs in the brain by 41 ± 10% which is comparable to the reduction by the PET-CT method (35 ± 10%). The reduction of the predictive LBM method was 29 ± 8%. In the liver, the reduction was less clear, presumably due to other sources of variation. In conclusion, employing the Dixon data in simultaneous PET-MRI for calculation of lean body mass provides a brain SUV which is less dependent on patient adiposity. The reduced dependency is comparable to that obtained by CT and predictive equations. Therefore, it is more comparable across patients. The technique does not impose an overhead in measurement time and is straightforward to implement.
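The LBM calculation described above reduces to a few lines; a minimal sketch, in which the unit handling and the implicit 1 g/mL tissue-density assumption are simplifications, not the authors' exact code:

```python
# Weight-normalized LBM from two-point Dixon fat/water images, then an
# LBM-normalized SUV.
import numpy as np

def suv_lbm(activity_bqml, dose_bq, weight_kg, water_img, fat_img, body_mask):
    # Per-voxel water fraction from the Dixon water and fat images.
    water_fraction = water_img / (water_img + fat_img + 1e-9)
    # Average over the body mask scales patient weight to lean body mass.
    lbm_kg = weight_kg * water_fraction[body_mask].mean()
    # SUV = tissue activity / (injected dose per gram of LBM).
    return activity_bqml / (dose_bq / (lbm_kg * 1000.0))
```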
Accurate mask-based spatially regularized correlation filter for visual tracking
NASA Astrophysics Data System (ADS)
Gu, Xiaodong; Xu, Xinping
2017-01-01
Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodicity assumption on the training samples to learn a classifier efficiently. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects; however, these methods use a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variations. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective reduces the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which makes the algorithm converge quickly. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.
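For background, the frequency-domain machinery that DCF trackers build on can be illustrated with a minimal single-channel MOSSE-style filter; the paper's mask-based spatial regularization is not reproduced here:

```python
# Closed-form correlation filter in the Fourier domain (MOSSE-style):
# H = (G * conj(F)) / (F * conj(F) + lam), desired response G is a Gaussian.
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
    F = np.fft.fft2(patch)
    G = np.fft.fft2(np.fft.ifftshift(g))            # Gaussian peak at origin
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, patch):
    response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    return np.unravel_index(response.argmax(), response.shape)  # target shift
```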
Zhekova, Hristina R; Seth, Michael; Ziegler, Tom
2011-11-14
We have recently developed a methodology for the calculation of exchange coupling constants J in weakly interacting polynuclear metal clusters. The method is based on unrestricted and restricted second order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) and is here applied to eight binuclear copper systems. Comparison of the SF-CV(2)-DFT results with experiment and with results obtained from other DFT and wave function based methods has been made. Restricted SF-CV(2)-DFT with the BH&HLYP functional yields consistently J values in excellent agreement with experiment. The results acquired from this scheme are comparable in quality to those obtained by accurate multi-reference wave function methodologies such as difference dedicated configuration interaction and the complete active space with second-order perturbation theory.
Colloquium: Search for a drifting proton-electron mass ratio from H2
NASA Astrophysics Data System (ADS)
Ubachs, W.; Bagdonaite, J.; Salumbides, E. J.; Murphy, M. T.; Kaper, L.
2016-04-01
An overview is presented of the H2 quasar absorption method to search for a possible variation of the proton-electron mass ratio μ = m_p/m_e on a cosmological time scale. The method is based on a comparison between wavelengths of absorption lines in the H2 Lyman and Werner bands as observed at high redshift and wavelengths of the same lines measured at zero redshift in the laboratory. For this comparison, sensitivity coefficients for a relative variation of μ are calculated for all individual lines and included in the fitting routine that derives a value for Δμ/μ. Details of the analysis of astronomical spectra, obtained with large 8-10 m class optical telescopes equipped with high-resolution echelle-grating spectrographs, are explained. The methods and results of laboratory molecular spectroscopy of H2, in particular the laser-based metrology studies for determining the rest wavelengths of the Lyman and Werner band absorption lines, are reviewed. Theoretical physics scenarios providing a rationale for a varying μ are discussed briefly, as well as alternative spectroscopic approaches to probing variation of μ other than the H2 method. A recent approach to detecting a dependence of the proton-to-electron mass ratio on environmental conditions, such as the presence of strong gravitational fields, is also highlighted. Currently some 56 H2 absorption systems are known and listed. Their usefulness for detecting μ variation is discussed in terms of column densities and the brightness of the background quasar sources, along with future observational strategies. The astronomical observations of ten quasar systems analyzed so far set a constraint on a varying proton-electron mass ratio of |Δμ/μ| < 5×10^-6 (3σ), which is a null result holding for redshifts in the range z = 2.0-4.2. This corresponds to look-back times of (10-12.4)×10^9 years into cosmic history. Attempts to interpret the results from these ten H2 absorbers in terms of a spatial variation of μ are currently hampered by the small sample size and their coincidental distribution in a relatively narrow band across the sky.
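The fitting idea can be made concrete with a worked sketch: each line yields a reduced redshift ζ_i = λ_obs,i/λ_lab,i - 1 = z + (1 + z) K_i (Δμ/μ), so a straight-line fit of ζ against the sensitivity coefficients K gives Δμ/μ from the slope. Inputs below are placeholders, not real measurements:

```python
# Linear fit of reduced redshifts against sensitivity coefficients:
# zeta = (1 + z) * K * (dmu/mu) + z, so intercept = z, slope = (1+z)*dmu/mu.
import numpy as np

def fit_dmu_over_mu(lam_obs, lam_lab, K):
    zeta = lam_obs / lam_lab - 1.0
    slope, intercept = np.polyfit(K, zeta, 1)
    z = intercept
    return slope / (1.0 + z)   # dmu/mu
```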
Khana, Diba; Rossen, Lauren M; Hedegaard, Holly; Warner, Margaret
2018-01-01
Hierarchical Bayes models have been used in disease mapping to examine small-scale geographic variation. State-level geographic variation in less common mortality outcomes has been reported; however, county-level variation is rarely examined. Due to concerns about statistical reliability and confidentiality, county-level mortality rates based on fewer than 20 deaths are suppressed under the statistical reliability criteria of the Division of Vital Statistics, National Center for Health Statistics (NCHS), precluding an examination of spatio-temporal variation in less common mortality outcomes, such as suicide rates (SRs), at the county level using direct estimates. Existing Bayesian spatio-temporal modeling strategies can be applied via Integrated Nested Laplace Approximation (INLA) in R to a large number of rare mortality outcomes, enabling the examination of spatio-temporal variation on smaller geographic scales such as counties. This method allows examination of spatio-temporal variation across the entire U.S., even where the data are sparse. We used mortality data from 2005-2015 to explore spatio-temporal variation in SRs as one application of the Bayesian spatio-temporal modeling strategy in R-INLA to predict year- and county-specific SRs. Specifically, hierarchical Bayesian spatio-temporal models were implemented with spatially structured and unstructured random effects, correlated time effects, time-varying confounders, and space-time interaction terms in the software R-INLA, borrowing strength across both counties and years to produce smoothed county-level SRs. Model-based estimates of SRs were mapped to explore geographic variation.
Temperature compensated and self-calibrated current sensor using reference magnetic field
Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane
2007-10-09
A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference magnetic field generated within the current sensor housing is detected by the magnetic field sensors and is used to correct variations in the output signal due to temperature variations and aging.
Temperature compensated current sensor using reference magnetic field
Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane
2007-10-09
A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference magnetic field generated within the current sensor housing is detected by a separate but identical magnetic field sensor and is used to correct variations in the output signal due to temperature variations and aging.
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational method (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational method uses the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solution of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse-flow nature of the costate equations. The application of the variational method to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational method offers a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite-difference sensitivity analysis.
Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.
Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick
2009-08-17
In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods, including other LMS and constant-statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts but has a slightly longer convergence time.
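A hedged sketch of the gating idea follows; the learning rate, gate threshold, and window sizes are assumptions, not the published parameters:

```python
# Gated LMS nonuniformity correction: per-pixel gain/offset follow an LMS
# update toward a spatially smoothed target, but the update is switched off
# where recent temporal variation is small, which suppresses ghosting.
import numpy as np
from scipy.ndimage import uniform_filter

def gated_lms_nuc(frames, mu=1e-6, var_gate=4.0, win=5):
    """frames: iterable of float32 2-D arrays from the infrared sensor."""
    gain, offset = np.ones_like(frames[0]), np.zeros_like(frames[0])
    history = []
    for raw in frames:
        corrected = gain * raw + offset
        desired = uniform_filter(corrected, size=win)   # local-mean target
        error = corrected - desired
        history = (history + [raw])[-8:]                # short temporal window
        gate = np.var(np.stack(history), axis=0) > var_gate  # scene motion?
        gain -= mu * error * raw * gate                 # LMS steps, gated
        offset -= mu * error * gate
        yield corrected
```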
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2002-01-01
A recently developed variationally stable quasi-relativistic method, which is based on the low-order approximation to the method of normalized elimination of the small component, was incorporated into density functional theory (DFT). The new method was tested for diatomic molecules involving Ag, Cd, Au, and Hg by calculating equilibrium bond lengths, vibrational frequencies, and dissociation energies. The method is easy to implement into standard quantum chemical programs and leads to accurate results for the benchmark systems studied.
Lin, Guoping; Candela, Y; Tillement, O; Cai, Zhiping; Lefèvre-Seguin, V; Hare, J
2012-12-15
A method based on thermal bistability for ultralow-threshold microlaser optimization is demonstrated. When sweeping the pump laser frequency across a pump resonance, the dynamic thermal bistability slows down the power variation. The resulting line shape modification enables real-time monitoring of the laser characteristic. We demonstrate this method for a functionalized microsphere exhibiting a submicrowatt laser threshold. This approach is confirmed by comparing the results with a step-by-step recording under quasi-static thermal conditions.
Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S
2018-02-01
Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent, and highly nonstationary, with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely the Stockwell transform (S-transform), B-distribution (BD), and extended modified B-distribution (EMBD), are proposed to differentiate dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD, and EMBD. Twelve features are extracted from each method, and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely naïve Bayes, support vector machines (SVM) with polynomial and radial basis kernels, random forests, and rotation forests, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to capture the nonstationary variations of sEMG signals. Most of the features exhibit a statistically significant difference between the muscle fatigue and nonfatigue conditions. GA and BPSO reduce the number of features by up to 66% for the EMBD and BD TFDs, respectively. The combination of EMBD and polynomial-kernel SVM is found to be most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded during dynamic fatiguing contractions. In particular, the combination of EMBD and polynomial-kernel SVM could be used to detect dynamic muscle fatigue conditions.
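A simplified stand-in for the reported pipeline, with the GA/BPSO selector replaced by a basic univariate selector for brevity; this substitution and all data shapes are assumptions, not the authors' setup:

```python
# Time-frequency features -> feature selection -> polynomial-kernel SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(104, 12)   # 12 TFD features per segment (placeholder data)
y = np.repeat([0, 1], 52)     # nonfatigue vs. fatigue labels (placeholder)

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=8),   # stand-in for GA/BPSO
                    SVC(kernel="poly", degree=3, C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```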
Law, A; Yu, J S; Wang, W; Lin, J; Lynen, R
2017-09-01
Three measures to assess the provision of effective contraception methods among reproductive-aged women have recently been endorsed for national public reporting. Based on these measures, this study examined real-world trends and regional variations in contraceptive provision in a commercially insured population in the United States. Women 15-44 years old with continuous enrollment in each year from 2005 to 2014 were identified from a commercial claims database. In accordance with the proposed measures, the percentages of women (a) provided most effective or moderately effective (MEME) methods of contraception and (b) provided a long-acting reversible contraceptive (LARC) method were calculated in two populations: women at risk for unintended pregnancy and women who had a live birth, within 3 and 60 days of delivery. During the 10-year period, the percentage of women at risk for unintended pregnancy provided MEME contraceptive methods increased among 15-20-year-olds (24.5%-35.9%) and 21-44-year-olds (26.2%-31.5%), and the percentage provided a LARC method also increased among 15-20-year-olds (0.1%-2.4%) and 21-44-year-olds (0.8%-3.9%). Provision of LARC methods increased most in the North Central and West regions among both age groups. Provision of MEME contraceptives and LARC methods to women who had a live birth within 60 days postpartum also increased across age groups and regions. This assessment indicates an overall trend of increasing provision of MEME contraceptive methods in the commercial sector, albeit with age-group and regional variations. If implemented, these proposed measures may have impacts on health plan contraceptive access policy.
Noisy image magnification with total variation regularization and order-changed dictionary learning
NASA Astrophysics Data System (ADS)
Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi
2015-12-01
Noisy low-resolution (LR) images are what is usually obtained in real applications, but many existing image magnification algorithms cannot obtain good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm combines the advantages of regularization-based and learning-based methods: the first step is based on total variation (TV) regularization and the second step on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image while simultaneously suppressing the noise in it. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm can also provide better visual quality on natural LR images.
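The first (TV) step can be illustrated with scikit-image; the dictionary-learning second step is omitted here, and the TV weight is an illustrative assumption:

```python
# Step 1 only: magnify the noisy LR image, then suppress the amplified
# noise with TV (Chambolle) regularization.
from skimage.restoration import denoise_tv_chambolle
from skimage.transform import resize

def tv_magnify(noisy_lr, scale=2, tv_weight=0.1):
    up = resize(noisy_lr,
                (noisy_lr.shape[0] * scale, noisy_lr.shape[1] * scale),
                order=3, anti_aliasing=False)          # bicubic upscaling
    return denoise_tv_chambolle(up, weight=tv_weight)  # edge-preserving denoise
```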
NASA Astrophysics Data System (ADS)
Li, Xin; Babovic, Vladan
2017-04-01
Observational studies of the inter-annual variation of precipitation provide insight into the response of precipitation to anthropogenic climate change and natural climate variability. Inter-annual variation of precipitation results from concurrent variations in precipitation frequency and intensity; understanding the relative importance of frequency and intensity in the variability of precipitation can therefore help explain its changing properties. Investigations of long-term changes in precipitation have been carried out extensively in many regions across the world; however, detailed studies of the relative importance of precipitation frequency and intensity in the inter-annual variation of precipitation are still limited, especially in the tropics. Therefore, this study presents a comprehensive framework to investigate the inter-annual variation of precipitation and the dominance of precipitation frequency and intensity in a tropical urban city-state, Singapore, based on long-term (1980-2013) daily precipitation series from 22 rain gauges. First, an iterative Mann-Kendall trend test is applied to detect long-term trends in precipitation total, frequency, and intensity at both annual and seasonal time scales. Then, the relative importance of precipitation frequency and intensity in inducing the inter-annual variation of wet-day precipitation total is analyzed using a dominance analysis method based on linear regression. The results show statistically significant upward trends in wet-day precipitation total, frequency, and intensity at the annual time scale; however, these trends are not evident during the monsoon seasons. The inter-annual variation of wet-day precipitation is mainly dominated by precipitation intensity for most of the stations at the annual time scale and during the Northeast monsoon season. During the Southwest monsoon season, however, it is mainly dominated by precipitation frequency. These results have implications for water resources management practices in Singapore.
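One simple variance decomposition consistent with the frequency/intensity dominance question: on a log scale the wet-day total is the sum of frequency and intensity terms, so its variance splits into shares. This is a sketch of the idea, not the paper's exact regression procedure:

```python
# Annual wet-day total P = N * I, so ln P = ln N + ln I and
# Var(ln P) = Var(ln N) + Var(ln I) + 2 Cov; allocating half of the
# covariance to each term gives shares that sum to 1 by construction.
import numpy as np

def dominance_shares(annual_freq, annual_intensity):
    ln_n, ln_i = np.log(annual_freq), np.log(annual_intensity)
    c = np.cov(ln_n, ln_i, bias=True)   # population covariance matrix
    var_p = c[0, 0] + c[1, 1] + 2 * c[0, 1]
    share_freq = (c[0, 0] + c[0, 1]) / var_p
    share_int = (c[1, 1] + c[0, 1]) / var_p
    return share_freq, share_int
```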
Variational Principles for Buckling of Microtubules Modeled as Nonlocal Orthotropic Shells
2014-01-01
A variational principle for microtubules subject to a buckling load is derived by semi-inverse method. The microtubule is modeled as an orthotropic shell with the constitutive equations based on nonlocal elastic theory and the effect of filament network taken into account as an elastic surrounding. Microtubules can carry large compressive forces by virtue of the mechanical coupling between the microtubules and the surrounding elastic filament network. The equations governing the buckling of the microtubule are given by a system of three partial differential equations. The problem studied in the present work involves the derivation of the variational formulation for microtubule buckling. The Rayleigh quotient for the buckling load as well as the natural and geometric boundary conditions of the problem is obtained from this variational formulation. It is observed that the boundary conditions are coupled as a result of nonlocal formulation. It is noted that the analytic solution of the buckling problem for microtubules is usually a difficult task. The variational formulation of the problem provides the basis for a number of approximate and numerical methods of solutions and furthermore variational principles can provide physical insight into the problem.
Maxwell, M; Howie, J G; Pryde, C J
1998-01-01
BACKGROUND: Prescribing matters (particularly budget setting and research into prescribing variation between doctors) have been handicapped by the absence of credible measures of the volume of drugs prescribed. AIM: To use the defined daily dose (DDD) method to study variation in the volume and cost of drugs prescribed across the seven main British National Formulary (BNF) chapters with a view to comparing different methods of setting prescribing budgets. METHOD: Study of one year of prescribing statistics from all 129 general practices in Lothian, covering 808,059 patients: analyses of prescribing statistics for 1995 to define volume and cost/volume of prescribing for one year for 10 groups of practices defined by the age and deprivation status of their patients, for seven BNF chapters; creation of prescribing budgets for 1996 for each individual practice based on the use of target volume and cost statistics; comparison of 1996 DDD-based budgets with those set using the conventional historical approach; and comparison of DDD-based budgets with budgets set using a capitation-based formula derived from local cost/patient information. RESULTS: The volume of drugs prescribed was affected by the age structure of the practices in BNF Chapters 1 (gastrointestinal), 2 (cardiovascular), and 6 (endocrine), and by deprivation structure for BNF Chapters 3 (respiratory) and 4 (central nervous system). Costs per DDD in the major BNF chapters were largely independent of age, deprivation structure, or fundholding status. Capitation and DDD-based budgets were similar to each other, but both differed substantially from historic budgets. One practice in seven gained or lost more than 100,000 Pounds per annum using DDD or capitation budgets compared with historic budgets. The DDD-based budget, but not the capitation-based budget, can be used to set volume-specific prescribing targets. CONCLUSIONS: DDD-based and capitation-based prescribing budgets can be set using a simple explanatory model and generalizable methods. In this study, both differed substantially from historic budgets. DDD budgets could be created to accommodate new prescribing strategies and raised or lowered to reflect local intentions to alter overall prescribing volume or cost targets. We recommend that future work on setting budgets and researching prescribing variations should be based on DDD statistics.
Variational optical flow computation in real time.
Bruhn, Andrés; Weickert, Joachim; Feddern, Christian; Kohlberger, Timo; Schnörr, Christoph
2005-05-01
This paper investigates the usefulness of bidirectional multigrid methods for variational optical flow computations. Although these numerical schemes are among the fastest methods for solving equation systems, they are rarely applied in the field of computer vision. We demonstrate how to employ these numerical methods for the treatment of variational optical flow formulations and show that the efficiency of this approach even allows for real-time performance on standard PCs. As a representative of variational optical flow methods, we consider the recently introduced combined local-global method, which can be considered a noise-robust generalization of the Horn and Schunck technique. We present a decoupled, as well as a coupled, version of the classical Gauss-Seidel solver, and we develop several multigrid implementations based on a discretization coarse-grid approximation. In contrast with standard bidirectional multigrid algorithms, we take advantage of intergrid transfer operators that allow for nondyadic grid hierarchies. As a consequence, no restrictions concerning the image size or the number of traversed levels have to be imposed. In the experimental section, we juxtapose the developed multigrid schemes and demonstrate their superior performance when compared to unidirectional multigrid methods and nonhierarchical solvers. For the well-known 316 x 252 Yosemite sequence, we succeeded in computing the complete set of dense flow fields in three quarters of a second on a 3.06-GHz Pentium 4 PC. This corresponds to a frame rate of 18 flow fields per second, which outperforms the widely used Gauss-Seidel method by almost three orders of magnitude.
Dual-scale Galerkin methods for Darcy flow
NASA Astrophysics Data System (ADS)
Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex
2018-02-01
The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.
Aldhous, Marian C; Abu Bakar, Suhaili; Prescott, Natalie J; Palla, Raquel; Soo, Kimberley; Mansfield, John C; Mathew, Christopher G; Satsangi, Jack; Armour, John A L
2010-12-15
The copy number variation in beta-defensin genes on human chromosome 8 has been proposed to underlie susceptibility to inflammatory disorders, but presents considerable challenges for accurate typing on the scale required for adequately powered case-control studies. In this work, we have used accurate methods of copy number typing based on the paralogue ratio test (PRT) to assess beta-defensin copy number in more than 1500 UK DNA samples including more than 1000 cases of Crohn's disease. A subset of 625 samples was typed using both PRT-based methods and standard real-time PCR methods, from which direct comparisons highlight potentially serious shortcomings of a real-time PCR assay for typing this variant. Comparing our PRT-based results with two previous studies based only on real-time PCR, we find no evidence to support the reported association of Crohn's disease with either low or high beta-defensin copy number; furthermore, it is noteworthy that there are disagreements between different studies on the observed frequency distribution of copy number states among European controls. We suggest safeguards to be adopted in assessing and reporting the accuracy of copy number measurement, with particular emphasis on integer clustering of results, to avoid reporting of spurious associations in future case-control studies.
Aldhous, Marian C.; Abu Bakar, Suhaili; Prescott, Natalie J.; Palla, Raquel; Soo, Kimberley; Mansfield, John C.; Mathew, Christopher G.; Satsangi, Jack; Armour, John A.L.
2010-01-01
The copy number variation in beta-defensin genes on human chromosome 8 has been proposed to underlie susceptibility to inflammatory disorders, but presents considerable challenges for accurate typing on the scale required for adequately powered case–control studies. In this work, we have used accurate methods of copy number typing based on the paralogue ratio test (PRT) to assess beta-defensin copy number in more than 1500 UK DNA samples including more than 1000 cases of Crohn's disease. A subset of 625 samples was typed using both PRT-based methods and standard real-time PCR methods, from which direct comparisons highlight potentially serious shortcomings of a real-time PCR assay for typing this variant. Comparing our PRT-based results with two previous studies based only on real-time PCR, we find no evidence to support the reported association of Crohn's disease with either low or high beta-defensin copy number; furthermore, it is noteworthy that there are disagreements between different studies on the observed frequency distribution of copy number states among European controls. We suggest safeguards to be adopted in assessing and reporting the accuracy of copy number measurement, with particular emphasis on integer clustering of results, to avoid reporting of spurious associations in future case–control studies. PMID:20858604
Investigation of Quasi-periodic Solar Oscillations in Sunspots Based on SOHO/MDI Magnetograms
NASA Astrophysics Data System (ADS)
Kallunki, J.; Riehokainen, A.
2012-10-01
In this work we study quasi-periodic solar oscillations in sunspots, based on the variation of the amplitude of the magnetic field strength and the variation of the sunspot area. We investigate long-period oscillations between three minutes and ten hours. The magnetic field synoptic maps were obtained from the SOHO/MDI. Wavelet (Morlet), global wavelet spectrum (GWS) and fast Fourier transform (FFT) methods are used in the periodicity analysis at the 95 % significance level. Additionally, the quiet Sun area (QSA) signal and an instrumental effect are discussed. We find several oscillation periods in the sunspots above the 95 % significance level: 3 - 5, 10 - 23, 220 - 240, 340 and 470 minutes, and we also find common oscillation periods (10 - 23 minutes) between the sunspot area variation and that of the magnetic field strength. We discuss possible mechanisms for the obtained results, based on the existing models for sunspot oscillations.
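As a stand-in for the wavelet/GWS/FFT analysis, a Lomb-Scargle periodogram with a false-alarm level illustrates testing periods at the 95% significance level; the series below is synthetic, not the SOHO/MDI data:

```python
# Lomb-Scargle periodogram of a (synthetic) sunspot-area series with a
# 95% false-alarm threshold for candidate oscillation periods.
import numpy as np
from astropy.timeseries import LombScargle

t = np.arange(0, 2000, 1.0)                       # time in minutes (placeholder)
area = np.sin(2 * np.pi * t / 15.0) + np.random.normal(0, 1.0, t.size)

ls = LombScargle(t, area)
freq, power = ls.autopower()
fap95 = ls.false_alarm_level(0.05)                # 95% significance level
peaks = 1.0 / freq[power > fap95]                 # periods above the threshold
print("significant periods (min):", np.round(np.sort(peaks), 1)[:5])
```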
INTEGRAL/SPI data segmentation to retrieve source intensity variations
NASA Astrophysics Data System (ADS)
Bouchet, L.; Amestoy, P. R.; Buttari, A.; Rouet, F.-H.; Chauvin, M.
2013-07-01
Context. The INTEGRAL/SPI X/γ-ray spectrometer (20 keV-8 MeV) is an instrument for which recovering source intensity variations is not straightforward and can constitute a difficulty for data analysis. In most cases, determining the source intensity changes between exposures relies largely on a priori information. Aims: We propose techniques that help to overcome the difficulties related to source intensity variations and make this step more rational. In addition, the constructed "synthetic" light curves should permit us to obtain a sky model that describes the data better and optimizes the source signal-to-noise ratios. Methods: For this purpose, the time intensity variation of each source was modeled as a combination of piecewise segments of time during which a given source exhibits a constant intensity. To optimize the signal-to-noise ratios, the number of segments was minimized. We present a first method that takes advantage of previous time series that can be obtained from another instrument on board the INTEGRAL observatory. A data segmentation algorithm was then used to synthesize the time series into segments. The second method no longer needs external light curves, but solely SPI raw data. For this, we developed a specific algorithm that involves the SPI transfer function. Results: The time segmentation algorithms that were developed solve a difficulty inherent to the SPI instrument, namely the intensity variations of sources between exposures, and they allow us to obtain more information about the sources' behavior. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain, and Switzerland), the Czech Republic, and Poland, with the participation of Russia and the USA.
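The segmentation step described above, finding the smallest number of constant-intensity segments consistent with the data, can be illustrated with a generic least-squares change-point sketch (illustrative function names and penalty; this is not the authors' SPI-specific algorithm, which works through the instrument transfer function):

```python
import numpy as np

def piecewise_constant_segments(y, penalty):
    """Least-squares segmentation into constant-intensity pieces by dynamic
    programming: minimise (within-segment squared error) + penalty * (#segments),
    so that fewer segments, and hence higher per-segment S/N, are favoured."""
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))
    s2 = np.concatenate(([0.0], np.cumsum(np.square(y))))

    def sse(i, j):  # squared error of y[i:j] around its own mean
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / (j - i)

    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + sse(i, j) + penalty
            if c < best[j]:
                best[j], prev[j] = c, i
    cuts, j = [], n
    while j > 0:                       # backtrack the optimal boundaries
        cuts.append((prev[j], j))
        j = prev[j]
    return cuts[::-1]

# toy light curve: three constant-intensity states plus noise
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(5, 1, 40), rng.normal(9, 1, 30), rng.normal(6, 1, 30)])
print(piecewise_constant_segments(y, penalty=10.0))
```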
NASA Astrophysics Data System (ADS)
Zhang, Z.; Werner, F.; Cho, H.-M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry
2017-02-01
The so-called bi-spectral method retrieves cloud optical thickness (τ) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near infrared (VIS/NIR) band and the other in a shortwave-infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved τ and re. In this study, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the τ and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the τ and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval.
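The core of such a Taylor-expansion framework can be stated compactly. As a sketch in our own notation (not necessarily the authors'), writing the retrieval as a two-variable function f(R_vis, R_swir) and expanding to second order about the pixel-mean reflectances gives

```latex
% second-order Taylor view of the sub-pixel retrieval bias (sketch)
\mathbb{E}\!\left[f(R_{\mathrm{vis}},R_{\mathrm{swir}})\right] \approx
  f(\bar R_{\mathrm{vis}},\bar R_{\mathrm{swir}})
  + \tfrac{1}{2}\,\frac{\partial^2 f}{\partial R_{\mathrm{vis}}^2}\,\mathrm{Var}(R_{\mathrm{vis}})
  + \frac{\partial^2 f}{\partial R_{\mathrm{vis}}\,\partial R_{\mathrm{swir}}}\,\mathrm{Cov}(R_{\mathrm{vis}},R_{\mathrm{swir}})
  + \tfrac{1}{2}\,\frac{\partial^2 f}{\partial R_{\mathrm{swir}}^2}\,\mathrm{Var}(R_{\mathrm{swir}})
```

so the retrieval evaluated at the mean reflectances differs from the mean retrieval by curvature terms weighted by the sub-pixel variances and covariance, which is how the VIS/NIR variance can end up dominating the re error.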
σ-SCF: A Direct Energy-targeting Method To Mean-field Excited States
NASA Astrophysics Data System (ADS)
Ye, Hongzhou; Welborn, Matthew; Ricke, Nathan; van Voorhis, Troy
The mean-field solutions of electronic excited states are much less accessible than ground state (e.g. Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF, tend to fall into the lowest solution consistent with a given symmetry, a problem known as "variational collapse". In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states - ground or excited - are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). This work was funded by a Grant from NSF (CHE-1464804).
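The variance-based targeting idea is easy to demonstrate on a toy matrix Hamiltonian (a sketch of the principle only, not the authors' σ-SCF working equations): minimizing σ² = ⟨(H − ω)²⟩ over normalized states selects the eigenstate whose energy lies closest to the guess ω, ground or excited alike.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                       # toy Hermitian "Hamiltonian"

def variance_target(H, omega):
    """Minimise <psi|(H - omega)^2|psi> over normalised psi: the minimiser is
    the lowest eigenvector of (H - omega)^2, i.e. the eigenstate of H whose
    energy is closest to the guess omega."""
    M = H - omega * np.eye(len(H))
    _, V = np.linalg.eigh(M @ M)
    psi = V[:, 0]
    return psi @ H @ psi                # energy of the targeted state

print("spectrum:", np.round(np.linalg.eigvalsh(H), 3))
print("state targeted near omega=1.0:", round(variance_target(H, 1.0), 3))
```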
Adaptive Granulation-Based Prediction for Energy System of Steel Industry.
Wang, Tianyu; Han, Zhongyang; Zhao, Jun; Wang, Wei
2018-01-01
The flow variation tendency of byproduct gas plays a crucial role in energy scheduling in the steel industry. An accurate prediction of its future trends will be significantly beneficial for the economic profits of steel enterprises. In this paper, a long-term prediction model for the energy system is proposed by providing an adaptive granulation-based method that considers the production semantics involved in the fluctuation tendency of the energy data and partitions the data into a series of information granules. To fully reflect the data characteristics of the formed unequal-length temporal granules, a 3-D feature space consisting of the timespan, the amplitude, and the linetype is designed as linguistic descriptors. In particular, a collaborative-conditional fuzzy clustering method is proposed to granularize the tendency-based feature descriptors and specifically measure the amplitude variation of industrial data, which plays a dominant role in the feature space. To quantify the performance of the proposed method, a series of real-world industrial data coming from the energy data center of a steel plant is employed to conduct comparative experiments. The experimental results demonstrate that the proposed method successfully satisfies the requirements of practically viable prediction.
Comparing two Bayes methods based on the free energy functions in Bernoulli mixtures.
Yamazaki, Keisuke; Kaji, Daisuke
2013-08-01
Hierarchical learning models are ubiquitously employed in information science and data engineering. Their structure makes the posterior distribution complicated in the Bayes method, so prediction, which requires constructing the posterior, is not tractable, although the advantages of the method are empirically well known. The variational Bayes method is widely used as an approximation method in applications; it has a tractable posterior based on the variational free energy function. The asymptotic behavior has been studied in many hierarchical models, and a phase transition is observed. The exact form of the asymptotic variational Bayes energy is derived in Bernoulli mixture models, and the phase diagram shows that there are three types of parameter learning. However, the approximation accuracy and the interpretation of the transition point have not been clarified yet. The present paper precisely analyzes the Bayes free energy function of Bernoulli mixtures. Comparing the free energy functions of these two Bayes methods, we can determine the approximation accuracy and elucidate the behavior of the parameter learning. Our results show that the Bayes free energy exhibits the same learning types, while the transition points are different. Copyright © 2013 Elsevier Ltd. All rights reserved.
Gu, Hai Ting; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Abrupt change is an important manifestation of hydrological processes undergoing dramatic variation in the context of global climate change, and its accurate recognition has great significance for understanding hydrological process changes and for carrying out practical hydrology and water resources work. Traditional methods are unreliable near both ends of a sample series, and the results of different methods are often inconsistent. To solve this problem, we proposed a comprehensive weighted recognition method for hydrological abrupt change, based on weights derived from a comparison of 12 commonly used change-point testing methods. The reliability of the method was verified by Monte Carlo statistical tests. The results showed that the efficiency of the 12 methods was influenced by factors including the coefficient of variation (Cv) and skewness coefficient (Cs) before the change point, the mean value difference coefficient, the Cv difference coefficient, and the Cs difference coefficient, but had no significant relationship with the mean value of the sequence. Based on the performance of each method in the statistical tests, a weight was assigned to each test method. The sliding rank sum test and the sliding run test had the highest weights, whereas the R/S test had the lowest weight. By this means, the change point with the largest comprehensive weight can be selected as the final result when the results of the different methods are inconsistent. This method was used to analyze the maximum-flow sequences of different durations (1-day, 3-day, 5-day, 7-day, and 1-month) at Jiajiu station in the lower reaches of the Lancang River. The results showed that each sequence had an obvious jump variation in 2004, which was in agreement with the physical causes of hydrological process change and with water conservancy construction. The rationality and reliability of the proposed method were thus verified.
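To make the weighting idea concrete, here is a minimal sketch (two classical detectors and illustrative weights, not the paper's 12-method scheme) that pools candidate change points and picks the location carrying the largest total weight:

```python
import numpy as np

def pettitt(y):
    """Pettitt test: change point at the maximum of |U_t|,
    U_t = sum_{i<=t} sum_{j>t} sign(y_i - y_j)."""
    n = len(y)
    u = [np.sign(y[:t, None] - y[None, t:]).sum() for t in range(1, n)]
    return int(np.argmax(np.abs(u))) + 1

def sliding_t(y, half=10):
    """Sliding t-test: largest |t| comparing equal windows before/after each point."""
    n, best, arg = len(y), -np.inf, None
    for t in range(half, n - half):
        a, b = y[t - half:t], y[t:t + half]
        stat = abs(a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / half)
        if stat > best:
            best, arg = stat, t
    return arg

def weighted_change_point(y, detectors, weights, tol=5):
    """Pool candidate points lying within `tol` of each other and return the
    pooled location with the largest total weight."""
    scores = {}
    for d, w in zip(detectors, weights):
        t = d(y)
        key = min(scores, key=lambda k: abs(k - t), default=None)
        if key is not None and abs(key - t) <= tol:
            scores[key] += w
        else:
            scores[t] = w
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.2, 1, 60)])
print(weighted_change_point(y, [pettitt, sliding_t], [0.6, 0.4]))
```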
Anderson, Justin E; Michno, Jean-Michel; Kono, Thomas J Y; Stec, Adrian O; Campbell, Benjamin W; Curtin, Shaun J; Stupar, Robert M
2016-05-12
The safety of mutagenized and genetically transformed plants remains a subject of scrutiny. Data gathered and communicated on the phenotypic and molecular variation induced by gene transfer technologies will provide a science-based means to rationally address such concerns. In this study, genomic structural variation (e.g., large deletions and duplications) and single nucleotide polymorphism rates were assessed among a sample of soybean cultivars, fast neutron-derived mutants, and five genetically transformed plants developed through Agrobacterium-based transformation methods. On average, the number of genes affected by structural variations in transgenic plants was one order of magnitude less than that of fast neutron mutants and two orders of magnitude less than the rates observed between cultivars. Structural variants in transgenic plants, while rare, occurred adjacent to the transgenes and at unlinked loci on different chromosomes. DNA repair junctions at both transgenic and unlinked sites were consistent with sequence microhomology across breakpoints. The single nucleotide substitution rates were modest in both fast neutron and transformed plants, exhibiting fewer than 100 substitutions genome-wide, while inter-cultivar comparisons identified over one million single nucleotide polymorphisms. Overall, these patterns provide a fresh perspective on the genomic variation associated with high-energy induced mutagenesis and genetically transformed plants. The genetic transformation process infrequently results in novel genetic variation, and these rare events are analogous to genetic variants occurring spontaneously, already present in the existing germplasm, or induced through other types of mutagenesis. It remains unclear how broadly these results can be applied to other crops or transformation methods.
The quantitative polymerase chain reaction (qPCR) method provides rapid estimates of fecal indicator bacteria densities that have been indicated to be useful in the assessment of water quality. Primarily because this method provides faster results than standard culture-based meth...
Voltammetric methods for determination of total sulfide concentrations in anoxic sediments utilizing a previously described [1] gold-based mercury amalgam microelectrode were optimized. Systematic studies in NaCl (supporting electrolyte) and porewater indicate variations in ionic...
Nathan P. Havill; Gina Davis; Joanne Klein; Adalgisa Caccone; Scott Salom
2011-01-01
Molecular diagnostics use DNA-based methods to assign unknown organisms to species. As such, they rely on a priori species designation by taxonomists and require validation with enough samples to capture the variation within species for accurately selecting diagnostic characters.
Implementation study of wearable sensors for activity recognition systems.
Rezaie, Hamed; Ghassemian, Mona
2015-08-01
This Letter investigates and reports on a number of activity recognition methods for a wearable sensor system. The authors apply three methods for data transmission, namely 'stream-based', 'feature-based' and 'threshold-based' scenarios, to study the trade-off between recognition accuracy and the transmission and processing energy that determines the mote's battery lifetime. They also report on the impact of varying the sampling frequency and data transmission rate on the energy consumption of motes for each method. This study leads us to propose a cross-layer optimisation of an activity recognition system for provisioning acceptable levels of accuracy and energy efficiency.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only can the storage space be reduced, but the demand for detector resolution can also be greatly reduced. By exploiting the sparsity of the image signal and solving an inverse reconstruction problem, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and largely determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressive reconstruction of two-dimensional images and recover edge information better. To verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes to verify its stability, and we compare and analyze typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method of multipliers. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
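As a minimal illustration of TV-regularized compressive recovery (a smoothed-TV gradient descent rather than the paper's augmented Lagrangian / alternating direction solver; all sizes and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 80                                 # signal length, number of measurements
x_true = np.zeros(n)
x_true[60:130] = 1.0                           # piecewise-constant (sparse-gradient) signal
x_true[150:] = -0.5
A = rng.normal(size=(m, n)) / np.sqrt(m)       # compressive measurement matrix
b = A @ x_true + 0.01 * rng.normal(size=m)

lam, eps, step = 0.05, 1e-3, 0.05
D = np.diff(np.eye(n), axis=0)                 # finite-difference operator for TV

x = np.zeros(n)
for _ in range(4000):
    # gradient of  0.5*||Ax - b||^2 + lam * sum sqrt((Dx)^2 + eps)
    g_tv = D.T @ (D @ x / np.sqrt((D @ x) ** 2 + eps))
    x -= step * (A.T @ (A @ x - b) + lam * g_tv)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```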
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented a few illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
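The dividing method mentioned above is straightforward to sketch: estimate the slowly varying background illumination with a large median filter and divide it out, then compare coefficients of variation before and after (toy image, filter size, and vignetting model are assumptions, not the authors' settings):

```python
import numpy as np
from scipy.ndimage import median_filter

def divide_correct(channel, size=31, eps=1e-6):
    """Dividing method (sketch): divide out a median-filter estimate of the
    slowly varying background illumination, then rescale to [0, 1]."""
    background = median_filter(channel.astype(float), size=size)
    corrected = channel / (background + eps)
    return corrected / corrected.max()

def coefficient_of_variation(img):
    return img.std() / img.mean()

# toy "retinal" channel: vignetting-like illumination over random texture
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
illum = 1.0 - 0.4 * ((x - 64) ** 2 + (y - 64) ** 2) / 64 ** 2
img = illum * (0.5 + 0.1 * rng.random((128, 128)))

print("CV before:", coefficient_of_variation(img))
print("CV after :", coefficient_of_variation(divide_correct(img)))
```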
NASA Astrophysics Data System (ADS)
Shen, Zhengwei; Cheng, Lishuang
2017-09-01
Total variation (TV)-based image deblurring methods can introduce staircase artifacts in the homogeneous regions of the latent images recovered from degraded images, while wavelet/frame-based image deblurring methods lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame and TV-based image deblurring model. In this model, the wavelet/frame and TV-based methods may complement each other, which is verified by theoretical analysis and experimental results. To further improve the quality of the latent images, nonconvex penalty functions are used as the regularization terms of the model, which induce stronger sparsity in the solution and more accurately estimate the relatively large gradients or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter for the nonconvex penalty function, each subproblem split from the proposed model by the alternating direction method of multipliers algorithm can be guaranteed to be a convex optimization problem, and hence each subproblem can converge to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems, where a designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art models and methods.
NASA Astrophysics Data System (ADS)
Sun, Hu; Zhang, Aijia; Wang, Yishou; Qing, Xinlin P.
2017-04-01
Guided wave-based structural health monitoring (SHM) has been given considerable attention and widely studied for large-scale aircraft structures. Nevertheless, it is difficult to apply SHM systems on board or online, one of the most serious reasons being environmental influence. Load is one factor that affects not only the host structure, in which the guided wave propagates, but also the PZT, by which the guided wave is transmitted and received. In this paper, numerical analysis using the finite element method is used to study the load effect on guided waves acquired by PZT. Static loads of different grades are considered to analyze their effect on the guided wave signals that the PZT transmits and receives. Based on the variation trend of guided waves versus load, a load compensation method is developed to eliminate the effects of load in the process of damage detection. A probabilistic reconstruction algorithm based on the signal variation of each transmitter-receiver path is employed to identify the damage. Numerical tests are conducted to verify the feasibility and effectiveness of the given method.
An adaptive state of charge estimation approach for lithium-ion series-connected battery system
NASA Astrophysics Data System (ADS)
Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael
2018-07-01
Due to the incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is usually inaccurate or even divergent using model-based methods, such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator that illustrates the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively attain the estimated noise statistics for the AUKF when its prior noise statistics are not accurate or exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF with the UKF when model and measurement noise statistics are inaccurate, respectively. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.
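The innovation-based noise statistics estimator at the heart of such an adaptive filter can be sketched on a scalar toy problem (a generic adaptive Kalman filter, not the authors' battery model: with H = 1, the innovation variance is C_v = P⁻ + R, so R ≈ C_v − P⁻ estimated over a sliding window):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x_true = np.linspace(1.0, 0.6, n)               # slow "SOC-like" ramp
z = x_true + rng.normal(0, 0.05, n)             # true R = 0.05**2, unknown to the filter

q, r_hat = 1e-4, 1.0                            # process noise; deliberately bad initial R
x, p = z[0], 1.0
window, innovations, est = 30, [], []
for k in range(n):
    p = p + q                                   # predict (identity state model)
    v = z[k] - x                                # innovation
    innovations.append(v)
    if len(innovations) > window:
        innovations.pop(0)
        c_v = np.mean(np.square(innovations))   # windowed innovation covariance
        r_hat = max(c_v - p, 1e-8)              # noise statistics estimator: R = C_v - P
    kgain = p / (p + r_hat)                     # update
    x = x + kgain * v
    p = (1 - kgain) * p
    est.append(x)

print("estimated R:", r_hat, "(true 0.0025)")
print("final state error:", abs(est[-1] - x_true[-1]))
```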
Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications
NASA Astrophysics Data System (ADS)
Thanaborvornwiwat, N.; Patanukhom, K.
2018-04-01
Marker registration is a fundamental process for estimating camera poses in marker-based Augmented Reality (AR) systems. We developed an AR system that creates corresponding virtual objects on handwritten text markers. This paper presents a new registration method that is robust to low-content text markers, variation in camera poses, and variation in handwriting styles. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for feature point extraction. The experiments show that only five feature points need to be extracted per image to obtain the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared the performance of the proposed method with some existing registration methods and found that the proposed method provides better accuracy and time efficiency.
When things go pear shaped: contour variations of contacts
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2013-04-01
Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure of the CD variation such as the CD range or the standard deviation. Systematic CD deviations from the mean, such as CD signatures, are also subject to this control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important in the context of contact holes on EUV masks. For EUV masks the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV wafer prints, in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. Here, the methods of statistical shape analysis are used to analyze CD-SEM generated contour data. We demonstrate that contacts on photolithographic masks not only show size variations but also exhibit pronounced, nontrivial shape variations. In our data sets we find pronounced shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. Thus we demonstrate the limitations of classic CD measures for describing feature variations on masks. Furthermore, we show how the methods of statistical shape analysis can be used to quantify the contour variations, paving the way to a new understanding of mask linearity and its specification.
Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters
Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun
2017-01-01
Accurate scale estimation and occlusion handling is a challenging problem in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, the model is not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the change of state of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved. PMID:28241475
FBG wavelength demodulation based on a radio frequency optical true time delay method.
Wang, Jin; Zhu, Wanshan; Ma, Chenyuan; Xu, Tong
2018-06-01
A new fiber Bragg grating (FBG) wavelength shift demodulation method based on optical true time delay microwave phase detection is proposed. We used a microwave photonic link (MPL) to transport a radio frequency (RF) signal over a dispersion compensation fiber (DCF). The wavelength shift of the FBG will cause the time delay change of the optical carrier that propagates in an optical fiber with chromatic dispersion, which will result in the variation of the RF signal phase. A long DCF was adopted to enlarge the RF signal phase variation. An IQ mixer was used to measure the RF phase variation of the RF signal propagating in the MPL, and the wavelength shift of the FBG can be obtained by the measured RF signal phase variation. The experimental results showed that the wavelength shift measurement resolution is 2 pm when the group velocity dispersion of the DCF is 79.5 ps/nm and the frequency of the RF signal is 18 GHz. The demodulation time is as short as 0.1 ms. The measurement resolution can be improved simply by using a higher frequency of the RF signal and a longer DCF or larger chromatic dispersion value of the DCF.
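The quoted numbers can be checked with the basic dispersion-to-phase relation Δφ = 2π f_RF D Δλ (a back-of-envelope sketch using only values stated above):

```python
import math

f_rf = 18e9                    # RF frequency [Hz]
D = 79.5e-12 / 1e-9            # total dispersion: 79.5 ps/nm -> seconds per metre
d_lambda = 2e-12               # 2 pm wavelength shift [m]

d_tau = D * d_lambda           # delay change caused by the FBG shift [s]
d_phi = 2 * math.pi * f_rf * d_tau
print(f"delay change: {d_tau*1e15:.0f} fs, RF phase change: {math.degrees(d_phi):.2f} deg")
# ~159 fs and ~1 degree of RF phase per 2 pm: raising f_rf or D raises sensitivity,
# consistent with the abstract's note on improving resolution.
```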
Mean Field Type Control with Congestion (II): An Augmented Lagrangian Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr; Laurière, Mathieu
This work deals with a numerical method for solving a mean-field type control problem with congestion. It is the continuation of an article by the same authors, in which suitably defined weak solutions of the system of partial differential equations arising from the model were discussed and existence and uniqueness were proved. Here, the focus is put on numerical methods: a monotone finite difference scheme is proposed and shown to have a variational interpretation. Then an Alternating Direction Method of Multipliers for solving the variational problem is addressed. It is based on an augmented Lagrangian. Two kinds of boundary conditions are considered: periodic conditions and more realistic boundary conditions associated with state-constrained problems. Various test cases and numerical results are presented.
Ni, Zhuoya; Liu, Zhigang; Li, Zhao-Liang; Nerry, Françoise; Huo, Hongyuan; Sun, Rui; Yang, Peiqi; Zhang, Weiwei
2016-04-06
Significant research progress has recently been made in estimating fluorescence in the oxygen absorption bands; however, quantitative retrieval of fluorescence data is still affected by factors such as atmospheric effects. In this paper, top-of-atmosphere (TOA) radiance is generated by the MODTRAN 4 and SCOPE models. Based on simulated data, a sensitivity analysis is conducted to assess the sensitivities of four indicators (depth_absorption_band, depth_nofs-depth_withfs, radiance, and Fs/radiance) to atmospheric parameters (sun zenith angle (SZA), sensor height, elevation, visibility (VIS), and water content) in the oxygen absorption bands. The results indicate that the SZA and sensor height are the most sensitive parameters and that variations in these two parameters result in large variations (calculated as the variation value divided by the base value) in the oxygen absorption depth in the O₂-A and O₂-B bands (111.4% and 77.1% in the O₂-A band; 27.5% and 32.6% in the O₂-B band, respectively). A comparison of fluorescence retrieval using three methods (the Damm method, the Braun method, and DOAS) against SCOPE Fs indicates that the Damm method yields good results and that atmospheric correction can improve the accuracy of fluorescence retrieval; the Damm method is an improved 3FLD method that accounts for atmospheric effects. Finally, hyperspectral airborne images combined with other parameters (SZA, VIS, and water content) are exploited to estimate fluorescence using the Damm method and the 3FLD method. The retrieved fluorescence is compared with field-measured fluorescence, yielding good results (R² = 0.91 for Damm vs. SCOPE SIF; R² = 0.65 for 3FLD vs. SCOPE SIF). Five types of vegetation, including ailanthus, elm, mountain peach, willow, and Chinese ash, exhibit consistent associations between the retrieved fluorescence and field-measured fluorescence.
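For reference, the 3FLD retrieval that the Damm method builds on can be sketched as follows (band positions and radiances are hypothetical placeholders, not values from the paper):

```python
import numpy as np

def fld_3(wl, E, L, idx_in, idx_left, idx_right):
    """Three-band FLD (3FLD) sketch: interpolate the outside irradiance and
    radiance from two shoulder bands, then apply the classic FLD formula
    F = (E_out*L_in - E_in*L_out) / (E_out - E_in)."""
    w_r = (wl[idx_in] - wl[idx_left]) / (wl[idx_right] - wl[idx_left])
    w_l = 1.0 - w_r
    E_out = w_l * E[idx_left] + w_r * E[idx_right]
    L_out = w_l * L[idx_left] + w_r * L[idx_right]
    return (E_out * L[idx_in] - E[idx_in] * L_out) / (E_out - E[idx_in])

# hypothetical O2-A numbers (illustrative magnitudes only)
wl = np.array([758.0, 760.5, 770.0])   # left shoulder, in-band, right shoulder [nm]
E  = np.array([1.20, 0.35, 1.10])      # downwelling irradiance terms
L  = np.array([0.120, 0.039, 0.110])   # at-sensor radiance
print("retrieved F:", fld_3(wl, E, L, idx_in=1, idx_left=0, idx_right=2))
```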
Tracking and recognition face in videos with incremental local sparse representation model
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang
2013-10-01
This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed employing a local sparse appearance model and a covariance pooling method. In the following face recognition stage, with the employment of a novel template update strategy that incorporates incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition on real-world noisy videos from the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In the case of the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.
MEM spectral analysis for predicting influenza epidemics in Japan.
Sumi, Ayako; Kamo, Ken-ichi
2012-03-01
The prediction of influenza epidemics has long been the focus of attention in epidemiology and mathematical biology. In this study, we tested whether time series analysis was useful for predicting the incidence of influenza in Japan. The method of time series analysis we used consists of spectral analysis based on the maximum entropy method (MEM) in the frequency domain and the nonlinear least squares method in the time domain. Using this time series analysis, we analyzed the incidence data of influenza in Japan from January 1948 to December 1998; these data are unique in that they covered the periods of pandemics in Japan in 1957, 1968, and 1977. On the basis of the MEM spectral analysis, we identified the periodic modes explaining the underlying variations of the incidence data. The optimum least squares fitting (LSF) curve calculated with the periodic modes reproduced the underlying variation of the incidence data. An extension of the LSF curve could be used to predict the incidence of influenza quantitatively. Our study suggested that MEM spectral analysis would allow us to model temporal variations of influenza epidemics with multiple periodic modes much more effectively than by using the method of conventional time series analysis, which has been used previously to investigate the behavior of temporal variations in influenza data.
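The time-domain step, least-squares fitting of periodic modes identified from a spectrum and extending the fitted curve forward, can be sketched generically (synthetic monthly data; in practice the periods would come from the MEM spectral analysis, not be assumed):

```python
import numpy as np

def fit_periodic_modes(t, y, periods):
    """Least-squares fit of y(t) ~ a0 + sum_k [a_k cos(2*pi*t/T_k) + b_k sin(2*pi*t/T_k)]
    for pre-identified periods T_k; the returned model extrapolates for prediction."""
    cols = [np.ones_like(t)]
    for T in periods:
        cols += [np.cos(2 * np.pi * t / T), np.sin(2 * np.pi * t / T)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda tt: np.column_stack(
        [np.ones_like(tt)]
        + [f(2 * np.pi * tt / T) for T in periods for f in (np.cos, np.sin)]
    ) @ coef

t = np.arange(0, 120.0)                    # months of synthetic incidence data
y = 3 + 2 * np.sin(2 * np.pi * t / 12) + 0.5 * np.random.default_rng(0).normal(size=t.size)
model = fit_periodic_modes(t, y, periods=[12.0])
print("extrapolated value at month 120:", model(np.array([120.0]))[0])
```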
Tapio, I; Värv, S; Bennewitz, J; Maleviciute, J; Fimland, E; Grislis, Z; Meuwissen, T H E; Miceikiene, I; Olsaker, I; Viinalass, H; Vilkki, J; Kantanen, J
2006-12-01
Northern European indigenous cattle breeds are currently endangered and at a risk of becoming extinct. We analyzed variation at 20 microsatellite loci in 23 indigenous, 3 old imported, and 9 modern commercial cattle breeds that are presently distributed in northern Europe. We measured the breeds' allelic richness and heterozygosity, and studied their genetic relationships with a neighbor-joining tree based on the Chord genetic distance matrix. We used the Weitzman approach and the core set diversity measure of Eding et al. (2002) to quantify the contribution of each breed to the maximum amount of genetic diversity and to identify breeds important for the conservation of genetic diversity. We defined 11 breeds as a "safe set" of breeds (not endangered) and estimated a reduction in genetic diversity if all nonsafe (endangered) breeds were lost. We then calculated the increase in genetic diversity by adding one by one each of the nonsafe breeds to the safe set (the safe-set-plus-one approach). The neighbor-joining tree grouped the northern European cattle breeds into Black-and-White type, Baltic Red, and Nordic cattle groups. Väne cattle, Bohus Poll, and Danish Jersey had the highest relative contribution to the maximum amount of genetic diversity when the diversity was quantified by the Weitzman diversity measure. These breeds not only showed phylogenetic distinctiveness but also low within-population variation. When the Eding et al. method was applied, Eastern Finncattle and Lithuanian White Backed cattle contributed most of the genetic variation. If the loss of the nonsafe set of breeds happens, the reduction in genetic diversity would be substantial (72%) based on the Weitzman approach, but relatively small (1.81%) based on the Eding et al. method. The safe set contained only 66% of the observed microsatellite alleles. The safe-set-plus-one approach indicated that Bohus Poll and Väne cattle contributed most to the Weitzman diversity, whereas the Eastern Finncattle contribution was the highest according to the Eding et al. method. Our results indicate that both methods of Weitzman and Eding et al. recognize the importance of local populations as a valuable resource of genetic variation.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Carney, Dorothy V.; Baaklini, George Y.; Bodis, James R.; Rauser, Richard W.
1998-01-01
Ultrasonic velocity/time-of-flight imaging that uses back surface reflections to gauge volumetric material quality is highly suited for quantitative characterization of microstructural gradients including those due to pore fraction, density, fiber fraction, and chemical composition variations. However, a weakness of conventional pulse-echo ultrasonic velocity/time-of-flight imaging is that the image shows the effects of thickness as well as microstructural variations unless the part is uniformly thick. This limits this imaging method's usefulness in practical applications. Prior studies have described a pulse-echo time-of-flight-based ultrasonic imaging method that requires using a single transducer in combination with a reflector plate placed behind samples that eliminates the effect of thickness variation in the image. In those studies, this method was successful at isolating ultrasonic variations due to material microstructure in plate-like samples of silicon nitride, metal matrix composite, and polymer matrix composite. In this study, the method is engineered for inspection of more complex-shaped structures: those having (hollow) tubular/curved geometry. The experimental inspection technique and results are described as applied to (1) monolithic mullite ceramic and polymer matrix composite 'proof-of-concept' tubular structures that contain machined patches of various depths and (2) as-manufactured monolithic silicon nitride ceramic and silicon carbide/silicon carbide composite tubular structures that might be used in 'real world' applications.
Eigenvalue sensitivity analysis of planar frames with variable joint and support locations
NASA Technical Reports Server (NTRS)
Chuang, Ching H.; Hou, Gene J. W.
1991-01-01
Two sensitivity equations are derived in this study, based upon the continuum approach, for eigenvalue sensitivity analysis of planar frame structures with variable joint and support locations. A variational form of an eigenvalue equation is first derived in which all of the quantities are expressed in the local coordinate system attached to each member. The material derivative of this variational equation is then sought to account for changes in a member's length and orientation resulting from the perturbation of joint and support locations. Finally, eigenvalue sensitivity equations are formulated in either domain quantities (by the domain method) or boundary quantities (by the boundary method). It is concluded that the sensitivity equation derived by the boundary method is more efficient in computation but less accurate than that of the domain method. Nevertheless, both of them are superior to the conventional direct differentiation method and the finite difference method in terms of computational efficiency.
Application of the variational-asymptotical method to composite plates
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Lee, Bok W.; Atilgan, Ali R.
1992-01-01
A method is developed for the 3D analysis of laminated plate deformation which is an extension of a variational-asymptotical method by Atilgan and Hodges (1991). Both methods are based on the treatment of plate deformation by splitting the 3D analysis into linear through-the-thickness analysis and 2D plate analysis. Whereas the first technique tackles transverse shear deformation in the second asymptotical approximation, the present method simplifies its treatment and restricts it to the first approximation. Both analytical techniques are applied to the linear cylindrical bending problem, and the strain and stress distributions are derived and compared with those of the exact solution. The present theory provides more accurate results than those of the classical laminated-plate theory for the transverse displacement of 2-, 3-, and 4-layer cross-ply laminated plates. The method can give reliable estimates of the in-plane strain and displacement distributions.
Time-frequency domain SNR estimation and its application in seismic data processing
NASA Astrophysics Data System (ADS)
Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen
2014-08-01
Based on an approach estimating frequency domain signal-to-noise ratio (FSNR), we propose a method to evaluate time-frequency domain signal-to-noise ratio (TFSNR). This method adopts short-time Fourier transform (STFT) to estimate instantaneous power spectrum of signal and noise, and thus uses their ratio to compute TFSNR. Unlike FSNR describing the variation of SNR with frequency only, TFSNR depicts the variation of SNR with time and frequency, and thus better handles non-stationary seismic data. By considering TFSNR, we develop methods to improve the effects of inverse Q filtering and high frequency noise attenuation in seismic data processing. Inverse Q filtering considering TFSNR can better solve the problem of amplitude amplification of noise. The high frequency noise attenuation method considering TFSNR, different from other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples of synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
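A minimal sketch of a time-frequency SNR estimate via the short-time Fourier transform (here the noise spectrum comes from a separate noise-only record, an assumption made for illustration; the paper derives the signal and noise power spectra differently):

```python
import numpy as np
from scipy.signal import stft

def tf_snr(x, fs, noise, nperseg=128):
    """Time-frequency SNR sketch: ratio of the short-time power spectrum of
    the data to a time-averaged power spectrum of a noise estimate."""
    f, t, Zx = stft(x, fs=fs, nperseg=nperseg)
    _, _, Zn = stft(noise, fs=fs, nperseg=nperseg)
    noise_psd = np.mean(np.abs(Zn) ** 2, axis=1, keepdims=True)
    return f, t, np.abs(Zx) ** 2 / (noise_psd + 1e-12)

fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
# non-stationary test signal: a windowed chirp, mimicking a seismic wavelet
chirp = np.sin(2 * np.pi * (20 + 30 * t) * t) * np.exp(-((t - 1) / 0.4) ** 2)
noise = 0.3 * rng.normal(size=t.size)
f, tt, snr = tf_snr(chirp + noise, fs, noise)
print("peak time-frequency SNR:", snr.max())
```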
[Chemical variation in Aurantii Fructus before and after processing based on UHPLC-Q-TOF-MS].
Cheung, Tung-Kin; Li, Wei; Ho, Hing-Man; Liang, Zhi-Tao; Huang, Chuan-Qi
2016-06-01
To explore the processing mechanism of Aurantii Fructus decoction pieces used in Guangdong province and Hong Kong, the chemical variation between raw and processed Aurantii Fructus prepared with different methods was analysed based on UHPLC-Q-TOF-MS. The total ion chromatograms detected in positive and negative ion modes, and the ion peak area ratios before and after processing, were taken as variation indexes in the comparison. The results indicated that fermented Aurantii Fructus could produce three new ingredients, namely eriodictyol-7-glucoside, hesperetin-7-O-glucoside and 5-demethylnobiletin. At the same time, fermentation could significantly increase the content of the naringenin and hesperetin components, and could increase the content of such limonin derivatives as sudachinoid A, obacunoic acid, limonin and nomilinic acid. This suggests that the fermentation processing method of Aurantii Fructus decoction pieces used in Guangdong province and Hong Kong is of important significance for enhancing biological activity and bioavailability, and improving the clinical efficacy of Aurantii Fructus decoction pieces, and so is worth further protection and promotion. Copyright© by the Chinese Pharmaceutical Association.
NASA Technical Reports Server (NTRS)
Atluri, Satya N.; Shen, Shengping
2002-01-01
In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation (BIE) based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A Moving Least Squares (MLS) interpolation is selected to approximate the trial functions in this paper. Five BIE solution methods are introduced: the direct solution method; the displacement boundary-value problem; the traction boundary-value problem; the mixed boundary-value problem; and the boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.
Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin
2015-04-01
Cervical auscultation with high-resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any device based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise (DBSCAN) algorithm, a k-means-based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively, and the results were compared to a gold-standard measure of swallowing duration. Data were collected from 23 subjects who were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits, including a faster run time and more consistent performance between patients. All algorithms showed noticeable deviation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
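As an illustration of the DBSCAN-based idea (a generic sketch with synthetic data and hypothetical parameters, not the authors' clinical feature set): frame the vibration signal, keep high-energy frames, and let DBSCAN group them into contiguous activity periods.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def activity_segments(signal, fs, win=0.1, eps=0.3, min_samples=5):
    """Frame the signal into `win`-second windows, flag high-energy frames,
    and cluster flagged frame indices with DBSCAN; each cluster becomes one
    (start_s, end_s) activity segment."""
    step = int(win * fs)
    frames = signal[: len(signal) // step * step].reshape(-1, step)
    energy = np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    active = np.where(energy > energy.mean() + energy.std())[0]
    if active.size == 0:
        return []
    labels = DBSCAN(eps=eps / win, min_samples=min_samples).fit_predict(
        active[:, None].astype(float))
    segs = []
    for lab in set(labels) - {-1}:          # -1 marks DBSCAN noise points
        idx = active[labels == lab]
        segs.append((idx.min() * win, (idx.max() + 1) * win))
    return sorted(segs)

rng = np.random.default_rng(0)
fs, x = 1000, 0.05 * rng.normal(size=10000)   # 10 s of baseline vibration noise
x[2000:2600] += rng.normal(0, 1.0, 600)       # synthetic "swallow" bursts
x[7000:7900] += rng.normal(0, 0.8, 900)
print(activity_segments(x, fs))               # ~[(2.0, 2.6), (7.0, 7.9)]
```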
Reschovsky, James D; Hadley, Jack; Romano, Patrick S
2013-10-01
Control for area differences in population health (casemix adjustment) is necessary to measure geographic variations in medical spending. Studies use various casemix adjustment methods, resulting in very different geographic variation estimates. We study casemix adjustment methodological issues and evaluate alternative approaches using claims from 1.6 million Medicare beneficiaries in 60 representative communities. Two key casemix adjustment methods were evaluated: controlling for patient conditions obtained from diagnoses on claims, and controlling for the expenditures of those at the end of life. We failed to find evidence of bias in the former approach attributable to area differences in physician diagnostic patterns, as others have found, and found that the assumption underpinning the latter approach, that persons close to death are equally sick across areas, cannot be supported. Diagnosis-based approaches are more appropriate when current rather than prior-year diagnoses are used. Population health likely explains more than 75% to 85% of cost variations across fixed sets of areas.
USDA-ARS?s Scientific Manuscript database
A Multilocus Sequence Typing (MLST) method based on allelic variation of 7 chromosomal loci was developed for characterizing genotypes within the genus Bradyrhizobium. With the method 29 distinct multilocus genotypes (GTs) were identified among 191 culture collection soybean strains. The occupancy ...
ERIC Educational Resources Information Center
Hughes, Stephen W.
2005-01-01
A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
A variational multiscale method for particle-cloud tracking in turbomachinery flows
NASA Astrophysics Data System (ADS)
Corsini, A.; Rispoli, F.; Sheard, A. G.; Takizawa, K.; Tezduyar, T. E.; Venturini, P.
2014-11-01
We present a computational method for simulation of particle-laden flows in turbomachinery. The method is based on a stabilized finite element fluid mechanics formulation and a finite element particle-cloud tracking method. We focus on induced-draft fans used in process industries to extract exhaust gases in the form of a two-phase fluid with a dispersed solid phase. The particle-laden flow causes material wear on the fan blades, degrading their aerodynamic performance, and therefore accurate simulation of the flow would be essential in reliable computational turbomachinery analysis and design. The turbulent-flow nature of the problem is dealt with a Reynolds-Averaged Navier-Stokes model and Streamline-Upwind/Petrov-Galerkin/Pressure-Stabilizing/Petrov-Galerkin stabilization, the particle-cloud trajectories are calculated based on the flow field and closure models for the turbulence-particle interaction, and one-way dependence is assumed between the flow field and particle dynamics. We propose a closure model utilizing the scale separation feature of the variational multiscale method, and compare that to the closure utilizing the eddy viscosity model. We present computations for axial- and centrifugal-fan configurations, and compare the computed data to those obtained from experiments, analytical approaches, and other computational methods.
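The one-way coupling assumption, particles respond to the fluid field but not vice versa, can be sketched with Stokes-drag particle advection through a frozen flow field (a toy vortex stands in for the computed turbomachinery flow; this is not the authors' finite element formulation):

```python
import numpy as np

def u_fluid(x):
    """Frozen carrier flow: a 2-D solid-body vortex as a stand-in for the
    simulated fan flow field."""
    return np.stack([-x[..., 1], x[..., 0]], axis=-1)

tau_p, dt = 0.1, 1e-3                     # particle response time, time step
x = np.random.default_rng(0).uniform(-1, 1, (100, 2))   # initial particle cloud
v = np.zeros_like(x)
for _ in range(2000):                     # 2 s of one-way-coupled tracking
    v += dt * (u_fluid(x) - v) / tau_p    # Stokes drag: dv/dt = (u - v)/tau_p
    x += dt * v
print("mean cloud radius after 2 s:", np.linalg.norm(x, axis=1).mean())
```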
Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo
2017-05-16
Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are routinely ignored. In order to reveal the influence of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared on three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta were compared across the differently normalized data sets. The results indicated that choosing a normalization method is difficult, because the commonly accepted rules are easy to fulfill yet different normalization methods have unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
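Two of the compared normalizations, median normalization and probabilistic quotient normalization (PQN), are compact enough to sketch (toy data with simulated per-sample dilution; illustrative only):

```python
import numpy as np

def median_normalize(X):
    """Median normalisation: rescale each sample (row) so its median
    intensity matches the overall median."""
    med = np.median(X, axis=1, keepdims=True)
    return X / med * np.median(X)

def pqn_normalize(X, reference=None):
    """PQN sketch: divide each sample by the median of its feature-wise
    quotients to a reference spectrum (median spectrum by default; in
    practice often the median over QC samples)."""
    if reference is None:
        reference = np.median(X, axis=0)
    quotients = X / (reference + 1e-12)
    return X / np.median(quotients, axis=1, keepdims=True)

rng = np.random.default_rng(0)
profile = rng.lognormal(mean=1.0, sigma=0.5, size=50)      # shared metabolite profile
dilution = rng.uniform(0.5, 2.0, size=(20, 1))             # per-sample dilution factor
X = dilution * profile * rng.lognormal(0, 0.05, (20, 50))  # measured intensities
print("row medians after PQN:", np.round(np.median(pqn_normalize(X), axis=1), 2)[:5])
```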
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo
2015-11-01
This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of the non-linear behaviors of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method, we propose a numerical method that satisfies high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining the spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and is solved instead of the total energy equation to achieve the spurious-pressure-oscillation-free property with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
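The tabulated look-up idea can be sketched with a generic interpolation table (an ideal-gas placeholder stands in for REFPROP, which this sketch does not call; grid ranges are arbitrary):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Tabulate a property once on a (pressure, temperature) grid, then evaluate
# by interpolation at run time instead of calling the property library.
p = np.linspace(5e6, 10e6, 64)              # Pa, supercritical pressures
T = np.linspace(100.0, 400.0, 128)          # K
PP, TT = np.meshgrid(p, T, indexing="ij")
rho_table = PP / (188.9 * TT)               # placeholder: ideal-gas-like density

rho = RegularGridInterpolator((p, T), rho_table)   # piecewise-linear by default
print("rho(7 MPa, 150 K) ~", float(rho([[7e6, 150.0]])), "kg/m^3 (toy value)")
```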
Web-Based Learning in a Geometry Course
ERIC Educational Resources Information Center
Chan, Hsungrow; Tsai, Pengheng; Huang, Tien-Yu
2006-01-01
This study concerns the application of Web-based learning with learner-controlled instructional materials in a geometry course. The experimental group learned in a Web-based learning environment, and the control group learned in a classroom. We observed that the learning method accounted for 19.1% of the total variation in learning effect in the 3rd grade and…
Airborne data measurement system errors reduction through state estimation and control optimization
NASA Astrophysics Data System (ADS)
Sebryakov, G. G.; Muzhichek, S. M.; Pavlov, V. I.; Ermolin, O. V.; Skrinnikov, A. A.
2018-02-01
The paper discusses the problem of reducing airborne data measurement system errors through state estimation and control optimization. The proposed approaches are based on the methods of experiment design and the theory of systems with random abrupt structure variation. The paper considers various control criteria as applied to an aircraft data measurement system. The physics behind each criterion is explained, and the mathematical description and the sequence of steps for applying each criterion are given. A formula is given for the posterior estimation of the airborne data measurement system state vector in systems with structure variations.
Wong, Kin-Yiu; Gao, Jiali
2008-09-09
In this paper, we describe an automated integration-free path-integral (AIF-PI) method, based on Kleinert's variational perturbation (KP) theory, to treat internuclear quantum-statistical effects in molecular systems. We have developed an analytical method to obtain the centroid potential as a function of the variational parameter in the KP theory, which avoids the numerical difficulties of path-integral Monte Carlo or molecular dynamics simulations, especially in the zero-temperature limit. Consequently, variational calculations using the KP theory can be carried out efficiently beyond the first order, i.e., beyond the Giachetti-Tognetti-Feynman-Kleinert variational approach, for realistic chemical applications. By making use of the approximation of independent instantaneous normal modes (INM), the AIF-PI method can readily be applied to many-body systems. Previously, we have shown that in the INM approximation, the AIF-PI method is accurate for computing the quantum partition function of a water molecule (3 degrees of freedom) and the quantum correction factor for the collinear H(3) reaction rate (2 degrees of freedom). In this work, the accuracy and properties of the KP theory are further investigated by using the first three orders of perturbation on an asymmetric double-well potential, the bond vibrations of H(2), HF, and HCl represented by the Morse potential, and a proton-transfer barrier modeled by the Eckart potential. The zero-point energy, quantum partition function, and tunneling factor for these systems have been determined and are found to be in excellent agreement with the exact quantum results. Using our new analytical results at the zero-temperature limit, we show that the minimum value of the computed centroid potential in the KP theory is in excellent agreement with the ground-state energy (zero-point energy), and that the position of the centroid potential minimum is the expectation value of the particle position in wave mechanics. The fast-convergence property of the KP theory is further examined in comparison with results from the traditional Rayleigh-Ritz variational approach and Rayleigh-Schrödinger perturbation theory in wave mechanics. The present method can be used for thermodynamic and quantum dynamic calculations, including systematic determination of the exact zero-point energy and the study of kinetic isotope effects for chemical reactions in solution and in enzymes.
40 CFR 98.147 - Records that must be retained.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (metric tons). (3) Data on carbonate-based mineral mass fractions provided by the raw material supplier... of this subpart. (4) Results of all tests used to verify the carbonate-based mineral mass fraction...(s), and any variations of the methods, used in the analyses. (iii) Mass fraction of each sample...
40 CFR 98.147 - Records that must be retained.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (metric tons). (3) Data on carbonate-based mineral mass fractions provided by the raw material supplier... of this subpart. (4) Results of all tests used to verify the carbonate-based mineral mass fraction...(s), and any variations of the methods, used in the analyses. (iii) Mass fraction of each sample...
40 CFR 98.147 - Records that must be retained.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (metric tons). (3) Data on carbonate-based mineral mass fractions provided by the raw material supplier... of this subpart. (4) Results of all tests used to verify the carbonate-based mineral mass fraction...(s), and any variations of the methods, used in the analyses. (iii) Mass fraction of each sample...
Strength of single-pole utility structures
Ronald W. Wolfe
2006-01-01
This section presents three basic methods for deriving and documenting Rn as an LTL value along with the coefficient of variation (COVR) for single-pole structures. These include the following: 1. An empirical analysis based primarily on tests of full-sized poles. 2. A theoretical analysis of mechanics-based models used in...
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing
Yang, Changju; Kim, Hyongsuk
2016-01-01
A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation under a voltage input is generally a nonlinear function of time, and linearizing the memristance variation with respect to time is very important for ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities, and it linearizes the variation of memristance through the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming in an anti-serial architecture is investigated, and the memristor bridge synapse, which is built from two sets of anti-serial memristor pairs, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186
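The linearization mechanism described above lends itself to a quick simulation. The sketch below uses the HP-style linear ion-drift memristor model with two opposite-polarity devices in series; all parameter values are illustrative assumptions, not the paper's settings. Because the two state changes cancel in the series total, the series current stays constant under a constant programming voltage, so each individual memristance drifts linearly in time.

```python
import numpy as np

# Linear ion-drift model: M = Ron*x + Roff*(1 - x), with
# dx/dt = +k*i for one polarity and dx/dt = -k*i for the other.
Ron, Roff, k = 100.0, 16e3, 1e4    # illustrative parameters
V, dt, steps = 1.0, 1e-5, 2000     # constant programming voltage

x1, x2 = 0.1, 0.9                  # initial states of the two memristors
m1_hist = []
for _ in range(steps):
    M1 = Ron * x1 + Roff * (1 - x1)
    M2 = Ron * x2 + Roff * (1 - x2)
    i = V / (M1 + M2)              # series current
    x1 = np.clip(x1 + k * i * dt, 0.0, 1.0)
    x2 = np.clip(x2 - k * i * dt, 0.0, 1.0)
    m1_hist.append(M1)

# While neither device saturates, M1 changes by the same amount every
# step, i.e. its variation in time is linear.
print("spread of per-step change:", np.ptp(np.diff(m1_hist)))
```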
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Wiscombe, W. J.
1993-01-01
A method for detecting cirrus clouds in terms of brightness temperature differences between narrow bands at 8, 11, and 12 μm has been proposed by Ackerman et al. (1990). In this method, the variation of emissivity with wavelength for different surface targets was not taken into consideration. Based on state-of-the-art laboratory measurements of reflectance spectra of terrestrial materials by Salisbury and D'Aria (1992), we have found that the brightness temperature differences between the 8 and 11 μm bands for soils, rocks and minerals, and dry vegetation can vary between approximately -8 K and +8 K due solely to surface emissivity variations. We conclude that although the method of Ackerman et al. is useful for detecting cirrus clouds over areas covered by green vegetation, water, and ice, it is less effective for detecting cirrus clouds over areas covered by bare soils, rocks and minerals, and dry vegetation. In addition, we recommend that, in the future, the variation of surface emissivity with wavelength should be taken into account in algorithms for retrieving surface temperatures and low-level atmospheric temperature and water vapor profiles.
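For illustration, a naive trispectral screen based on such brightness temperature differences might look like the sketch below; the thresholds are placeholder values, not those of Ackerman et al.

```python
import numpy as np

def cirrus_flag(t8, t11, t12, btd_8_11_min=1.0, btd_11_12_min=1.5):
    """Flag pixels whose 8-11 and 11-12 micrometre brightness temperature
    differences (K) both exceed placeholder thresholds.  Over bare soil,
    rock, or dry vegetation, the 8-11 difference alone can be shifted by
    up to about +/-8 K by surface emissivity, which is the caveat the
    abstract raises."""
    return ((t8 - t11) > btd_8_11_min) & ((t11 - t12) > btd_11_12_min)

t8 = np.array([265.0, 280.0])
t11 = np.array([262.0, 281.0])
t12 = np.array([259.5, 280.8])
print(cirrus_flag(t8, t11, t12))   # [ True False]
```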
Quantification of intensity variations in functional MR images using rotated principal components
NASA Astrophysics Data System (ADS)
Backfrieder, W.; Baumgartner, R.; Sámal, M.; Moser, E.; Bergmann, H.
1996-08-01
In functional MRI (fMRI), the changes in cerebral haemodynamics related to stimulated neural brain activity are measured using standard clinical MR equipment. Small intensity variations in fMRI data have to be detected and distinguished from non-neural effects by careful image analysis. Based on multivariate statistics we describe an algorithm involving oblique rotation of the most significant principal components for an estimation of the temporal and spatial distribution of the stimulated neural activity over the whole image matrix. This algorithm takes advantage of strong local signal variations. A mathematical phantom was designed to generate simulated data for the evaluation of the method. In simulation experiments, the potential of the method to quantify small intensity changes, especially when processing data sets containing multiple sources of signal variations, was demonstrated. In vivo fMRI data collected in both visual and motor stimulation experiments were analysed, showing a proper location of the activated cortical regions within well known neural centres and an accurate extraction of the activation time profile. The suggested method yields accurate absolute quantification of in vivo brain activity without the need of extensive prior knowledge and user interaction.
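The rotation step can be illustrated with the textbook varimax algorithm below; note that this is an orthogonal rotation used as a stand-in, whereas the paper applies an oblique rotation, and the random data are a placeholder for an fMRI time-series matrix.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Varimax rotation of a (features x components) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion.
        G = L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0))
        u, s, vt = np.linalg.svd(loadings.T @ G)
        R = u @ vt
        crit = np.sum(s)
        if crit - crit_old < tol:
            break
        crit_old = crit
    return loadings @ R

# Rotate the two leading principal components of a toy data matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
X -= X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
print(varimax(Vt[:2].T).round(3))   # rotated 10 x 2 loadings
```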
Analysis of variability in additive manufactured open cell porous structures.
Evans, Sam; Jones, Eric; Fox, Pete; Sutcliffe, Chris
2017-06-01
In this article, a novel method of analysing the build consistency of additively manufactured open cell porous structures is presented. Conventionally, methods such as micro computed tomography or scanning electron microscopy imaging have been applied to the measurement of geometric properties of porous material; however, high costs and low speeds make them unsuitable for analysing high volumes of components. Recent advances in the image-based analysis of open cell structures have opened up the possibility of quantifying variation in the manufacturing of porous material. Here, a photogrammetric method of measurement, employing image analysis to extract values for geometric properties, is used to investigate the variation between identically designed porous samples, measuring changes in material thickness and pore size, both intra- and inter-build. Following the measurement of 125 samples, intra-build material thickness showed variation of ±12%, and pore size ±4%, of the mean measured values across five builds. Inter-build material thickness and pore size showed mean ranges higher than those of intra-build, ±16% and ±6% of the mean material thickness and pore size, respectively. The acquired measurements established baseline variation values and demonstrated techniques suitable for tracking build deviation and inspecting additively manufactured porous structures to indicate unwanted process fluctuations.
NASA Astrophysics Data System (ADS)
Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming
2017-07-01
Over the years, X-ray computed tomography (CT) has been successfully used in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the reconstructed image is polluted by severe metal artifacts, which can compromise the diagnosis. In this work, we propose a dynamic re-weighted total variation (DRWTV) technique combined with the statistical iterative reconstruction (SIR) method to reduce these artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. Besides, the DRWTV can suppress artifacts and noise, and the SIR convergence speed is also accelerated. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset: a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.
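The re-weighting idea behind RWTV-style priors can be sketched in one dimension with iteratively re-weighted least squares: each pass recomputes gradient-dependent weights so that edges are penalized less. This is only an illustration of the re-weighting principle; the paper's DRWTV prior, embedded in statistical iterative CT reconstruction, is more involved.

```python
import numpy as np

def rwtv_denoise_1d(y, lam=1.0, eps=1e-3, n_iter=20):
    """Iteratively re-weighted TV denoising of a 1-D signal.
    Each pass solves (I + lam * D^T W D) u = y with weights
    W = diag(1 / (|D u| + eps)) recomputed from the current estimate,
    so large gradients (edges) are penalized less."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference operator
    u = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ u) + eps)     # re-weighting step
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, y)
    return u

# Piecewise-constant signal with noise: plateaus flatten, jumps survive.
rng = np.random.default_rng(2)
y = np.repeat([0.0, 1.0, 0.3], 50) + 0.05 * rng.normal(size=150)
print(np.round(rwtv_denoise_1d(y)[::25], 2))
```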
An intercomparison for NIRS and NYU passive thoron gas detectors at NYU.
Sorimachi, Atsuyuki; Ishikawa, Tetsuo; Tokonami, Shinji; Chittaporn, Passaporn; Harley, Naomi H
2012-04-01
An intercomparison of thoron ((220)Rn) measurement was carried out between the National Institute of Radiological Sciences, Japan (NIRS), and the New York University School of Medicine, USA (NYU). The measurements of (220)Rn concentration at NIRS and NYU were performed using the scintillation cell method and the two-filter method, respectively, as the standard measurement method. Three types of alpha track detectors based on a passive radon ((222)Rn)-(220)Rn discriminative measurement technique were used: Raduet and Radopot detectors were used at NIRS, and four-leaf detectors were used at NYU. In this study, the authors evaluated the variation in (220)Rn concentration with respect to exposure run, measurement method, and exposure chamber. The detectors were exposed to (220)Rn gas at approximately 15 kBq m(-3) for periods from 0.75 to 3 d. As a result, the variation of each measurement method among these exposure runs was comparable to or less than that of the two-filter method. Agreement between the standard measurement methods of NIRS and NYU was observed to be about 10%, as was the case with the passive detectors. The Raduet detector showed a large variation in detection response between the NIRS and NYU chambers, which could be related to differing traceability.
NASA Astrophysics Data System (ADS)
Singh, Randhir; Das, Nilima; Kumar, Jitendra
2017-06-01
An effective analytical technique is proposed for the solution of the Lane-Emden equations. The proposed technique is based on the variational iteration method (VIM) and the convergence control parameter h. In order to avoid solving a sequence of nonlinear algebraic equations or complicated integrals for the derivation of the unknown constants, the boundary conditions are used before designing the recursive scheme for the solution. Series solutions are found which converge rapidly to the exact solution. Convergence analysis and error bounds are discussed. The accuracy and applicability of the method are examined by solving three singular problems: (i) the nonlinear Poisson-Boltzmann equation, (ii) the distribution of heat sources in the human head, and (iii) a second-kind Lane-Emden equation.
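For orientation, the Lane-Emden equation in its standard singular form and the generic VIM correction functional read as follows; the particular Lagrange multiplier λ(t) and the role of the convergence control parameter h follow the paper and are not reproduced here.

```latex
% Lane-Emden-type equation with the generic VIM correction functional;
% \lambda(t) is the Lagrange multiplier (found via restricted variation)
% and \tilde{u}_n denotes the restricted (non-varied) nonlinear term.
\begin{align}
  u''(x) + \frac{2}{x}\,u'(x) + f\bigl(x, u(x)\bigr) &= g(x),
  \qquad u(0) = a, \quad u'(0) = 0, \\
  u_{n+1}(x) &= u_n(x) + \int_0^x \lambda(t)\,
    \Bigl[\, u_n''(t) + \tfrac{2}{t}\, u_n'(t)
    + f\bigl(t, \tilde{u}_n(t)\bigr) - g(t) \Bigr]\, dt .
\end{align}
```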
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop our proposed iterative solutions for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments are performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD algorithm and the Wiener-IBD algorithm, the GMG measures (for a real AO image) of our algorithm are increased by 36.92% and 27.44%, respectively, the computation times are decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
Oran, Omer Faruk; Ider, Yusuf Ziya
2012-08-21
Most algorithms for magnetic resonance electrical impedance tomography (MREIT) concentrate on reconstructing the internal conductivity distribution of a conductive object from the Laplacian of only one component of the magnetic flux density (∇²B(z)) generated by the internal current distribution. In this study, a new algorithm is proposed to solve this ∇²B(z)-based MREIT problem which is mathematically formulated as the steady-state scalar pure convection equation. Numerical methods developed for the solution of the more general convection-diffusion equation are utilized. It is known that the solution of the pure convection equation is numerically unstable if sharp variations of the field variable (in this case conductivity) exist or if there are inconsistent boundary conditions. Various stabilization techniques, based on introducing artificial diffusion, are developed to handle such cases and in this study the streamline upwind Petrov-Galerkin (SUPG) stabilization method is incorporated into the Galerkin weighted residual finite element method (FEM) to numerically solve the MREIT problem. The proposed algorithm is tested with simulated and also experimental data from phantoms. Successful conductivity reconstructions are obtained by solving the related convection equation using the Galerkin weighted residual FEM when there are no sharp variations in the actual conductivity distribution. However, when there is noise in the magnetic flux density data or when there are sharp variations in conductivity, it is found that SUPG stabilization is beneficial.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2003-01-01
A variable order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
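The key property, exactness at arbitrary step size when the system matrix is constant, is easy to demonstrate. Below is a minimal sketch for a linear time-invariant oscillator; the paper's method additionally handles time-variant and nonlinear systems, which this sketch does not attempt.

```python
import numpy as np
from scipy.linalg import expm

# Undamped oscillator x'' = -w^2 x as a first-order system z' = A z,
# z = [x, x'].  For constant A, the state transition matrix
# Phi = expm(A*dt) advances the state exactly for ANY step size.
w = 2.0
A = np.array([[0.0, 1.0], [-w**2, 0.0]])
dt = 0.5                        # deliberately large step
Phi = expm(A * dt)

z = np.array([1.0, 0.0])        # x(0) = 1, x'(0) = 0
for k in range(1, 5):
    z = Phi @ z
    print(f"t={k*dt:.1f}  x={z[0]:+.6f}  exact={np.cos(w*k*dt):+.6f}")
```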
NASA Astrophysics Data System (ADS)
Bouttier, Pierre-Antoine; Brankart, Jean-Michel; Candille, Guillem; Vidard, Arthur; Blayo, Eric; Verron, Jacques; Brasseur, Pierre
2015-04-01
In this project, the response of a variational data assimilation system based on NEMO and its tangent linear and adjoint model is investigated by applying a 4DVAR algorithm to a North Atlantic model at eddy-permitting resolution. The assimilated data consist of the Jason-2 and SARAL/AltiKa datasets collected during the 2013-2014 period. The main objective is to explore the robustness of the 4DVAR algorithm in the context of a realistic turbulent oceanic circulation at mid-latitude constrained by multi-satellite altimetry missions. This work relies on two previous studies. First, a study with similar objectives was performed based on an academic double-gyre turbulent model and synthetic SARAL/AltiKa data, using the same DA experimental framework. Its main goal was to investigate the impact of turbulence on the performance of variational DA methods. The comparison with this previous work will bring to light the methodological and physical issues encountered by variational DA algorithms in a realistic context at a similar, eddy-permitting spatial resolution. We have also demonstrated how a dataset mimicking future SWOT observations improves incremental 4DVAR performance at eddy-permitting resolution. Then, in the context of the OSTST and FP7 SANGOMA projects, an ensemble DA experiment based on the same model and observational datasets has been realized (see poster by Brasseur et al.). This work offers the opportunity to compare the efficiency, pros, and cons of both DA methods in the context of Ka-band altimetric data, at a spatial resolution commonly used today for research and operational applications. In this poster we will present the validation plan proposed to evaluate the skill of the variational experiment vs. ensemble assimilation experiments covering the same period, using independent observations (e.g. from the CryoSat-2 mission).
NASA Astrophysics Data System (ADS)
Gallup, G. A.; Gerratt, J.
1985-09-01
The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been directly calculated from total energies. But such a method has definite limitations as to the size of systems which can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure includes also the optimization of excited orbitals, and an approximation of atomic integrals and Hamiltonian matrix elements.
A variable-order laminated plate theory based on the variational-asymptotical method
NASA Technical Reports Server (NTRS)
Lee, Bok W.; Sutyrin, Vladislav G.; Hodges, Dewey H.
1993-01-01
The variational-asymptotical method is a mathematical technique by which the three-dimensional analysis of laminated plate deformation can be split into a linear, one-dimensional, through-the-thickness analysis and a nonlinear, two-dimensional, plate analysis. The elastic constants used in the plate analysis are obtained from the through-the-thickness analysis, along with approximate, closed-form three-dimensional distributions of displacement, strain, and stress. In this paper, a theory based on this technique is developed which is capable of approximating three-dimensional elasticity to any accuracy desired. The asymptotical method allows for the approximation of the through-the-thickness behavior in terms of the eigenfunctions of a certain Sturm-Liouville problem associated with the thickness coordinate. These eigenfunctions contain all the necessary information about the nonhomogeneities along the thickness coordinate of the plate and thus possess the appropriate discontinuities in the derivatives of displacement. The theory is presented in this paper along with numerical results for the eigenfunctions of various laminated plates.
Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun
2016-01-01
An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260
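For reference, the F-measure used to rank the callers combines precision and recall as their harmonic mean; the counts below are made-up values, not the paper's.

```python
def f_measure(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall and F1 from true/false positive and false
    negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(f_measure(tp=80, fp=20, fn=40))   # (0.8, 0.667, 0.727)
```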
A novel approach of ensuring layout regularity correct by construction in advanced technologies
NASA Astrophysics Data System (ADS)
Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic
2017-03-01
In advanced technology nodes, layout regularity has become a mandatory prerequisite to create robust designs that are less sensitive to variations in the manufacturing process, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full-custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on-the-fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments, and pitches used in the design for any given metal layer. The regularity index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for the 28nm and 40nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up to 5nm reduction in the PV band.
Highly accurate symplectic element based on two variational principles
NASA Astrophysics Data System (ADS)
Qing, Guanghui; Tian, Jia
2018-02-01
To meet the stability requirements on numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, however, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of the in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy calculation by a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its better accuracy of stresses.
Sewer infiltration/inflow: long-term monitoring based on diurnal variation of pollutant mass flux.
Bares, V; Stránský, D; Sýkora, P
2009-01-01
The paper deals with a method for quantifying infiltrating groundwater based on the variation of the diurnal pollutant load and continuous water quality and quantity monitoring. Although the method offers the potential to separate particular components of the wastewater hydrograph, several aspects of the method need to be discussed. The paper therefore investigates the cost-effectiveness, the relevance of the pollutant load from surface waters (groundwater), and the influence of the measurement time step. These aspects were studied in an experimental catchment of the Prague sewer system, Czech Republic, over a three-month period. The results indicate a high contribution of parasitic waters to the minimum night discharge. Taking into account the uncertainty of the results and the time-consuming maintenance of the sensor, the principal advantages of the method are evaluated. The study introduces the promising potential of the discussed measuring concept for quantifying groundwater infiltrating into the sewer system. It is shown that the conventional approach is sufficient and cost-effective even in those catchments where a significant contribution of foul sewage to the night minima would have been assumed.
Blankena, Roos; Kleinloog, Rachel; Verweij, Bon H.; van Ooij, Pim; ten Haken, Bennie; Luijten, Peter R.; Rinkel, Gabriel J.E.; Zwanenburg, Jaco J.M.
2016-01-01
Purpose: To develop a method for semi-quantitative wall thickness assessment on in vivo 7.0 tesla (7T) MRI images of intracranial aneurysms for studying the relation between apparent aneurysm wall thickness and wall shear stress. Materials and Methods: Wall thickness was analyzed in 11 unruptured aneurysms in 9 patients, who underwent 7T MRI with a TSE-based vessel wall sequence (0.8 mm isotropic resolution). A custom analysis program determined the in vivo aneurysm wall intensities, which were normalized to the signal of nearby brain tissue and used as a measure of apparent wall thickness (AWT). Spatial wall thickness variation was determined as the interquartile range in AWT (the middle 50% of the AWT range). Wall shear stress was determined using phase contrast MRI (0.5 mm isotropic resolution). We performed visual and statistical comparisons (Pearson's correlation) to study the relation between wall thickness and wall shear stress. Results: 3D colored AWT maps of the aneurysms showed spatial AWT variation, which ranged from 0.07 to 0.53, with a mean variation of 0.22 (a variation of 1.0 roughly corresponds to a wall thickness variation of one voxel, 0.8 mm). In all aneurysms, AWT was inversely related to WSS (mean correlation coefficient -0.35, P<0.05). Conclusions: A method was developed to measure wall thickness semi-quantitatively using 7T MRI. An inverse correlation between wall shear stress and AWT was determined. In future studies, this non-invasive method can be used to assess spatial wall thickness variation in relation to pathophysiologic processes such as aneurysm growth and rupture. PMID:26892986
1989-11-01
considerable promise is a variation of the familiar Lempel-Ziv adaptive data compression scheme that permits a straightforward mapping to hardware ... types of data. The UNIX "compress" implementation is based upon Terry Welch's 1984 variation of the Lempel-Ziv method (LZW). One flaw lies in the fact ... or more; it must effectively compress all types of data (i.e., the algorithm must be universal); the implementation must be contained within a small
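For readers unfamiliar with the algorithm referenced above, here is a minimal LZW compressor in Python; it illustrates the dictionary-based scheme behind UNIX "compress", not the hardware mapping the report discusses.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Minimal LZW: emit the dictionary code of the longest known prefix,
    then extend the dictionary with that prefix plus the next byte."""
    table = {bytes([i]): i for i in range(256)}   # single-byte seed codes
    w = b""
    out = []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc                    # keep extending the current match
        else:
            out.append(table[w])      # emit code for the longest match
            table[wc] = len(table)    # register the new phrase
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

msg = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(msg)
print(f"{len(msg)} bytes -> {len(codes)} codes: {codes}")
```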
NASA Astrophysics Data System (ADS)
Carlo Ponzo, Felice; Ditommaso, Rocco
2015-04-01
This study presents an innovative strategy for the automatic evaluation of the variable fundamental frequency and related damping factor of nonlinear structures during strong motion phases. Most methods for damage detection are based on assessing variations in the dynamic parameters characterizing the monitored structure. A crucial aspect of these methods is the automatic and accurate estimation of both structural eigen-frequencies and related damping factors, including during nonlinear behaviour. A new method, named STIRF (Short-Time Impulse Response Function), based on nonlinear interferometric analysis combined with the Fourier Transform (FT), is proposed here to allow scientists and engineers to characterize frequency and damping variations of a monitored structure. The STIRF approach helps to overcome some limitations derived from the use of techniques based on the simple Fourier Transform. These latter techniques provide good results when the response of the monitored system is stationary, but fail when the system exhibits a non-stationary, time-varying behaviour: non-stationary input, soil-foundation interaction and/or interaction with adjacent structures can expose the inadequacy of classic techniques in analysing the nonlinear and/or non-stationary behaviour of structures. Using the proposed approach, it is possible to improve some of the existing methods for automatic damage detection, providing stable results even during the strong motion phase. Results are consistent with those expected when compared with other techniques. The main advantage of the proposed approach (STIRF) for Structural Health Monitoring lies in the simplicity of interpreting the nonlinear variations of the fundamental frequency and the related equivalent viscous damping factor. The proposed methodology has been tested on both numerical and experimental models, including data retrieved from shaking table tests. Based on the results provided in this study, the methodology appears able to evaluate fast variations (over time) of the dynamic parameters of a generic reinforced concrete framed structure. Further analyses are necessary to better calibrate the length of the moving time-window (in order to minimize spurious frequencies within each Interferometric Response Function evaluated on both weak and strong motion phases) and to verify the possibility of using the STIRF to analyse the nonlinear behaviour of general systems. Acknowledgements: This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2014 - RS4 ''Seismic observatory of structures and health monitoring''. References: R. Ditommaso, F.C. Ponzo (2015). Automatic evaluation of the fundamental frequency variations and related damping factor of reinforced concrete framed structures using the Short Time Impulse Response Function (STIRF). Engineering Structures, 82 (2015), 104-112. http://dx.doi.org/10.1016/j.engstruct.2014.10.023.
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of the facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is then individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine the scores from the multiple complementary classifiers, a log likelihood ratio-based score fusion scheme is applied. The proposed system is evaluated using the experimental protocols of the face recognition grand challenge (FRGC), a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapses.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2002-01-01
A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
Doets, Esmée L; Cavelaars, Adrienne E J M; Dhonukshe-Rutten, Rosalie A M; van 't Veer, Pieter; de Groot, Lisette C P G M
2012-05-01
To signal key issues for harmonising approaches for establishing micronutrient recommendations by explaining observed variation in recommended intakes of folate, vitamin B12, Fe and Zn for adults and elderly people. We explored differences in recommended intakes of folate, vitamin B12, Fe and Zn for adults between nine reports on micronutrient recommendations. Approaches used for setting recommendations were compared as well as eminence-based decisions regarding the selection of health indicators indicating adequacy of intakes and the consulted evidence base. In nearly all reports, recommendations were based on the average nutrient requirement. Variation in recommended folate intakes (200-400 μg/d) was related to differences in the consulted evidence base, whereas variation in vitamin B12 recommendations (1.4-3.0 μg/d) was due to the selection of different CV (10-20 %) and health indicators (maintenance of haematological status or basal losses). Variation in recommended Fe intakes (men 8-10 mg/d, premenopausal women 14.8-19.6 mg/d, postmenopausal women 7.5-10.0 mg/d) was explained by different assumed reference weights and bioavailability factors (10-18 %). Variation in Zn recommendations (men 7-14 mg/d, women 4.9-9.0 mg/d) was also explained by different bioavailability factors (24-48 %) as well as differences in the consulted evidence base. For the harmonisation of approaches for setting recommended intakes of folate, vitamin B12, Fe and Zn across European countries, standardised methods are needed to (i) select health indicators and define adequate biomarker concentrations, (ii) make assumptions about inter-individual variation in requirements, (iii) derive bioavailability factors and (iv) collate, select, interpret and integrate evidence on requirements.
Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong
2013-10-01
A processor-implemented method for determining aging of a processing unit in a processor the method comprising: calculating an effective aging profile for the processing unit wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, architectural characteristics and redundancy data, and the operating conditions data for the processing unit.
Detection of nucleic acid sequences by invader-directed cleavage
Brow, Mary Ann D.; Hall, Jeff Steven Grotelueschen; Lyamichev, Victor; Olive, David Michael; Prudent, James Robert
1999-01-01
The present invention relates to means for the detection and characterization of nucleic acid sequences, as well as variations in nucleic acid sequences. The present invention also relates to methods for forming a nucleic acid cleavage structure on a target sequence and cleaving the nucleic acid cleavage structure in a site-specific manner. The 5' nuclease activity of a variety of enzymes is used to cleave the target-dependent cleavage structure, thereby indicating the presence of specific nucleic acid sequences or specific variations thereof. The present invention further relates to methods and devices for the separation of nucleic acid molecules based on charge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pupyshev, V.I.; Scherbinin, A.V.; Stepanov, N.F.
1997-11-01
The approach based on the multiplicative form of a trial wave function within the framework of the variational method, initially proposed by Kirkwood and Buckingham, is shown to be an effective analytical tool in the quantum mechanical study of atoms and molecules. As an example, an elementary proof is given of the fact that the ground-state energy of a molecular system placed in a box with walls of finite height tends to the corresponding eigenvalue of the Dirichlet boundary value problem as the height of the walls grows to infinity. © 1997 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Durán-Flórez, F.; Caicedo, L. C.; Gonzalez, J. E.
2018-04-01
In quantum mechanics it is very difficult to obtain exact solutions; therefore, it is necessary to resort to tools and methods that facilitate the calculation of solutions of these systems. One of these methods is the variational method, which consists in proposing a wave function that depends on several parameters that are adjusted to approach the exact solution. Authors in the past have performed calculations applying this method using exponential and Gaussian orbital functions with linear and quadratic correlation factors. In this paper, a Gaussian function with a linear correlation factor is proposed for the calculation of the binding energy of a D(-) impurity centered in a quantum dot of radius r; the Gaussian function depends on the radius of the quantum dot.
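The flavor of the calculation can be shown with the textbook analogue: a single Gaussian trial function for the hydrogen atom in atomic units, where the energy expectation has the closed form E(a) = 3a/2 - 2*sqrt(2a/pi). This is only an illustration of the variational principle, not the paper's D(-)-in-a-quantum-dot calculation with a correlation factor.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def energy(a):
    """<H> for the normalized Gaussian trial function exp(-a r^2) applied
    to the hydrogen atom (atomic units): kinetic term 3a/2, potential
    term -2*sqrt(2a/pi)."""
    return 1.5 * a - 2.0 * np.sqrt(2.0 * a / np.pi)

res = minimize_scalar(energy, bounds=(1e-3, 5.0), method="bounded")
print(f"optimal a = {res.x:.4f}, E = {res.fun:.4f} Ha (exact: -0.5000 Ha)")
# a -> 8/(9*pi) ~ 0.2829 and E -> -4/(3*pi) ~ -0.4244: the variational
# estimate lies above the exact ground state, as the principle guarantees.
```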
[Hydrologic variability and sensitivity based on Hurst coefficient and Bartels statistic].
Lei, Xu; Xie, Ping; Wu, Zi Yi; Sang, Yan Fang; Zhao, Jiang Yan; Li, Bin Bin
2018-04-01
Due to global climate change and frequent human activities in recent years, the purely stochastic components of hydrological sequences are mixed with one or several variation ingredients, including jumps, trends, periods, and dependency. There is an urgent need to clarify which indices should be used to quantify the degree of this variability. In this study, we defined hydrological variability based on the Hurst coefficient and the Bartels statistic, and used Monte Carlo statistical tests to analyze their sensitivity to different variants. When the hydrological sequence had jump or trend variation, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the Hurst coefficient being more sensitive to weak jump or trend variation. When the sequence had a periodic component, only the Bartels statistic could detect the mutation of the sequence. When the sequence had a dependency, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the latter able to detect weaker dependent variations. For all four variation types, both the Hurst variability and the Bartels variability increased as the variation range increased. Thus, they can be used to measure the variation intensity of a hydrological sequence. We analyzed the temperature series of different weather stations in the Lancang River basin. The results showed that the temperature at all stations exhibited an upward trend or jump, indicating that the entire basin has experienced warming in recent years, and that the temperature variability in the upper and lower reaches was much higher. This case study demonstrates the practicability of the proposed method.
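As a point of reference for the first of the two indices, the Hurst coefficient can be estimated with classical rescaled-range (R/S) analysis, sketched below; the paper's variability definition built on it, and the Bartels rank statistic, are not reproduced here.

```python
import numpy as np

def hurst_rs(x, min_n=8):
    """Hurst coefficient by rescaled-range analysis: the slope of
    log(mean R/S) versus log(window size n)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    sizes = [n for n in (2**k for k in range(3, 20)) if min_n <= n <= N // 2]
    log_n, log_rs = [], []
    for n in sizes:
        rs = []
        for start in range(0, N - n + 1, n):     # non-overlapping windows
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())    # cumulative deviations
            if seg.std() > 0:
                rs.append((dev.max() - dev.min()) / seg.std())
        if rs:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs)))
    H, _ = np.polyfit(log_n, log_rs, 1)
    return H

rng = np.random.default_rng(3)
print(hurst_rs(rng.normal(size=4096)))             # ~0.5 for white noise
print(hurst_rs(np.cumsum(rng.normal(size=4096))))  # ~1 for a random walk
```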
Face recognition via edge-based Gabor feature representation for plastic surgery-altered images
NASA Astrophysics Data System (ADS)
Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.
2014-12-01
Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which depends on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components carry useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and Labeled Faces in the Wild (LFW) databases, which present illumination and expression problems, and on the plastic surgery database, which presents texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems, and outperforms the existing plastic surgery face recognition methods reported in the literature.
Simulated linear test applied to quantitative proteomics.
Pham, T V; Jimenez, C R
2016-09-01
Omics studies aim to find significant changes due to biological or functional perturbation. However, gene and protein expression profiling experiments contain inherent technical variation. In discovery proteomics studies, where the number of samples is typically small, technical variation plays an important role because it contributes considerably to the observed variation. Previous methods place both technical and biological variation in tightly integrated mathematical models that are difficult to adapt to different technological platforms. Our aim is to derive a statistical framework that allows the inclusion of a wide range of technical variability. We introduce a new method called the simulated linear test, or s-test, that is easy to implement and easy to adapt to different models of technical variation. It generates virtual data points from the observed values according to a pre-defined technical distribution and subsequently employs linear modeling for significance analysis. We demonstrate the flexibility of the proposed approach by deriving a new significance test for quantitative discovery proteomics, for which missing values have been a major issue for traditional methods such as the t-test. We evaluate the results on two label-free (phospho)proteomics datasets based on ion-intensity quantitation. Available at http://www.oncoproteomics.nl/software/stest.html; contact: t.pham@vumc.nl.
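The data-expansion idea can be caricatured as follows; the Gaussian noise model, the sample values, and the use of a plain regression-slope test are assumptions for illustration, and a naive expansion like this inflates the effective sample size, which the published s-test is designed to handle properly.

```python
import numpy as np
from scipy import stats

def simulated_linear_test(group_a, group_b, tech_sd=0.3, n_virtual=200,
                          seed=0):
    """Expand each observation into virtual draws from an assumed Gaussian
    technical-noise model, then test the group effect with a linear model
    (the regression slope on a 0/1 group indicator)."""
    rng = np.random.default_rng(seed)
    ya = np.concatenate([rng.normal(v, tech_sd, n_virtual) for v in group_a])
    yb = np.concatenate([rng.normal(v, tech_sd, n_virtual) for v in group_b])
    y = np.concatenate([ya, yb])
    g = np.concatenate([np.zeros(ya.size), np.ones(yb.size)])
    res = stats.linregress(g, y)
    return res.slope, res.pvalue

# Toy log-intensities of one protein in two conditions.
print(simulated_linear_test([5.1, 5.3, 4.9], [6.0, 6.4, 6.1]))
```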
Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev
2016-01-01
The proposed framework is obtained by casting the noise removal problem in a variational setting. The framework automatically identifies the various types of noise present in a magnetic resonance image and filters them by choosing an appropriate filter. This filter includes two terms: the first is a data likelihood term and the second is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density function: Gaussian, Rayleigh, or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation based priors: a total variation based prior, an anisotropic diffusion based prior, and a complex diffusion (CD) based prior. A regularization parameter is used to balance the trade-off between the data fidelity term and the prior. A finite difference scheme is used for the discretization of the proposed method. The performance analysis and a comparative study of the proposed method with other standard methods are presented for the BrainWeb dataset at varying noise levels, in terms of peak signal-to-noise ratio, mean square error, structure similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with the CD based prior performs better than the other priors considered.
NASA Astrophysics Data System (ADS)
Nemoto, Mitsutaka; Nomura, Yukihiro; Hanaoka, Shohei; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni
Anatomical point landmarks, as the most primitive form of anatomical knowledge, are useful for medical image understanding. In this study, we propose a detection method for anatomical point landmarks based on appearance models, which include the gray-level statistical variations at the point landmarks and their surrounding areas. The models are built from the results of a Principal Component Analysis (PCA) of sample data sets. In addition, we employ a generative learning method that transforms the ROIs of the sample data. We evaluated our method with 24 data sets of body trunk CT images and obtained an average sensitivity of 95.8 ± 7.3% over 28 landmarks.
Novel crystal timing calibration method based on total variation
NASA Astrophysics Data System (ADS)
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as 'TV merge', has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals, and it can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and to robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to overcome the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used for obtaining the crystal-level timing calibration values. In contrast to other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, across various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
Using chaos to generate variations on movement sequences
NASA Astrophysics Data System (ADS)
Bradley, Elizabeth; Stuart, Joshua
1998-12-01
We describe a method for introducing variations into predefined motion sequences using a chaotic symbol-sequence reordering technique. A progression of symbols representing the body positions in a dance piece, martial arts form, or other motion sequence is mapped onto a chaotic trajectory, establishing a symbolic dynamics that links the movement sequence and the attractor structure. A variation on the original piece is created by generating a trajectory with slightly different initial conditions, inverting the mapping, and using special corpus-based graph-theoretic interpolation schemes to smooth any abrupt transitions. Sensitive dependence guarantees that the variation is different from the original; the attractor structure and the symbolic dynamics guarantee that the two resemble one another in both aesthetic and mathematical senses.
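A toy rendering of the idea, with the logistic map standing in for the chaotic attractor, might look like the sketch below; the symbol set, the map, and the nearest-neighbor read-back are illustrative guesses at the scheme, and the corpus-based interpolation step is omitted entirely.

```python
import numpy as np

def chaotic_variation(symbols, x0=0.3123, x0_new=0.3124, r=3.99):
    """Attach each movement symbol to successive points of a chaotic
    (logistic-map) trajectory, then re-read symbols along a trajectory
    started from a slightly different initial condition.  Sensitive
    dependence makes the output diverge from the original sequence while
    the shared attractor preserves its overall structure."""
    n = len(symbols)

    def orbit(x):
        pts = []
        for _ in range(n):
            x = r * x * (1 - x)
            pts.append(x)
        return np.array(pts)

    ref, new = orbit(x0), orbit(x0_new)
    # Emit, for each new point, the symbol attached to the nearest
    # point of the reference trajectory.
    return [symbols[int(np.argmin(np.abs(ref - p)))] for p in new]

moves = list("ABCDEFGHIJKLMNOP")    # stand-ins for body positions
print("".join(chaotic_variation(moves)))
```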
Caporale, Lynn Helena
2012-09-01
This overview of a special issue of Annals of the New York Academy of Sciences discusses uneven distribution of distinct types of variation across the genome, the dependence of specific types of variation upon distinct classes of DNA sequences and/or the induction of specific proteins, the circumstances in which distinct variation-generating systems are activated, and the implications of this work for our understanding of evolution and of cancer. Also discussed is the value of non text-based computational methods for analyzing information carried by DNA, early insights into organizational frameworks that affect genome behavior, and implications of this work for comparative genomics. © 2012 New York Academy of Sciences.
Peterson, Leif E
2002-01-01
CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816
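Two of the listed options, UPGMA joining with correlation distance on standardized rows, can be reproduced with SciPy in a few lines; the random matrix below is a stand-in for a gene-expression table, and this is not CLUSFAVOR's own code.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 12))                     # 30 genes x 12 arrays
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# UPGMA ('average') linkage with correlation distance.
Z = linkage(X, method="average", metric="correlation")
labels = fcluster(Z, t=3, criterion="maxclust")   # cut tree into 3 clusters
print(np.bincount(labels)[1:])                    # cluster sizes
```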
NASA Astrophysics Data System (ADS)
Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong
2017-12-01
Active contour models (ACMs) have been among the most widely used methods in magnetic resonance (MR) brain image segmentation because of their ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation is extracted from only one slice of the MR brain image. Such methods cannot take full advantage of the information in adjacent slices and are therefore ill-suited to local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multivariate local Gaussian distribution and combines information from adjacent slices in the MR brain image data to achieve the segmentation. The segmentation is finally achieved by maximizing the likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.
NASA Technical Reports Server (NTRS)
Barger, R. L.
1974-01-01
A method has been developed for designing families of airfoils in which the members of a family have the same basic type of pressure distribution but vary in thickness ratio or lift, or both. Thickness ratio and lift may be prescribed independently. The method which is based on the Theodorsen thick-airfoil theory permits moderate variations from the basic shape on which the family is based.
NASA Astrophysics Data System (ADS)
Jetty, Lauren E.
The purpose of this two-phase, sequential explanatory mixed-methods study was to understand and explain the variation seen in secondary science teachers' enactment of reform-based instructional practices. Utilizing teacher socialization theory, this mixed-methods analysis was conducted to determine the relative influence of secondary science teachers' characteristics, backgrounds and experiences across their teacher development to explain the range of teaching practices exhibited by graduates from three reform-oriented teacher preparation programs. Data for this study were obtained from the Investigating the Meaningfulness of Preservice Programs Across the Continuum of Teaching (IMPPACT) Project, a multi-university, longitudinal study funded by NSF. In the first, quantitative, phase of the study, data for the sample (N=120) were collected from three surveys in the IMPPACT Project database. Hierarchical multiple regression analysis was used to examine the separate as well as the combined influence of factors such as teachers' personal and professional background characteristics, beliefs about reform-based science teaching, feelings of preparedness to teach science, school context, school culture and climate of professional learning, and influences of the policy environment on the teachers' use of reform-based instructional practices. Findings indicate that three blocks of variables, professional background, beliefs/efficacy, and local school context, added significant contributions to explaining nearly 38% of the variation in secondary science teachers' use of reform-based instructional practices. The five variables that significantly contributed to explaining variation in teachers' use of reform-based instructional practices in the full model were the university of teacher preparation, sense of preparation for teaching science, the quality of professional development, science-content-focused professional development, and the perceived level of professional autonomy. Using the results from phase one, the second, qualitative, phase selected six case study teachers based on their levels of reform-based teaching practices to highlight teachers across the range of practices from low, average, to high levels of implementation. Using multiple interview sources, phase two helped to further explain the variation in levels of reform-based practices. Themes related to teachers' backgrounds, local contexts, and state policy environments were developed as they related to teachers' socialization experiences across these contexts. The results of the qualitative analysis identified the following factors differentiating teachers who enacted reform-based instructional practices from those who did not: 1) extensive science research experiences prior to their preservice teacher preparation; 2) the structure and quality of their field placements; 3) developing and valuing a research-based understanding of teaching and learning as a result of their preservice teacher preparation experiences; 4) the professional culture of their school context, where there was support for a high degree of professional autonomy and support from "educational companions" with a specific focus on teacher pedagogy to support student learning; and 5) a greater sense of agency to navigate their districts' interpretation and implementation of state policies. Implications for key stakeholders as well as directions for future research are discussed.
Masking as an effective quality control method for next-generation sequencing data analysis.
Yun, Sajung; Yun, Sijung
2014-12-13
Next-generation sequencing produces base calls with low quality scores that can affect the accuracy of identifying simple nucleotide variation calls, including single nucleotide polymorphisms and small insertions and deletions. Here we compare the effectiveness of two data preprocessing methods, masking and trimming, and the accuracy of simple nucleotide variation calls on whole-genome sequence data from Caenorhabditis elegans. Masking substitutes low quality base calls with 'N's (undetermined bases), whereas trimming removes low quality bases, resulting in shorter read lengths. We demonstrate that masking is more effective than trimming in reducing the false-positive rate in single nucleotide polymorphism (SNP) calling. However, neither preprocessing method affected the false-negative rate in SNP calling with statistical significance compared to data analysis without preprocessing. The false-positive and false-negative rates for small insertions and deletions did not differ between masking and trimming. We recommend masking over trimming as the more effective preprocessing method for next-generation sequencing data analysis, since masking reduces the false-positive rate in SNP calling without sacrificing the false-negative rate, although trimming is currently more common in the field. The perl script for masking is available at http://code.google.com/p/subn/. The sequencing data used in the study were deposited in the Sequence Read Archive (SRX450968 and SRX451773).
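To make the two preprocessing strategies concrete, here is a minimal sketch (ours, not the authors' perl script linked above; the Q20 threshold and Phred+33 encoding are illustrative assumptions):

    def mask_read(seq, qual, threshold=20, offset=33):
        """Masking: replace bases whose Phred quality is below threshold with 'N'."""
        return ''.join(b if ord(q) - offset >= threshold else 'N'
                       for b, q in zip(seq, qual))

    def trim_read(seq, qual, threshold=20, offset=33):
        """Trimming: cut the read at the first base below threshold (shortens it)."""
        for i, q in enumerate(qual):
            if ord(q) - offset < threshold:
                return seq[:i], qual[:i]
        return seq, qual

    seq, qual = "ACGTACGTAC", "IIIII#IIII"    # '#' encodes Q2 in Phred+33
    print(mask_read(seq, qual))               # ACGTANGTAC: length preserved
    print(trim_read(seq, qual)[0])            # ACGTA: read shortened

Masking preserves read length and positional information, which is consistent with the lower false-positive rate reported above.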
Evaluating abundance and trends in a Hawaiian avian community using state-space analysis
Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.
2016-01-01
Estimating population abundance and patterns of change over time is important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate the population trajectory. However, changes in abundance estimates from year to year are due both to true variation in population size (process variation) and to variation from imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates better represent the real-world biological processes of interest because they partition process variation (environmental and demographic variation) from observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance, which can be used by managers to set conservation strategies and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.
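As a minimal illustration of the partitioning idea (a generic local-level state-space model with assumed variances, not the authors' model), a Kalman filter separates a latent log-abundance from noisy counts:

    import numpy as np

    rng = np.random.default_rng(1)
    T, sigma_proc, sigma_obs = 30, 0.05, 0.20
    x_true = np.cumsum(rng.normal(-0.01, sigma_proc, T))  # latent log-abundance (process variation)
    y = x_true + rng.normal(0, sigma_obs, T)              # observed counts with sampling error

    # Kalman filter for x_t = x_{t-1} + w_t,  y_t = x_t + v_t
    x, P, filtered = y[0], 1.0, []
    for obs in y:
        P += sigma_proc**2                 # predict: add process variance
        K = P / (P + sigma_obs**2)         # Kalman gain weighs the two error sources
        x += K * (obs - x)                 # update the state with the new observation
        P *= 1 - K
        filtered.append(x)

A trend fitted to the filtered states reflects process variation only, rather than process plus observation error as in a regression on the raw counts.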
Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh
2015-09-01
Considerable efforts have been made to predict seizures. Among these methods, the ones that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method that is derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, first, the temporal variation of the bi-spectrum of different channels of electrocorticography (ECoG) signals is obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Since, in this way, the temporal variation of the amount of nonlinear coupling between brain regions, which had not previously been considered, is taken into account, the results are more reliable than conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients.
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
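For orientation, the kind of optimization such a method addresses can be sketched (in our notation, not the authors'; their exact formulation may differ) as

    \min_x \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda \|Cx\|_1,

where A is the CT forward projector, y the measured data, and C the curvelet transform. Introducing the split z = Cx, a generic iteration of the alternating direction method of multipliers reads

    x^{k+1} = \arg\min_x \tfrac{1}{2}\|Ax - y\|_2^2 + \tfrac{\rho}{2}\|Cx - z^k + u^k\|_2^2,
    z^{k+1} = S_{\lambda/\rho}(Cx^{k+1} + u^k),
    u^{k+1} = u^k + Cx^{k+1} - z^{k+1},

where S_\tau denotes soft-thresholding of the curvelet coefficients; replacing \|Cx\|_1 with the total variation of x gives the competing TV approach compared against above.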
A second-order accurate kinetic-theory-based method for inviscid compressible flows
NASA Technical Reports Server (NTRS)
Deshpande, Suresh M.
1986-01-01
An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.
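For orientation (standard kinetic-theory relations written in our notation for a monatomic gas, not reproduced from the paper), the connection the KNM exploits is that the collisionless Boltzmann equation

    \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f = 0,

with f equal to the local Maxwellian

    f_M = \frac{\rho}{(2\pi R T)^{3/2}} \exp\!\left(-\frac{|\mathbf{v}-\mathbf{u}|^2}{2RT}\right),

yields the Euler equations upon taking moments with \psi = (1, \mathbf{v}, \tfrac{1}{2}|\mathbf{v}|^2)^T. The convection phase advances f upwind in physical space; the collision phase relaxes f back to the Maxwellian, which enforces the Euler (moment) structure.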
Phase-Shift-Based Numerical Method for Modeling Frequency-Dependent Effects on Seismic Reflections
NASA Astrophysics Data System (ADS)
Chen, Xuehua; Qi, Yingkai; He, Xilei; He, Zhenhua; Chen, Hui
2016-08-01
Significant velocity dispersion and attenuation have often been observed when seismic waves propagate in fluid-saturated porous rocks. Both the magnitude and the variation of the velocity dispersion and attenuation are frequency-dependent and related closely to the physical properties of the fluid-saturated porous rocks. To explore the effects of frequency-dependent dispersion and attenuation on seismic responses, in this work we present a numerical method for seismic data modeling based on the diffusive and viscous wave equation (DVWE), which introduces poroelastic theory and takes into account diffusive and viscous attenuation from diffusive-viscous theory. We derive a phase-shift wave extrapolation algorithm in the frequency-wavenumber domain for implementing the DVWE-based simulation method, which can handle simultaneous lateral variations in velocity, diffusive coefficient, and viscosity. We then design a distributary-channels model in which a hydrocarbon-saturated sand reservoir is embedded in one of the channels, and calculate synthetic seismic data to comparatively illustrate the frequency-dependent seismic behaviors related to the hydrocarbon-saturated reservoir, employing the DVWE-based method and a conventional acoustic wave equation (AWE)-based method, respectively. The synthetic seismic data delineate the intrinsic energy loss, phase delay, lower instantaneous dominant frequency, and narrower bandwidth caused by frequency-dependent dispersion and attenuation when a seismic wave travels through the hydrocarbon-saturated reservoir. The numerical modeling method is expected to improve understanding of the features and mechanism of seismic frequency-dependent effects resulting from hydrocarbon-saturated porous rocks.
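For reference, a generic phase-shift extrapolation step in the frequency-wavenumber domain has the form (written here for the simple acoustic case; in the DVWE-based method the vertical wavenumber instead follows from the diffusive-viscous dispersion relation, making it complex-valued and hence attenuative)

    \tilde{P}(k_x, z+\Delta z, \omega) = \tilde{P}(k_x, z, \omega)\, e^{\,i k_z \Delta z}, \qquad k_z = \sqrt{\omega^2/v^2 - k_x^2}.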
Delanghe, Joris R; Cobbaert, Christa; Galteau, Marie-Madeleine; Harmoinen, Aimo; Jansen, Rob; Kruse, Rolf; Laitinen, Päivi; Thienpont, Linda M; Wuyts, Birgitte; Weykamp, Cas; Panteghini, Mauro
2008-01-01
The European In Vitro Diagnostics (IVD) directive requires traceability of analytes to reference methods and materials. It is a task of the profession to verify the trueness of results and IVD compatibility. The results of a trueness verification study by the European Communities Confederation of Clinical Chemistry (EC4) working group on creatinine standardization are described, in which 189 European laboratories analyzed serum creatinine in a commutable serum-based material, using analytical systems from seven companies. Values were targeted using isotope dilution gas chromatography/mass spectrometry. Results were tested for compliance with a set of three criteria: trueness, i.e., no significant bias relative to the target value; between-laboratory variation; and within-laboratory variation relative to the maximum allowable error. At the lower and intermediate levels, values differed significantly from the target value for the Jaffe and the dry chemistry methods. At the high level, dry chemistry yielded higher results. Between-laboratory coefficients of variation ranged from 4.37% to 8.74%. The total error budget was mainly consumed by the bias. Non-compensated Jaffe methods largely exceeded the total error budget. The best results were obtained for the enzymatic method. The dry chemistry method consumed a large part of its error budget due to calibration bias. Despite the European IVD directive and the growing need for creatinine standardization, an unacceptable inter-laboratory variation was observed, which was mainly due to calibration differences. This calibration variation has major clinical consequences, in particular in pediatrics, where reference ranges for serum and plasma creatinine are low, and in the estimation of glomerular filtration rate.
Halder, Indrani; Yang, Bao-Zhu; Kranzler, Henry R.; Stein, Murray B.; Shriver, Mark D.; Gelernter, Joel
2010-01-01
Variation in individual admixture proportions leads to heterogeneity within populations. Though novel methods and marker panels have been developed to quantify individual admixture, empirical data describing individual admixture distributions are limited. We investigated variation in individual admixture in four US populations [European American (EA), African American (AA) and Hispanics from Connecticut (EC) and California (WC)] assuming three-way intermixture among Europeans, Africans and Indigenous Americans. Admixture estimates were inferred using a panel of 36 microsatellites and 1 SNP, which have significant allele frequency differences between ancestral populations, and by using both a maximum likelihood (ML) based method and a Bayesian method implemented in the program STRUCTURE. Simulation studies showed that estimates obtained with this marker panel are within 96% of expected values. EAs had the lowest non-European admixture with both methods, but showed greater homogeneity with STRUCTURE than with ML. All other samples showed a high degree of variation in admixture estimates with both methods, were highly concordant and showed evidence of admixture stratification. With both methods, AA subjects had 16% European and <10% Indigenous American admixture on average. EC Hispanics had higher mean African admixture and the WC Hispanics higher mean Indigenous American admixture, possibly reflecting their different continental origins. PMID:19572378
Design and Optimization of Nanomaterials for Sensing Applications
NASA Astrophysics Data System (ADS)
Sanderson, Robert Noboru
Nanomaterials, materials with one or more of their dimensions on the nanoscale, have emerged as an important field in the development of next-generation sensing systems. Their high surface-to-volume ratio makes them useful for sensing, but also makes them sensitive to processing defects and inherent material defects. To develop and optimize these systems, it is thus necessary to characterize these defects to understand their origin and how to work around them. Scanning probe microscopy (SPM) techniques like atomic force microscopy (AFM) and scanning tunneling microscopy (STM) are important characterization methods which can measure nanoscale topography and electronic structure. These methods are appealing in nanomaterial systems because they are non-damaging and provide local, high-resolution data, and so are capable of detecting nanoscale features such as single defect sites. There are difficulties, however, in the interpretation of SPM data. For instance, AFM-based methods are prone to experimental artifacts due to long-range interactions, such as capacitive crosstalk in Kelvin probe force microscopy (KPFM), and artifacts due to the finite size of the probe tip, such as incorrect surface tracking at steep topographical features. Mechanical characterization (via force spectroscopy) of nanomaterials with significant nanoscale variations, such as tethered lipid bilayer membranes (tLBMs), is also difficult since variations in the bulk system's mechanical behavior must be distinguished from local fluctuations. Additionally, interpretation of STM data is non-trivial due to local variations in electron density in addition to topographical variations. In this thesis we overcome some limitations of SPM methods by supplementing them with additional surface analytical methods as well as computational methods, and we characterize several nanomaterial systems. Current-carrying vapor-liquid-solid Si nanowires (useful for interdigitated-electrode-based sensors) are characterized using finite-element-method (FEM)-supplemented KPFM to retrieve useful information about processing defects, contact resistance, and the primary charge carriers. Next, a tLBM system's stiffness and the stiffness' dependence on tethering molecule concentration is measured using statistical analysis of thousands of AFM force spectra, demonstrating a biosensor-compatible system with a controllable bulk rigidity. Finally, we utilize surface analytical techniques to inform the development of a novel three-dimensional graphene system for sensing applications.
Fuzzy control of power converters based on quasilinear modelling
NASA Astrophysics Data System (ADS)
Li, C. K.; Lee, W. L.; Chou, Y. W.
1995-03-01
Unlike feedback control by the fuzzy PID method, a new fuzzy control algorithm based on quasilinear modelling of the DC-DC converter is proposed. Investigation is carried out using a buck-boost converter. Simulation results demonstrated that the converter can be regulated with improved performance even when subjected to input disturbance and load variation.
USDA-ARS's Scientific Manuscript database
The high spatial resolution of QuickBird satellite images makes it possible to show spatial variability in fine detail. However, the effect of topography-induced illumination variations becomes more evident, even in moderately sloped areas. Based on a high resolution (1 m) digital elevation model ge...
Smooth quantile normalization.
Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada
2018-04-01
Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and is unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced only by technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, while allowing it to differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
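The assumption behind group-wise normalization can be illustrated with a minimal sketch (plain quantile normalization applied within each biological group; qsmooth itself additionally smooths between the group and global reference quantiles, and its implementation is at the URL above):

    import numpy as np

    def quantile_normalize_within_groups(X, groups):
        """X: features x samples; groups: per-sample labels.
        Forces samples within the same group to share a distribution."""
        Xn = np.empty_like(X, dtype=float)
        for g in np.unique(groups):
            cols = np.where(groups == g)[0]
            sub = X[:, cols]
            ranks = sub.argsort(axis=0).argsort(axis=0)   # per-sample ranks
            ref = np.sort(sub, axis=0).mean(axis=1)       # group reference quantiles
            Xn[:, cols] = ref[ranks]                      # map each rank to the reference
        return Xn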
Tao, Ran; Fletcher, P Thomas; Gerber, Samuel; Whitaker, Ross T
2009-01-01
This paper presents a method for correcting the geometric and greyscale distortions in diffusion-weighted MRI that result from inhomogeneities in the static magnetic field. These inhomogeneities may be due to imperfections in the magnet or to spatial variations in the magnetic susceptibility of the object being imaged, so-called susceptibility artifacts. Echo-planar imaging (EPI), used in virtually all diffusion-weighted acquisition protocols, assumes a homogeneous static field, which generally does not hold for head MRI. The resulting distortions are significant, sometimes more than ten millimeters. These artifacts impede accurate alignment of diffusion images with structural MRI, and are generally considered an obstacle to the joint analysis of connectivity and structure in head MRI. In principle, susceptibility artifacts can be corrected by acquiring (and applying) a field map. However, as shown in the literature and demonstrated in this paper, field-map corrections of susceptibility artifacts are not entirely accurate and reliable, and thus field maps do not produce reliable alignment of EPIs with corresponding structural images. This paper presents a new, image-based method for correcting susceptibility artifacts. The method relies on a variational formulation of the match between an EPI baseline image and a corresponding T2-weighted structural image, but also specifically accounts for the physics of susceptibility artifacts. We derive a set of partial differential equations associated with the optimization, describe the numerical methods for solving these equations, and present results that demonstrate the effectiveness of the proposed method compared with field-map correction.
Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei
2017-01-01
Background: Expert physicians develop their own ways of doing things. The influence of such practice variation in clinical learning is insufficiently understood. Our grounded theory study explored how residents make sense of, and behave in relation to, the procedural variations of faculty surgeons. Method: We sampled senior postgraduate surgical residents to construct a theoretical framework for how residents make sense of procedural variations. Using a constructivist grounded theory approach, we used marginal participant observation in the operating room across 56 surgical cases (146 hours), field interviews (38), and formal interviews (6) to develop a theoretical framework for residents' ways of dealing with procedural variations. Data analysis used constant comparison to iteratively refine the framework and data collection until theoretical saturation was reached. Results: The core category of the constructed theory was called thresholds of principle and preference, and it captured how faculty members position some procedural variations as negotiable and others not. The term thresholding was coined to describe residents' daily experiences of spotting, mapping, and negotiating their faculty members' thresholds and defending their own emerging thresholds. Conclusions: Thresholds of principle and preference play a key role in workplace-based medical education. Postgraduate medical learners are occupied on a day-to-day level with thresholding and attempting to make sense of the procedural variations of faculty. Workplace-based teaching and assessment should include an understanding of the integral role of thresholding in shaping learners' development. Future research should explore the nature and impact of thresholding in workplace-based learning beyond the surgical context. PMID:26505105
A RSSI-based parameter tracking strategy for constrained position localization
NASA Astrophysics Data System (ADS)
Du, Jinze; Diouris, Jean-François; Wang, Yide
2017-12-01
In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate channel model parameters, the least mean squares (LMS) method is combined with the trilateration method. In the context of applications where the positions are constrained to a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between grid resolution and parameter variation. The simulation results show good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time. In addition, a tracking test is performed to validate the effectiveness of the proposed tracking strategy.
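As an illustration of the parameter-tracking step (a log-distance path-loss model and an illustrative step size mu are our assumptions; the paper's exact update may differ), LMS can track the reference power P0 and path-loss exponent n from streaming RSSI samples:

    import numpy as np

    def lms_rssi(distances, rssi, mu=0.01, p0=-40.0, n=2.0):
        """Track P0 and n in the model RSSI = P0 - 10*n*log10(d)."""
        for d, r in zip(distances, rssi):
            x = -10.0 * np.log10(d)       # regressor multiplying n
            err = r - (p0 + n * x)        # prediction error for this sample
            p0 += mu * err                # stochastic-gradient step on P0
            n += mu * err * x             # stochastic-gradient step on n
        return p0, n

The fitted model then converts RSSI readings into distance estimates for the trilateration step, with candidate positions snapped to the constraint grid.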
The role of continuity in residual-based variational multiscale modeling of turbulence
NASA Astrophysics Data System (ADS)
Akkerman, I.; Bazilevs, Y.; Calo, V. M.; Hughes, T. J. R.; Hulshoff, S.
2008-02-01
This paper examines the role of continuity of the basis in the computation of turbulent flows. We compare standard finite elements and non-uniform rational B-splines (NURBS) discretizations that are employed in Isogeometric Analysis (Hughes et al. in Comput Methods Appl Mech Eng 194:4135-4195, 2005). We make use of quadratic discretizations that are C0-continuous across element boundaries in standard finite elements, and C1-continuous in the case of NURBS. The variational multiscale residual-based method (Bazilevs in Isogeometric analysis of turbulence and fluid-structure interaction, PhD thesis, ICES, UT Austin, 2006; Bazilevs et al. in Comput Methods Appl Mech Eng, submitted, 2007; Calo in Residual-based multiscale turbulence modeling: finite volume simulation of bypass transition, PhD thesis, Department of Civil and Environmental Engineering, Stanford University, 2004; Hughes et al. in Proceedings of the XXI International Congress of Theoretical and Applied Mechanics (IUTAM), Kluwer, 2004; Scovazzi in Multiscale methods in science and engineering, PhD thesis, Department of Mechanical Engineering, Stanford University, 2004) is employed as a turbulence modeling technique. We find that C1-continuous discretizations outperform their C0-continuous counterparts on a per-degree-of-freedom basis. We also find that the effect of continuity is greater for higher Reynolds number flows.
Moving object detection via low-rank total variation regularization
NASA Astrophysics Data System (ADS)
Wang, Pengcheng; Chen, Qian; Shao, Na
2016-09-01
Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover outlier patterns from low-rank data under some mild conditions. However, the ℓ1 penalty in RPCA does not work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly correlated and can be described using a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g., periodic and random perturbation. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers from the highly correlated background scenes. Instead of the ℓ1 penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV penalty is superior to the ℓ1 penalty especially when the outliers are in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.
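A generic form of the decomposition described above (our notation; the authors' exact energy may differ) is

    \min_{L,S} \; \|L\|_* + \lambda\, \mathrm{TV}(S) \quad \text{s.t.} \quad D = L + S,

where D stacks the vectorized frames, the nuclear norm \|L\|_* models the highly correlated background, and \mathrm{TV}(S) replaces the usual \ell_1 penalty on the foreground S. The inexact ALM method then minimizes the augmented Lagrangian

    \mathcal{L}_\mu(L, S, Y) = \|L\|_* + \lambda\,\mathrm{TV}(S) + \langle Y,\, D - L - S\rangle + \tfrac{\mu}{2}\|D - L - S\|_F^2

by alternating updates of L, S, and the multiplier Y.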
On characterizing population commonalities and subject variations in brain networks.
Ghanbari, Yasser; Bloy, Luke; Tunc, Birkan; Shankar, Varsha; Roberts, Timothy P L; Edgar, J Christopher; Schultz, Robert T; Verma, Ragini
2017-05-01
Brain networks based on resting state connectivity as well as inter-regional anatomical pathways obtained using diffusion imaging have provided insight into pathology and development. Such work has underscored the need for methods that can extract sub-networks that can accurately capture the connectivity patterns of the underlying population while simultaneously describing the variation of sub-networks at the subject level. We have designed a multi-layer graph clustering method that extracts clusters of nodes, called 'network hubs', which display higher levels of connectivity within the cluster than to the rest of the brain. The method determines an atlas of network hubs that describes the population, as well as weights that characterize subject-wise variation in terms of within- and between-hub connectivity. This lowers the dimensionality of brain networks, thereby providing a representation amenable to statistical analyses. The applicability of the proposed technique is demonstrated by extracting an atlas of network hubs for a population of typically developing controls (TDCs) as well as children with autism spectrum disorder (ASD), and using the structural and functional networks of a population to determine the subject-level variation of these hubs and their inter-connectivity. These hubs are then used to compare ASD and TDCs. Our method is generalizable to any population whose connectivity (structural or functional) can be captured via non-negative network graphs. Copyright © 2015 Elsevier B.V. All rights reserved.
Level set formulation of two-dimensional Lagrangian vortex detection methods
NASA Astrophysics Data System (ADS)
Hadjighasem, Alireza; Haller, George
2016-10-01
We propose here the use of the variational level set methodology to capture Lagrangian vortex boundaries in 2D unsteady velocity fields. This method reformulates earlier approaches that seek material vortex boundaries as extremum solutions of variational problems. We demonstrate the performance of this technique for two different variational formulations built upon different notions of coherence. The first formulation uses an energy functional that penalizes the deviation of a closed material line from piecewise uniform stretching [Haller and Beron-Vera, J. Fluid Mech. 731, R4 (2013)]. The second energy function is derived for a graph-based approach to vortex boundary detection [Hadjighasem et al., Phys. Rev. E 93, 063107 (2016)]. Our level-set formulation captures an a priori unknown number of vortices simultaneously at relatively low computational cost. We illustrate the approach by identifying vortices from different coherence principles in several examples.
NASA Astrophysics Data System (ADS)
Muralidhara, .; Vasa, Nilesh J.; Singaperumal, M.
2010-02-01
A micro-electro-discharge machine (micro EDM) was developed incorporating a piezo-actuated direct-drive tool feed mechanism for micromachining of silicon using a copper tool. Both tool and workpiece material are removed during the micro EDM process, which demands a tool wear compensation technique to reach the specified depth of machining on the workpiece. An in-situ axial tool wear and machining depth measurement system was developed to investigate axial wear ratio variations with machining depth. Stepwise micromachining experiments on a silicon wafer were performed to investigate the variations in silicon removal and tool wear depths with increasing tool feed. Based on these experimental data, a tool wear compensation method is proposed to reach the desired depth of micromachining on silicon using a copper tool. Micromachining experiments were performed with the proposed tool wear compensation method, and a maximum workpiece machining depth variation of 6% was observed.
Spectroscopic vector analysis for fast pattern quality monitoring
NASA Astrophysics Data System (ADS)
Sohn, Younghoon; Ryu, Sungyoon; Lee, Chihoon; Yang, Yusin
2018-03-01
In the semiconductor industry, fast and effective measurement of pattern variation has been a key challenge for assuring mass-production quality. Pattern measurement techniques such as conventional CD-SEM or optical CD have been used extensively, but these techniques are increasingly limited in terms of measurement throughput and time spent in modeling. In this paper we propose a time-effective pattern monitoring method based on a direct spectrum approach. In this technique, a wavelength band sensitive to a specific pattern change is selected from the spectroscopic ellipsometry signal scattered by the pattern to be measured, and the amplitude and phase variation in that band are analyzed as a measurement index of the pattern change. This pattern change measurement technique was applied to several process steps and its applicability verified. Due to its fast and simple analysis, the method can be adapted to massive process variation monitoring, maximizing measurement throughput.
Three-dimensional compact explicit finite-difference time-domain scheme with density variation
NASA Astrophysics Data System (ADS)
Tsuchiya, Takao; Maruta, Naoki
2018-07-01
In this paper, density variation is implemented in the three-dimensional compact-explicit finite-difference time-domain (CE-FDTD) method. The formulation is first developed based on the continuity equation and the equation of motion, both of which include the density. Numerical demonstrations are performed for three-dimensional sound wave propagation in a medium of two density layers. The numerical results are compared with theoretical results to verify the proposed formulation.
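The underlying first-order system (standard linear acoustics with spatially varying density \rho(\mathbf{r}); the compact-explicit discretization itself is the paper's contribution and is not reproduced here) is

    \frac{\partial p}{\partial t} = -\rho c^2\, \nabla\cdot\mathbf{v}, \qquad \frac{\partial \mathbf{v}}{\partial t} = -\frac{1}{\rho}\,\nabla p,

so that keeping \rho explicit in both the continuity equation and the equation of motion lets reflections at the interface between the two density layers arise naturally.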
Object Classification With Joint Projection and Low-Rank Dictionary Learning.
Foroughi, Homa; Ray, Nilanjan; Hong Zhang
2018-02-01
For an object classification system, the most critical obstacles toward real-world applications are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature fail when the training samples are heavily occluded, corrupted, or have significant illumination or viewpoint variations. Moreover, most existing methods, especially deep-learning-based methods, need large training sets to achieve satisfactory recognition performance. Although pre-training a network on a generic large-scale data set and fine-tuning it to the small-sized target data set is a widely used technique, this does not help when the content of the base and target data sets is very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients that maximizes intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on sub-dictionaries to reduce the redundancy among them and to make them robust to variations and outliers. To preserve the intrinsic structure of the data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification on small-sized data sets that include a considerable amount of variation and may have high-dimensional feature vectors.
Manufacture of threads with variable pitch by using noncircular gears
NASA Astrophysics Data System (ADS)
Slătineanu, L.; Dodun, O.; Coteață, M.; Coman, I.; Nagîț, G.; Beșliu, I.
2016-08-01
Some mechanical equipment includes shafts threaded with a variable pitch. Such a shaft is found, for example, in the worm specific to double-enveloping worm gearing. Over the years, researchers have investigated possibilities to geometrically define and manufacture shaft zones characterized by a variable pitch. One of the methods able to facilitate the manufacture of threads with variable pitch is based on the use of noncircular gears in the threading kinematic chain for threading by cutting. In order to design the noncircular gears, the mathematical law of pitch variation has to be known. An analysis of pitch variation based on geometrical considerations was developed for the case of a double-enveloping globoid worm. Subsequently, on the basis of a representative case, a numerical model was determined. In this way, an approximate law of pitch variation was obtained, which can be taken into consideration when designing the noncircular gears included in the kinematic chain of the cutting machine tool.
NASA Astrophysics Data System (ADS)
Mareschal, J.; Jaupart, C. P.
2013-12-01
Most of the variations in surface heat flux in stable continents are caused by variations in crustal heat production, with an almost uniform heat flux at the base of the crust (~15±3 mW/m2). Such relatively small differences in Moho heat flux cannot be resolved by heat flow data alone, but they lead to important lateral variations in lithospheric temperatures and thicknesses. In order to better constrain temperatures in the lower lithosphere, we have combined surface heat flow and heat production data from the southern Superior Province in Canada with vertical shear wave velocity profiles obtained from surface wave inversion. We use the Monte Carlo method to generate lithospheric temperature profiles from which shear wave velocity can be calculated for a given mantle composition. We eliminate thermal models that yield lithospheric and sub-lithospheric velocities that do not fit the shear wave velocity profile. Surface heat flux being constrained, the free parameters of the thermal model are: the mantle heat flux, the mantle heat production, the crustal differentiation index (ratio of surface to bulk crustal heat production), and the temperature of the mantle isentrope. Two conclusions emerge from this study. One is that, for some profiles, the vertical variations in shear wave velocities cannot be accounted for by temperature alone but also require compositional changes within the lithosphere. The second is that there are long-wavelength horizontal variations in mantle temperatures (~80-100 K) at the base of the lithosphere and in the mantle below.
The study on the effect of pattern density distribution on the STI CMP process
NASA Astrophysics Data System (ADS)
Sub, Yoon Myung; Hian, Bernard Yap Tzen; Fong, Lee It; Anak, Philip Menit; Minhar, Ariffin Bin; Wui, Tan Kim; Kim, Melvin Phua Twang; Jin, Looi Hui; Min, Foo Thai
2017-08-01
The effects of pattern density on CMP characteristics were investigated using a wafer specially designed for the characterization of pattern dependencies in STI CMP [1]. The purpose of this study is to investigate the planarization behavior of a direct STI CMP process using a cerium oxide (CeO2)-based slurry system in terms of pattern density variation. The minimal design rule (DR) of the 180 nm technology node was adopted for the mask layout, and the mask was successfully applied for evaluation of a CeO2-abrasive-based direct STI CMP process. In this study, we describe the planarization behavior and loading effects of pattern density variation, characterized with layout pattern density and pitch variations using the masks mentioned above. Furthermore, the pattern-dependent variations in post-CMP remaining thickness, as functions of feature dimensions and spacing, were analyzed and evaluated. The goal was to establish a library-based method for generating design rules that reduce the probability of CMP-related failures. Details of the characterization were measured in various layouts with different pattern density ranges, and the effects of pattern density on STI CMP are discussed in this paper.
A prior-based integrative framework for functional transcriptional regulatory network inference
Siahpirani, Alireza F.
2017-01-01
Transcriptional regulatory networks specify the regulatory proteins controlling the context-specific expression levels of genes. Inference of genome-wide regulatory networks is central to understanding gene regulation, but remains an open challenge. Expression-based network inference is among the most popular methods to infer regulatory networks; however, networks inferred by such methods have low overlap with experimentally derived (e.g., ChIP-chip and transcription factor (TF) knockout) networks, and currently we have a limited understanding of this discrepancy. To address this gap, we first develop a regulatory network inference algorithm, based on probabilistic graphical models, to integrate expression with auxiliary datasets supporting a regulatory edge. Second, we comprehensively analyze our and other state-of-the-art methods on different expression perturbation datasets. Networks inferred by integrating sequence-specific motifs with expression have substantially greater agreement with experimentally derived networks, while remaining more predictive of expression than motif-based networks. Our analysis suggests natural genetic variation as the most informative perturbation for network inference, and identifies core TFs whose targets are predictable from expression. Multiple reasons make the identification of targets of other TFs difficult, including network architecture and insufficient variation of TF mRNA levels. Finally, we demonstrate the utility of our inference algorithm to infer stress-specific regulatory networks and for regulator prioritization. PMID:27794550
From stage to age in variable environments: life expectancy and survivorship.
Tuljapurkar, Shripad; Horvitz, Carol C
2006-06-01
Stage-based demographic data are now available on many species of plants and some animals, and they often display temporal and spatial variability. We provide exact formulas to compute age-specific life expectancy and survivorship from stage-based data for three models of temporal variability: cycles, serially independent random variation, and a Markov chain. These models provide a comprehensive description of patterns of temporal variation. Our formulas describe the effects of cohort (birth) environmental condition on mortality at all ages, and of the effects on survivorship of environmental variability experienced over the course of life. This paper complements existing methods for time-invariant stage-based data, and adds to the information on population growth and dynamics available from stochastic demography.
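While the paper provides exact formulas, the quantities involved are easy to approximate with a Monte Carlo sketch (hypothetical two-stage matrices and serially independent environments, for illustration only):

    import numpy as np

    rng = np.random.default_rng(0)
    # Two hypothetical environments; entries are stage-to-stage survival transitions
    U = [np.array([[0.2, 0.0], [0.5, 0.8]]),   # bad year
         np.array([[0.3, 0.0], [0.6, 0.9]])]   # good year

    n0 = np.array([1.0, 0.0])                  # cohort born into stage 1
    ages, reps = 20, 10_000
    lx = np.zeros(ages)
    for _ in range(reps):
        n = n0.copy()
        for x in range(ages):
            n = U[rng.integers(2)] @ n         # survive one randomly drawn year
            lx[x] += n.sum()                   # cohort survivorship to age x+1
    lx /= reps                                 # l(x); life expectancy ~ 1 + lx.sum()

Averaging over random environment sequences in this way captures the effect of variability experienced over the course of life, which the cited formulas deliver exactly.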
USDA-ARS's Scientific Manuscript database
The objective of the paper is to study the temporal variations of the subsurface soil properties due to seasonal and weather effects using a combination of a new seismic surface method and an existing acoustic probe system. A laser Doppler vibrometer (LDV) based multi-channel analysis of surface wav...
USDA-ARS's Scientific Manuscript database
Although many near infrared (NIR) spectrometric calibrations exist for a variety of components in soy, current calibration methods are often limited by either a small sample size on which the calibrations are based or a wide variation in sample preparation and measurement methods, which yields unrel...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-22
... secured borrowers within each year), the coefficients of variation of the time series of annual default... the method you use, please do not submit your comment multiple times via different methods. You may... component to directly recognize the credit risk on such loans.\\4\\ At the time of the Farm Bill's enactment...
Middleton, Michael S; Haufe, William; Hooker, Jonathan; Borga, Magnus; Dahlqvist Leinhard, Olof; Romu, Thobias; Tunón, Patrik; Hamilton, Gavin; Wolfson, Tanya; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B
2017-05-01
Purpose: To determine the repeatability and accuracy of a commercially available magnetic resonance (MR) imaging-based, semiautomated method to quantify abdominal adipose tissue and thigh muscle volume and hepatic proton density fat fraction (PDFF). Materials and Methods: This prospective study was institutional review board-approved and HIPAA compliant. All subjects provided written informed consent. Inclusion criteria were age of 18 years or older and willingness to participate. The exclusion criterion was contraindication to MR imaging. Three-dimensional T1-weighted dual-echo body-coil images were acquired three times. Source images were reconstructed to generate water and calibrated fat images. Abdominal adipose tissue and thigh muscle were segmented, and their volumes were estimated by using a semiautomated method and, as a reference standard, a manual method. Hepatic PDFF was estimated by using a confounder-corrected chemical shift-encoded MR imaging method with hybrid complex-magnitude reconstruction and, as a reference standard, MR spectroscopy. Tissue volume and hepatic PDFF intra- and interexamination repeatability were assessed by using intraclass correlation and coefficient of variation analysis. Tissue volume and hepatic PDFF accuracy were assessed by means of linear regression with the respective reference standards. Results: Adipose and thigh muscle tissue volumes of 20 subjects (18 women; age range, 25-76 years; body mass index range, 19.3-43.9 kg/m2) were estimated by using the semiautomated method. Intra- and interexamination intraclass correlation coefficients were 0.996-0.998 and coefficients of variation were 1.5%-3.6%. For hepatic MR imaging PDFF, intra- and interexamination intraclass correlation coefficients were greater than or equal to 0.994 and coefficients of variation were less than or equal to 7.3%. In the regression analyses of manual versus semiautomated volume and spectroscopy versus MR imaging PDFF, slopes and intercepts were close to the identity line, and coefficients of determination (R2) ranged from 0.744 to 0.994. Conclusion: This MR imaging-based, semiautomated method provides high repeatability and accuracy for estimating abdominal adipose tissue and thigh muscle volumes and hepatic PDFF. © RSNA, 2017.
Non-invasive sex assessment in bovine semen by Raman spectroscopy
NASA Astrophysics Data System (ADS)
De Luca, A. C.; Managó, S.; Ferrara, M. A.; Rendina, I.; Sirleto, L.; Puglisi, R.; Balduzzi, D.; Galli, A.; Ferraro, P.; Coppola, G.
2014-05-01
X- and Y-chromosome-bearing sperm cell sorting is of great interest, especially for animal production management systems and genetic improvement programs. Here, we demonstrate an optical method based on Raman spectroscopy to separate X- and Y-chromosome-bearing sperm cells, overcoming many of the limitations associated with current sex-sorting protocols. A priori Raman imaging of bull spermatozoa was utilized to select the sampling points (head-neck region), which were then used to discriminate cells based on a spectral classification model. Main variations of Raman peaks associated with the DNA content were observed together with a variation due to the sex membrane proteins. Next, we used principal component analysis to determine the efficiency of our device as a cell sorting method. The results (>90% accuracy) demonstrated that Raman spectroscopy is a powerful candidate for the development of a highly efficient, non-invasive, and non-destructive tool for sperm sexing.
Retinex based low-light image enhancement using guided filtering and variational framework
NASA Astrophysics Data System (ADS)
Zhang, Shi; Tang, Gui-jin; Liu, Xiao-hua; Luo, Su-huai; Wang, Da-dong
2018-03-01
A new image enhancement algorithm based on Retinex theory is proposed to address the poor visual quality of images captured in low-light conditions. First, an image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, illuminations are estimated on the V channel by guided filtering and by a variational framework, respectively, and combined into a new illumination by average gradient. The new reflectance is calculated from the V channel and the new illumination. A new V channel, obtained by multiplying the new illumination and reflectance, is then processed with contrast limited adaptive histogram equalization (CLAHE). Finally, the new image in HSV space is converted back to RGB space to obtain the enhanced image. Experimental results show that the proposed method achieves better subjective and objective quality than existing methods.
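A minimal sketch of this pipeline (ours, not the authors' code: a Gaussian blur stands in for the guided-filter and variational illumination estimates, and a gamma lift of the illumination stands in for their fusion step; all parameter values are illustrative):

    import cv2
    import numpy as np

    img = cv2.imread("low_light.jpg")
    h, s, v = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))

    v = v.astype(np.float32) / 255.0
    illum = cv2.GaussianBlur(v, (0, 0), sigmaX=15) + 1e-6   # smooth illumination estimate
    reflect = np.clip(v / illum, 0, 2)                      # Retinex reflectance
    illum_adj = np.power(illum, 1 / 2.2)                    # brighten the illumination
    v_new = np.clip(illum_adj * reflect, 0, 1)              # recombine

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    v_out = clahe.apply((v_new * 255).astype(np.uint8))     # contrast-limited equalization

    out = cv2.cvtColor(cv2.merge([h, s, v_out]), cv2.COLOR_HSV2BGR)
    cv2.imwrite("enhanced.jpg", out)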
Li, Chunmei; Yu, Zhilong; Fu, Yusi; Pang, Yuhong; Huang, Yanyi
2017-04-26
We develop a novel single-cell-based platform, named multifraction amplification (mfA), that detects copy number variations (CNVs) in a single cell through digital counting of amplified genomic DNA fragments. Amplification is required to acquire genomic information from a single cell, but it introduces unavoidable bias. Unlike prevalent methods that directly infer CNV profiles from the pattern of sequencing depth, our mfA platform denatures and separates the DNA molecules from a single cell into multiple fractions of a reaction mix before amplification. By examining the sequencing result of each fraction for a specific fragment and applying a segment-merge maximum likelihood algorithm to the calculation of copy number, we digitize sequencing-depth-based CNV identification and thus provide a method that is less sensitive to amplification bias. In this paper, we demonstrate the mfA platform through multiple displacement amplification (MDA) chemistry. With the mfA platform, the noise of MDA is reduced; therefore, the resolution of single-cell CNV identification can be improved to 100 kb. We can also determine genomic regions free of allelic drop-out with the mfA platform, which is impossible for conventional single-cell amplification methods.
A study of the stress wave factor technique for nondestructive evaluation of composite materials
NASA Technical Reports Server (NTRS)
Sarrafzadeh-Khoee, A.; Kiernan, M. T.; Duke, J. C., Jr.; Henneke, E. G., II
1986-01-01
The acousto-ultrasonic method of nondestructive evaluation is an extremely sensitive means of assessing material response. Efforts continue to complete the understanding of this method. In order to achieve the full sensitivity of the technique, extreme care must be taken in its performance. This report provides an update of the efforts to advance the understanding of this method and to increase its application to the nondestructive evaluation of composite materials. Included are descriptions of a novel optical system that is capable of measuring in-plane and out-of-plane displacements, an IBM PC-based data acquisition system, an extensive data analysis software package, the azimuthal variation of acousto-ultrasonic behavior in graphite/epoxy laminates, and preliminary examination of processing variation in graphite-aluminum tubes.
Disaggregating tree and grass phenology in tropical savannas
NASA Astrophysics Data System (ADS)
Zhou, Qiang
Savannas are mixed tree-grass systems and, as one of the world's largest biomes, represent an important component of the Earth system, affecting water and energy balances, carbon sequestration, and biodiversity, as well as supporting large human populations. Savanna vegetation structure and its distribution, however, may change because of major anthropogenic disturbances from climate change, wildfire, agriculture, and livestock production. The overstory and understory may have different water use strategies and different nutrient requirements, and may respond differently to fire and climate variation. Accurate measurement of the spatial distribution and structure of the overstory and understory is essential for understanding the savanna ecosystem. This project developed a workflow for separating the dynamics of overstory and understory fractional cover in savannas at the continental scale (Australia, South America, and Africa). Previous studies have successfully separated the phenology of Australian savanna vegetation into persistent and seasonal greenness using time series decomposition, and into fractions of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV), and bare soil (BS) using linear unmixing. This study combined these methods to separate the understory and overstory signal in both the green and senescent phenological stages using remotely sensed imagery from the MODIS (MODerate resolution Imaging Spectroradiometer) sensor. The methods and parameters were adjusted based on the vegetation variation. The workflow was first tested at the Australian site. Here the PV estimates for overstory and understory showed the best performance; however, NPV estimates exhibited spatial variation in validation relationships. At the South American site (Cerrado), an additional method based on frequency unmixing was developed to separate green vegetation components with similar phenology. When the decomposition and frequency methods were compared, the frequency method was better for extracting green tree phenology, but the original decomposition method was better for retrieving understory grass phenology. Both methods, however, were less accurate in the Cerrado than in Australia due to intermingling and intergrading of grass and small woody components. Since African savanna trees are predominantly deciduous, the frequency method was combined with linear unmixing of fractional cover to attempt to separate the relatively similar phenology of deciduous trees and seasonal grasses. The results for Africa revealed limitations associated with both methods. There was spatial and seasonal variation in the spectral indices used to unmix fractional cover, resulting in poor validation for NPV in particular. The frequency analysis revealed significant phase variation indicative of different phenology, but these variations could not be clearly ascribed to separate grass and tree components. Overall, the findings indicate that, given site-specific variation in vegetation structure and composition along with the MODIS pixel resolution, the simple vegetation index approach used was not robust across the different savanna biomes. The approach showed generally better performance for estimating the PV fraction and separating green phenology, but there were major inconsistencies, errors, and biases in the estimation of NPV and BS outside of the Australian savanna environment.
Choi, Sanghun; Hoffman, Eric A; Wenzel, Sally E; Castro, Mario; Lin, Ching-Long
2014-09-15
Lung air trapping is estimated via quantitative computed tomography (CT) using density threshold-based measures on an expiration scan. However, the effects of scanner differences and imaging protocol adherence on quantitative assessment are known to be problematic. This study investigates the effects of protocol differences, such as using different CT scanners and breath-hold coaches in a multicenter asthmatic study, and proposes new methods that can adjust intersite and intersubject variations. CT images of 50 healthy subjects and 42 nonsevere and 52 severe asthmatics at total lung capacity (TLC) and functional residual capacity (FRC) were acquired using three different scanners and two different coaching methods at three institutions. A fraction threshold-based approach based on the corrected Hounsfield unit of air with tracheal density was applied to quantify air trapping at FRC. The new air-trapping method was enhanced by adding a lung-shaped metric at TLC and the lobar ratio of air-volume change between TLC and FRC. The fraction-based air-trapping method is able to collapse air-trapping data of respective populations into distinct regression lines. Relative to a constant value-based clustering scheme, the slope-based clustering scheme shows the improved performance and reduced misclassification rate of healthy subjects. Furthermore, both lung shape and air-volume change are found to be discriminant variables for differentiating among three populations of healthy subjects and nonsevere and severe asthmatics. In conjunction with the lung shape and air-volume change, the fraction-based measure of air trapping enables differentiation of severe asthmatics from nonsevere asthmatics and nonsevere asthmatics from healthy subjects, critical for the development and evaluation of new therapeutic interventions. Copyright © 2014 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Xu, Chunmei; Huang, Fu-yu; Yin, Jian-ling; Chen, Yu-dan; Mao, Shao-juan
2016-10-01
The influence of aberration on the misalignment of an optical system is considered in full, the deficiencies of the Gaussian optics correction method are pointed out, and a correction method for misaligned transmission-type optical systems is proposed based on aberration theory. The variation of single-lens aberration with axial displacement is analyzed, and the aberration effect is defined. On this basis, by calculating the lens adjustment induced by the image-position error and the magnification error, a misalignment correction formula constrained by the aberrations is derived mathematically. Taking a three-lens collimation system as an example, a test is carried out that validates the method and demonstrates its superiority.
Just, Rebecca S; Irwin, Jodi A
2018-05-01
Some of the expected advantages of next generation sequencing (NGS) for short tandem repeat (STR) typing include enhanced mixture detection and genotype resolution via sequence variation among non-homologous alleles of the same length. However, at the same time that NGS methods for forensic DNA typing have advanced in recent years, many caseworking laboratories have implemented or are transitioning to probabilistic genotyping to assist the interpretation of complex autosomal STR typing results. Current probabilistic software programs are designed for length-based data, and were not intended to accommodate sequence strings as the product input. Yet to leverage the benefits of NGS for enhanced genotyping and mixture deconvolution, the sequence variation among same-length products must be utilized in some form. Here, we propose use of the longest uninterrupted stretch (LUS) in allele designations as a simple method to represent sequence variation within the STR repeat regions and facilitate - in the near term - probabilistic interpretation of NGS-based typing results. An examination of published population data indicated that a reference LUS region is straightforward to define for most autosomal STR loci, and that using repeat unit plus LUS length as the allele designator can represent greater than 80% of the alleles detected by sequencing. A proof of concept study performed using freely available probabilistic software demonstrated that the LUS length can be used in allele designations when a program does not require alleles to be integers, and that utilizing sequence information improves interpretation of both single-source and mixed contributor STR typing results as compared to using repeat unit information alone. The LUS concept for allele designation maintains the repeat-based allele nomenclature that will permit backward compatibility to extant STR databases, and the LUS lengths themselves will be concordant regardless of the NGS assay or analysis tools employed. Further, these biologically based, easy-to-derive designations uphold clear relationships between parent alleles and their stutter products, enabling analysis in fully continuous probabilistic programs that model stutter while avoiding the algorithmic complexities that come with string-based searches. Though using repeat unit plus LUS length as the allele designator does not capture variation that occurs outside of the core repeat regions, this straightforward approach would permit the large majority of known STR sequence variation to be used for mixture deconvolution and, in turn, result in more informative mixture statistics in the near term. Ultimately, the method could bridge the gap from current length-based probabilistic systems to facilitate broader adoption of NGS by forensic DNA testing laboratories. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
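To make the LUS idea concrete, here is a small illustration of our own (not code from the paper) that computes the longest uninterrupted stretch of a repeat unit within an STR sequence, so that an allele can be designated by repeat count plus LUS length:

```python
def lus_length(sequence, repeat_unit):
    """Longest uninterrupted stretch (in repeats) of `repeat_unit` in `sequence`."""
    longest = current = 0
    i, k = 0, len(repeat_unit)
    while i + k <= len(sequence):
        if sequence[i:i + k] == repeat_unit:
            current += 1
            longest = max(longest, current)
            i += k           # advance one full repeat
        else:
            current = 0
            i += 1           # slide past the interruption
    return longest

# Hypothetical allele: six TCTA repeats, an interruption, then four more
allele = "TCTA" * 6 + "TCA" + "TCTA" * 4
print(lus_length(allele, "TCTA"))  # -> 6, giving a "10_6"-style designation
```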
Two Novel Methods and Multi-Mode Periodic Solutions for the Fermi-Pasta-Ulam Model
NASA Astrophysics Data System (ADS)
Arioli, Gianni; Koch, Hans; Terracini, Susanna
2005-04-01
We introduce two novel methods for studying periodic solutions of the FPU β-model, both numerically and rigorously. One is a variational approach, based on the dual formulation of the problem, and the other involves computer-assisted proofs. These methods are used e.g. to construct a new type of solutions, whose energy is spread among several modes, associated with closely spaced resonances.
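For reference, the FPU β-model studied here is a chain of unit masses whose nearest-neighbour coupling is quadratic plus quartic; a standard form of the equations of motion (the paper's normalization may differ) is

$$\ddot q_j = (q_{j+1} - 2q_j + q_{j-1}) + \beta\big[(q_{j+1} - q_j)^3 - (q_j - q_{j-1})^3\big], \qquad j = 1, \dots, N,$$

with boundary conditions (periodic or fixed ends) chosen according to the problem. Periodic solutions whose energy is spread over several modes correspond to nontrivial orbits of this coupled system rather than single-mode waves.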
Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya
2017-12-01
We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
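As background, 4DVar estimates an initial state by minimizing a cost function that balances a prior (background) state against observations distributed over a time window; a standard form is

$$J(x_0) = \tfrac{1}{2}(x_0 - x_b)^{\mathsf T} B^{-1}(x_0 - x_b) + \tfrac{1}{2}\sum_{k=0}^{K}\big(H_k(x_k) - y_k\big)^{\mathsf T} R_k^{-1}\big(H_k(x_k) - y_k\big), \qquad x_k = M_{0\to k}(x_0),$$

where $x_b$ is the background state, $B$ and $R_k$ are background and observation error covariances, $H_k$ maps the model state to the observations $y_k$, and $M_{0\to k}$ is the forward model, here the multi-phase-field evolution. The paper's specific operators and covariance choices are not given in the abstract.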
NASA Astrophysics Data System (ADS)
Chunguang, L.; Qingjiu, T.
2012-07-01
Based on MODIS remote sensing data, a method for extracting the spatial and temporal distribution of algae blooms is studied and established. With this extraction method, the spatio-temporal dynamics of blooms in Taihu Lake from 2009 to 2011 are obtained, and the variation of cyanobacterial blooms in the lake is analyzed and discussed. The algae bloom frequency index (AFI) and algae bloom sustainability index (ASI) are important criteria that capture interannual and inter-monthly variation over the whole lake or its subregions. Applying the AFI and ASI to the 2009-2011 data reveals several phenomena: bloom frequency decreases from the north and west toward the east and south of Taihu Lake, and the monthly variation of AFI shows twin peaks at high levels and a general lagging trend. In the subregion statistics, the IBD and ASI in 2011 show an abnormal condition at the border between Gongshan Bay and the Central Lake, where the bloom date is clearly earlier than in the same subregion in previous years and in other subregions in the same year.
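The abstract does not give formulas for the AFI or ASI. As one simple reading, a per-pixel bloom frequency can be computed from a stack of binary bloom detections; the function and variable names below are ours:

```python
import numpy as np

def bloom_frequency_index(bloom_masks):
    """Per-pixel fraction of scenes in which a bloom was detected.

    bloom_masks: array of shape (n_scenes, rows, cols), with 1 where a
    MODIS scene was flagged as algae bloom and 0 elsewhere.
    """
    masks = np.asarray(bloom_masks, dtype=float)
    return masks.mean(axis=0)

# A subregion AFI could then be the mean over that subregion's pixels:
# afi_region = bloom_frequency_index(masks)[region_mask].mean()
```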
σ-SCF: A direct energy-targeting method to mean-field excited states
NASA Astrophysics Data System (ADS)
Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D.; Van Voorhis, Troy
2017-12-01
The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry—a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states—ground or excited—are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.
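Consistent with the description above, the quantity minimized by such an energy-targeting, variance-based scheme can be written (in our notation, which may differ from the paper's) as

$$W[\Phi; \omega] = \frac{\langle \Phi | (\hat H - \omega)^2 | \Phi \rangle}{\langle \Phi | \Phi \rangle},$$

whose unconstrained minimization over single determinants $\Phi$ lands on the mean-field state with energy closest to the chosen target $\omega$, which is why no aufbau constraint or symmetry restriction is needed to avoid variational collapse.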
Guo, Junbin; Wang, Jianqiang; Guo, Xiaosong; Yu, Chuanqiang; Sun, Xiaoyan
2014-01-01
Preceding vehicle detection and tracking at nighttime are challenging problems due to the disturbance of other extraneous illuminant sources coexisting with the vehicle lights. To improve the accuracy and robustness of vehicle detection, a novel method for vehicle detection and tracking at nighttime is proposed in this paper. The characteristics of taillights in the gray level are applied to determine the lower boundary of the threshold for taillight segmentation, and the optimal threshold for taillight segmentation is calculated using the OTSU algorithm between the lower boundary and the highest grayscale of the region of interest. The candidate taillight pairs are extracted based on the similarity between left and right taillights, and the non-vehicle taillight pairs are removed based on the relevance analysis of vehicle location between frames. To reduce the false negative rate of vehicle detection, a vehicle tracking method based on taillight estimation is applied. The taillight spot candidate is sought in the region predicted by Kalman filtering, and the disturbed taillight is estimated based on the symmetry and location of the other taillight of the same vehicle. Vehicle tracking is completed after estimating its location according to the two taillight spots. The results of experiments on a vehicle platform indicate that the proposed method can detect vehicles quickly, correctly and robustly in actual traffic environments with illumination variation. PMID:25195855
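A sketch of the bounded Otsu step described above: a standard between-class-variance search restricted to gray levels at or above a lower boundary. The function name and interface are our own; the taillight-derived lower bound is simply passed in as `lower`.

```python
import numpy as np

def otsu_in_range(gray, lower, upper=255):
    """Otsu threshold computed only over gray levels in [lower, upper]."""
    hist = np.bincount(gray.ravel(), minlength=256)[lower:upper + 1].astype(float)
    levels = np.arange(lower, upper + 1)
    total, sum_all = hist.sum(), (hist * levels).sum()
    best_t, best_var, w0, sum0 = lower, -1.0, 0.0, 0.0
    for i, t in enumerate(levels[:-1]):
        w0 += hist[i]
        sum0 += hist[i] * t
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

Here `gray` is assumed to be an 8-bit image, e.g., the region of interest around detected bright spots.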
NASA Astrophysics Data System (ADS)
Irbah, A.; Damé, L.; Meftah, M.; Bekki, S.; Bolsée, D.
2017-12-01
The solar spectral irradiance (SSI) and its temporal variations are of prime importance for understanding the physics of the Sun and its effects on Earth's climate through changes in atmospheric properties. Ground-based measurements of SSI are affected by the Earth's atmosphere, and space observations are therefore required to perform adequate measurements. Only a few long series of SSI space measurements have been obtained in recent decades. The SOLSPEC instrument of the SOLAR payload on the International Space Station (ISS) recorded one of them from April 2008 to February 2017, covering almost the whole of solar cycle 24. The instrument is a spectro-radiometer recording data of the Sun from 166 to 3088 nm. Operated from the ISS in a harsh environment, it required appropriate processing methods to extract significant scientific results from noise and instrumental effects. We present the methods used to process the data to reveal visible SSI variations during cycle 24. We discuss the results obtained, which show SSI variations in phase with solar activity, and compare them with SORCE/SIM measurements.
FIT: statistical modeling tool for transcriptome dynamics under fluctuating field conditions
Iwayama, Koji; Aisaka, Yuri; Kutsuna, Natsumaro
2017-01-01
Motivation: Considerable attention has been given to the quantification of environmental effects on organisms. In natural conditions, environmental factors are continuously changing in a complex manner. To reveal the effects of such environmental variations on organisms, transcriptome data in field environments have been collected and analyzed. Nagano et al. proposed a model that describes the relationship between transcriptomic variation and environmental conditions and demonstrated the capability to predict transcriptome variation in rice plants. However, the computational cost of parameter optimization has prevented its wide application. Results: We propose a new statistical model and efficient parameter optimization based on the previous study. We developed and released FIT, an R package that offers functions for parameter optimization and transcriptome prediction. The proposed method achieves comparable or better prediction performance within a shorter computational time than the previous method. The package will facilitate the study of the environmental effects on transcriptomic variation in field conditions. Availability and Implementation: Freely available from CRAN (https://cran.r-project.org/web/packages/FIT/). Contact: anagano@agr.ryukoku.ac.jp. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28158396
NASA Astrophysics Data System (ADS)
Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai
2016-03-01
Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The introduction of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
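Although the abstract does not state the exact objective, a TV-regularized Fourier-domain reconstruction of this kind generically solves

$$\hat x = \arg\min_x \ \tfrac{1}{2}\,\big\| \mathcal F_{\mathrm{nu}}\, x - y \big\|_2^2 + \lambda\, \mathrm{TV}(x),$$

where $\mathcal F_{\mathrm{nu}}$ is the NUFFT mapping the image to non-uniform Fourier samples derived from the fan-beam projections, $y$ is the measured data, and $\lambda$ sets the regularization strength; the iteration applies $\mathcal F_{\mathrm{nu}}$ and its adjoint to move between the Fourier and image spaces.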
Generalising Ward's Method for Use with Manhattan Distances.
Strauss, Trudie; von Maltitz, Michael Johan
2017-01-01
The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1 norm, or Manhattan distance. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound and provide an example where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using the Euclidean distance and the Manhattan distance. Results obtained from the different distance metrics are compared to show that Ward's algorithm characteristic of minimising intra-cluster variation and maximising inter-cluster variation is not violated when using the Manhattan metric.
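The generalisation can be exercised directly in SciPy, which applies the Lance-Williams Ward update to whatever condensed distance matrix it is given; the bi-gram signatures below are random stand-ins for the real language data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Toy signatures: 32 "languages" x 676 relative bi-gram frequencies
rng = np.random.default_rng(0)
signatures = rng.dirichlet(np.ones(26 * 26), size=32)

# Ward linkage on Manhattan (cityblock) distances; SciPy's Ward update
# is defined for Euclidean input but runs the same recurrence on any
# precomputed distances, which is the generalisation argued for here.
d_manhattan = pdist(signatures, metric="cityblock")
tree = linkage(d_manhattan, method="ward")
labels = fcluster(tree, t=5, criterion="maxclust")  # five language clusters
```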
Mining sequence variations in representative polyploid sugarcane germplasm accessions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiping; Song, Jian; You, Qian
Sugarcane (Saccharum spp.) is one of the most important economic crops because of its high sugar production and biofuel potential. Due to the high polyploid level and complex genome of sugarcane, it has been a huge challenge to investigate genomic sequence variations, which are critical for identifying alleles contributing to important agronomic traits. In order to mine the genetic variations in sugarcane, genotyping by sequencing (GBS) was used to genotype 14 representative Saccharum complex accessions. GBS is a method to generate a large number of markers, enabled by next generation sequencing (NGS) and the genome complexity reduction using restriction enzymes. To use GBS for high throughput genotyping highly polyploid sugarcane, the GBS analysis pipelines in 14 Saccharum complex accessions were established by evaluating different alignment methods, sequence variant callers, and sequence depth for single nucleotide polymorphism (SNP) filtering. By using the established pipeline, a total of 76,251 non-redundant SNPs, 5642 InDels, 6380 presence/absence variants (PAVs), and 826 copy number variations (CNVs) were detected among the 14 accessions. In addition, non-reference based universal network enabled analysis kit and Stacks de novo called 34,353 and 109,043 SNPs, respectively. In the 14 accessions, the percentages of single dose SNPs ranged from 38.3% to 62.3% with an average of 49.6%, much more than the portions of multiple dosage SNPs. Concordantly called SNPs were used to evaluate the phylogenetic relationship among the 14 accessions. The results showed that the divergence time between the Erianthus genus and the Saccharum genus was more than 10 million years ago (MYA). The Saccharum species separated from their common ancestors ranging from 0.19 to 1.65 MYA. The GBS pipelines including the reference sequences, alignment methods, sequence variant callers, and sequence depth were recommended and discussed for the Saccharum complex and other related species. A large number of sequence variations were discovered in the Saccharum complex, including SNPs, InDels, PAVs, and CNVs. Genome-wide SNPs were further used to illustrate sequence features of polyploid species and demonstrated the divergence of different species in the Saccharum complex. The results of this study showed that GBS was an effective NGS-based method to discover genomic sequence variations in highly polyploid and heterozygous species.
NASA Astrophysics Data System (ADS)
Gao, Yuan; Ma, Jiayi; Yuille, Alan L.
2017-05-01
This paper addresses the problem of face recognition when there are only a few, or even a single, labeled examples of the face that we wish to recognize. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables such as bad lighting or wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem we propose a method called Semi-Supervised Sparse Representation based Classification (S3RC). This is based on recent work on sparsity where faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions, different glasses). The main idea is that (i) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, then (ii) prototype face images are estimated as a gallery dictionary via a Gaussian Mixture Model (GMM), with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have done experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method is able to deliver significantly improved performance over existing methods.
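Schematically (our notation, not the paper's), the gallery-plus-variation sparse model represents a test face $\mathbf y$ as

$$\mathbf y = D\boldsymbol\alpha + V\boldsymbol\beta + \mathbf e, \qquad \min_{\boldsymbol\alpha,\, \boldsymbol\beta}\ \tfrac{1}{2}\,\|\mathbf y - D\boldsymbol\alpha - V\boldsymbol\beta\|_2^2 + \lambda\big(\|\boldsymbol\alpha\|_1 + \|\boldsymbol\beta\|_1\big),$$

where $D$ is the (GMM-estimated) gallery dictionary, $V$ the linear variation dictionary, and $\mathbf e$ a small residual; classification assigns $\mathbf y$ to the class whose gallery atoms give the smallest reconstruction residual.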
Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations
NASA Astrophysics Data System (ADS)
Mirloo, Mahsa; Ebrahimnezhad, Hosein
2018-03-01
In this paper, a novel method is proposed to detect salient points of a 3D object that is robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points of an object's protrusion parts to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, in each iteration, a new point is added to the set according to the previously selected salient points, and the decision function is updated with every added point. This creates a condition under which the next point is not extracted from the same protrusion part, guaranteeing that a representative point is drawn from every protrusion part. The method is stable against model variations under isometric transformations, scaling, and noise of different strengths because it uses a feature robust to isometric variations and considers the relation between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity compared with other salient point detection algorithms.
Zhang, Peng; Li, Houqiang; Wang, Honghui; Wong, Stephen T C; Zhou, Xiaobo
2011-01-01
Peak detection is one of the most important steps in mass spectrometry (MS) analysis. However, the detection result is greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment on a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common peak alignment into a closed-loop framework, which finds the optimal decomposition via both peak intensity and common peak information. The common peak information is derived and loopily refined from the density clustering of the latest peak detection result. Finally, we present an improved ant colony optimization biomarker selection method to build a whole MS analysis system. Experiment shows that our peak detection method can better resist spectrum variations and provide higher sensitivity and lower false detection rates than conventional methods. The benefits from our peak-tree-based system for MS disease analysis are also proved on real SELDI data.
Krempa, Heather M.
2015-10-29
Relative percent differences between methods were greater than 10 percent for most analyzed trace elements. Barium, cobalt, manganese, and boron had concentrations that differed significantly between sampling methods. Barium, molybdenum, boron, and uranium concentrations indicate a close association between pump and grab samples based on bivariate plots and simple linear regressions. Grab-sample concentrations were generally larger than pump concentrations for these elements, possibly because a larger pore-size filter was used for grab samples. Analysis of zinc blank samples suggests zinc contamination in filtered grab samples. Variation in analyzed trace elements between pump and grab samples could reduce the ability to monitor temporal changes and potential groundwater contamination threats. The degree of precision necessary for monitoring potential groundwater threats, along with application objectives, needs to be considered when determining acceptable amounts of variation.
Matrix effect and recovery terminology issues in regulated drug bioanalysis.
Huang, Yong; Shi, Robert; Gee, Winnie; Bonderud, Richard
2012-02-01
Understanding the meaning of the terms used in the bioanalytical method validation guidance is essential for practitioners to implement best practice. However, terms that have several meanings or that have different interpretations exist within bioanalysis, and this may give rise to differing practices. In this perspective we discuss an important but often confusing term - 'matrix effect (ME)' - in regulated drug bioanalysis. The ME can be interpreted as either the ionization change or the measurement bias of the method caused by the nonanalyte matrix. The ME definition dilemma makes its evaluation challenging. The matrix factor is currently used as a standard method for evaluation of ionization changes caused by the matrix in MS-based methods. Standard additions to pre-extraction samples have been suggested to evaluate the overall effects of a matrix from different sources on the analytical system, because they cover both ionization variation and extraction recovery variation. We also provide our personal views on the term 'recovery'.
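For reference, the matrix factor mentioned above is commonly computed, per analyte and per matrix lot, as

$$\mathrm{MF} = \frac{\text{analyte peak response in post-extraction spiked matrix}}{\text{analyte peak response in neat solution}}, \qquad \mathrm{MF}_{\mathrm{IS\text{-}normalized}} = \frac{\mathrm{MF}_{\text{analyte}}}{\mathrm{MF}_{\text{internal standard}}},$$

so a value near 1 indicates little ionization suppression or enhancement; as the authors note, this captures the ionization change only, not extraction recovery.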
Detection of nucleic acids by multiple sequential invasive cleavages
Hall, Jeff G.; Lyamichev, Victor I.; Mast, Andrea L.; Brow, Mary Ann D.
1999-01-01
The present invention relates to means for the detection and characterization of nucleic acid sequences, as well as variations in nucleic acid sequences. The present invention also relates to methods for forming a nucleic acid cleavage structure on a target sequence and cleaving the nucleic acid cleavage structure in a site-specific manner. The structure-specific nuclease activity of a variety of enzymes is used to cleave the target-dependent cleavage structure, thereby indicating the presence of specific nucleic acid sequences or specific variations thereof. The present invention further relates to methods and devices for the separation of nucleic acid molecules based on charge. The present invention also provides methods for the detection of non-target cleavage products via the formation of a complete and activated protein binding region. The invention further provides sensitive and specific methods for the detection of human cytomegalovirus nucleic acid in a sample.
Hall, Jeff G.; Lyamichev, Victor I.; Mast, Andrea L.; Brow, Mary Ann; Kwiatkowski, Robert W.; Vavra, Stephanie H.
2005-03-29
The present invention relates to means for the detection and characterization of nucleic acid sequences, as well as variations in nucleic acid sequences. The present invention also relates to methods for forming a nucleic acid cleavage structure on a target sequence and cleaving the nucleic acid cleavage structure in a site-specific manner. The structure-specific nuclease activity of a variety of enzymes is used to cleave the target-dependent cleavage structure, thereby indicating the presence of specific nucleic acid sequences or specific variations thereof. The present invention further relates to methods and devices for the separation of nucleic acid molecules based on charge. The present invention also provides methods for the detection of non-target cleavage products via the formation of a complete and activated protein binding region. The invention further provides sensitive and specific methods for the detection of nucleic acid from various viruses in a sample.
Detection of nucleic acids by multiple sequential invasive cleavages 02
Hall, Jeff G.; Lyamichev, Victor I.; Mast, Andrea L.; Brow, Mary Ann D.
2002-01-01
The present invention relates to means for the detection and characterization of nucleic acid sequences, as well as variations in nucleic acid sequences. The present invention also relates to methods for forming a nucleic acid cleavage structure on a target sequence and cleaving the nucleic acid cleavage structure in a site-specific manner. The structure-specific nuclease activity of a variety of enzymes is used to cleave the target-dependent cleavage structure, thereby indicating the presence of specific nucleic acid sequences or specific variations thereof. The present invention further relates to methods and devices for the separation of nucleic acid molecules based on charge. The present invention also provides methods for the detection of non-target cleavage products via the formation of a complete and activated protein binding region. The invention further provides sensitive and specific methods for the detection of human cytomegalovirus nucleic acid in a sample.
Detection of nucleic acids by multiple sequential invasive cleavages
Hall, Jeff G; Lyamichev, Victor I; Mast, Andrea L; Brow, Mary Ann D
2012-10-16
The present invention relates to means for the detection and characterization of nucleic acid sequences, as well as variations in nucleic acid sequences. The present invention also relates to methods for forming a nucleic acid cleavage structure on a target sequence and cleaving the nucleic acid cleavage structure in a site-specific manner. The structure-specific nuclease activity of a variety of enzymes is used to cleave the target-dependent cleavage structure, thereby indicating the presence of specific nucleic acid sequences or specific variations thereof. The present invention further relates to methods and devices for the separation of nucleic acid molecules based on charge. The present invention also provides methods for the detection of non-target cleavage products via the formation of a complete and activated protein binding region. The invention further provides sensitive and specific methods for the detection of human cytomegalovirus nucleic acid in a sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guanglei, E-mail: guangleizhang@bjtu.edu.cn; Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044; Pu, Huangsheng
2015-02-23
Images of pharmacokinetic parameters (also known as parametric images) in dynamic fluorescence molecular tomography (FMT) can provide three-dimensional metabolic information for biological studies and drug development. However, the ill-posed nature of FMT and the high temporal variation of fluorophore concentration together make it difficult to obtain accurate parametric images in small animals in vivo. In this letter, we present a method to directly reconstruct the parametric images from the boundary measurements based on a hybrid FMT/X-ray computed tomography (XCT) system. This method can not only utilize structural priors obtained from the XCT system to mitigate the ill-posedness of FMT but also make full use of the temporal correlations of boundary measurements to model the high temporal variation of fluorophore concentration. The results of numerical simulations and a mouse experiment demonstrate that the proposed method leads to significant improvements in the reconstruction quality of parametric images.
Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition
NASA Astrophysics Data System (ADS)
Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo
2018-04-01
In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of the characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoised DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.
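The abstract leaves the adaptive selection rule unspecified; below is a sketch of one mutual-information-based selection over precomputed BLIMFs, where the threshold rule, bin count, and names are our assumptions:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def select_by_mutual_info(phase_map, blimfs, bins=64, keep_ratio=0.5):
    """Keep the BLIMFs most informative about the noisy phase map and
    reconstruct the denoised map from them."""
    def mi(a, b):
        # Histogram-based mutual information between two images
        c_xy = np.histogram2d(a.ravel(), b.ravel(), bins=bins)[0]
        return mutual_info_score(None, None, contingency=c_xy)

    scores = np.array([mi(phase_map, m) for m in blimfs])
    keep = scores >= keep_ratio * scores.max()  # simple adaptive cutoff
    return np.asarray(blimfs)[keep].sum(axis=0), scores
```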
Ghasemzadeh, I; Aghamolaei, T; Hosseini-Parandar, F
2015-01-01
Introduction: In recent years, medical education has changed dramatically and many medical schools around the world have been trying to expand modern training methods. The purpose of this research was to assess medical students' appraisal of teacher-based and student-based teaching methods in the Infectious diseases course at the Medical School of Hormozgan Medical Sciences University. Methods: In this interventional study, a total of 52 medical students taking the Infectious diseases course were included. About 50% of the course was presented by a teacher-based teaching method (lecture) and 50% by a student-based teaching method (problem-based learning). Student satisfaction with these methods was assessed by a questionnaire, and a test was used to measure their learning. Data were analyzed using SPSS 19 and paired t-tests. Results: Students were more satisfied with the student-based teaching method (problem-based learning) than with the teacher-based teaching method (lecture). The mean score of students under the teacher-based teaching method was 12.03 (SD=4.08) and under the student-based teaching method 15.50 (SD=4.26), a significant difference (p<0.001). Conclusion: The use of the student-based teaching method (problem-based learning) in comparison with the teacher-based teaching method (lecture) to present the Infectious diseases course led to greater student satisfaction and provided additional learning opportunities.
Singh, Akanksha; Sharma, Vinay; Dikshit, Harsh Kumar; Aski, Muraleedhar; Kumar, Harish; Thirunavukkarasu, Nepolean; Patil, Basavanagouda S.; Kumar, Shiv; Sarker, Ashutosh
2017-01-01
Lentil is a major cool-season grain legume grown in South Asia, West Asia, and North Africa. Populations in developing countries of these regions have micronutrient deficiencies; therefore, breeding programs should focus more on improving the micronutrient content of food. In the present study, a set of 96 diverse germplasm lines were evaluated at three different locations in India to examine the variation in iron (Fe) and zinc (Zn) concentration and identify simple sequence repeat (SSR) markers that associate with the genetic variation. The genetic variation among genotypes of the association mapping (AM) panel was characterized using a genetic distance-based and a general model-based clustering method. The model-based analysis identified six subpopulations, which satisfactorily explained the genetic structure of the AM panel. AM analysis identified three SSRs (PBALC 13, PBALC 206, and GLLC 563) associated with grain Fe concentration, explaining 9% to 11% of phenotypic variation, and four SSRs (PBALC 353, SSR 317–1, PLC 62, and PBALC 217) associated with grain Zn concentration, explaining 14% to 21% of phenotypic variation. These identified SSRs exhibited consistent performance across locations. These candidate SSRs can be used in marker-assisted genetic improvement for developing Fe and Zn fortified lentil varieties. Favorable alleles and promising genotypes identified in this study can be utilized for lentil biofortification. PMID:29161321
Rock, Cassandra; Shamlou, Parviz Ayazi; Levy, M. Susana
2003-01-01
A method is described for high-throughput monitoring of DNA backbone integrity in plasmids and artificial chromosomes in solution. The method is based on the denaturation properties of double-stranded DNA in alkaline conditions and uses PicoGreen fluorochrome to monitor denaturation. In the present method, fluorescence enhancement of PicoGreen at pH 12.4 is normalised by its value at pH 8 to give a ratio that is proportional to the average backbone integrity of the DNA molecules in the sample. A good regression fit (r2 > 0.98) was obtained when results derived from the present method and those derived from agarose gel electrophoresis were compared. Spiking experiments indicated that the method is sensitive enough to detect a proportion of 6% (v/v) molecules with an average of less than two breaks per molecule. Under manual operation, validation parameters such as inter-assay and intra-assay variation gave values of <5% coefficient of variation. Automation of the method showed equivalence to the manual procedure with high reproducibility and low variability within wells. The method described requires as little as 0.5 ng of DNA per well, and a 96-well microplate can be analysed in 12 min, providing an attractive option for analysis of high molecular weight vectors. A preparation of a 116 kb bacterial artificial chromosome was subjected to chemical and shear degradation and DNA integrity was tested using the method. Good correlation was obtained between fluorescence response and both time of chemical degradation and shear rate. Results obtained from pulsed-field electrophoresis of sheared samples were in agreement with those obtained using the microplate-based method. PMID:12771229
CoNVaQ: a web tool for copy number variation-based association studies.
Larsen, Simon Jonas; do Canto, Luisa Matos; Rogatto, Silvia Regina; Baumbach, Jan
2018-05-18
Copy number variations (CNVs) are large segments of the genome that are duplicated or deleted. Structural variations in the genome have been linked to many complex diseases. Similar to how genome-wide association studies (GWAS) have helped discover single-nucleotide polymorphisms linked to disease phenotypes, the extension of GWAS to CNVs has aided the discovery of structural variants associated with human traits and diseases. We present CoNVaQ, an easy-to-use web-based tool for CNV-based association studies. The web service allows users to upload two sets of CNV segments and search for genomic regions where the occurrence of CNVs is significantly associated with the phenotype. CoNVaQ provides two models: a simple statistical model using Fisher's exact test and a novel query-based model matching regions to user-defined queries. For each region, the method computes a global q-value statistic by repeated permutation of samples among the populations. We demonstrate our platform by using it to analyze a data set of HPV-positive and HPV-negative penile cancer patients. CoNVaQ provides a simple workflow for performing CNV-based association studies. It is made available as a web platform in order to provide a user-friendly workflow for biologists and clinicians to carry out CNV data analysis without installing any software. Through the web interface, users are also able to analyze their results to find overrepresented GO terms and pathways. In addition, our method is also available as a package for the R programming language. CoNVaQ is available at https://convaq.compbio.sdu.dk.
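The Fisher-based model can be reproduced for a single region with a 2x2 contingency table; the counts in the example are hypothetical:

```python
from scipy.stats import fisher_exact

def region_cnv_test(case_with_cnv, control_with_cnv, n_cases, n_controls):
    """Does CNV occurrence in this genomic region differ by phenotype?"""
    table = [
        [case_with_cnv, n_cases - case_with_cnv],
        [control_with_cnv, n_controls - control_with_cnv],
    ]
    odds_ratio, p_value = fisher_exact(table)
    return odds_ratio, p_value

# e.g. a region deleted in 14 of 20 HPV-positive vs 3 of 22 HPV-negative samples
print(region_cnv_test(14, 3, 20, 22))
```

CoNVaQ's global q-values additionally involve repeated permutation of sample labels among the populations, which is omitted here.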
NASA Astrophysics Data System (ADS)
Kajiwara, Itsuro; Furuya, Keiichiro; Ishizuka, Shinichi
2018-07-01
Model-based controllers with adaptive design variables are often used to control an object with time-dependent characteristics. However, the controller's performance is influenced by many factors such as modeling accuracy and fluctuations in the object's characteristics. One method to overcome these negative factors is to tune model-based controllers. Herein we propose an online tuning method to maintain control performance for an object that exhibits time-dependent variations. The proposed method employs the poles of the controller as design variables because the poles significantly impact performance. Specifically, we use the simultaneous perturbation stochastic approximation (SPSA) to optimize a model-based controller with multiple design variables. Moreover, a vibration control experiment of an object with time-dependent characteristics as the temperature is varied demonstrates that the proposed method allows adaptive control and stably maintains the closed-loop characteristics.
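A minimal SPSA loop of the kind described, with the controller poles as design variables; the gain schedules use Spall's conventional exponents, and `cost` stands in for a measured performance index (e.g., vibration level):

```python
import numpy as np

def spsa_tune(poles, cost, a=0.05, c=0.1, iters=200, seed=0):
    """Tune a pole vector by simultaneous perturbation stochastic approximation."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(poles, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                                # step-size schedule
        ck = c / k ** 0.101                                # perturbation-size schedule
        delta = rng.choice([-1.0, 1.0], size=theta.size)   # Bernoulli +/- 1
        # Two-sided gradient estimate from just two cost evaluations
        g_hat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta
```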
Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo
2016-12-13
In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised component reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small pipeline leaks. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performance than support vector machine (SVM) and back propagation neural network (BP) methods.
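A discrete, narrowband sketch of the ambiguity-function step (our formulation; the paper's windowing and the reference signatures used for classification are not specified in the abstract):

```python
import numpy as np

def ambiguity_image(signal, max_lag=64):
    """|A(tau, nu)|: FFT over time of the lag products s(t) * conj(s(t - tau)).

    np.roll gives a cyclic shift, an approximation adequate for signals
    much longer than max_lag.
    """
    s = np.asarray(signal, dtype=complex)
    rows = [np.abs(np.fft.fft(s * np.conj(np.roll(s, lag))))
            for lag in range(-max_lag, max_lag + 1)]
    return np.array(rows)

def ambiguity_correlation(img_a, img_b):
    """Correlation coefficient between two ambiguity images; a classifier
    would threshold this score against leak / no-leak references."""
    return np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]
```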
Ter Braak, Cajo J F; Peres-Neto, Pedro; Dray, Stéphane
2017-01-01
Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community weighted trait means method (CWM), the fourth-corner correlation method and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance by the larger of the two p-values (the pmax test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the pmax test using GLM instead of fourth-corner. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error, of up to 90%, when the nominal level is set as 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an "omitted variable bias" problem as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based pmax test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. For screening the importance of single trait-environment combinations, the fourth-corner test is shown to give almost the same results as the GLM-based tests in far less computing time.
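A sketch of the pmax procedure for a generic association statistic (the statistic itself, e.g. a fourth-corner correlation, is left abstract here):

```python
import numpy as np

def pmax_test(stat_fn, trait, env, abundance, n_perm=999, seed=0):
    """Max of a site-based and a species-based permutation p-value.

    stat_fn(trait, env, abundance) -> scalar association statistic;
    rows of `abundance` are species (carrying `trait`), columns are
    sites (carrying `env`).
    """
    rng = np.random.default_rng(seed)
    obs = abs(stat_fn(trait, env, abundance))

    def perm_p(permute_sites):
        exceed = 1  # +1 correction counts the observed statistic itself
        for _ in range(n_perm):
            if permute_sites:
                s = abs(stat_fn(trait, rng.permutation(env), abundance))
            else:
                s = abs(stat_fn(rng.permutation(trait), env, abundance))
            exceed += s >= obs
        return exceed / (n_perm + 1)

    return max(perm_p(True), perm_p(False))
```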
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell
2012-01-01
Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…
Mitigating component performance variation
Gara, Alan G.; Sylvester, Steve S.; Eastep, Jonathan M.; Nagappan, Ramkumar; Cantalupo, Christopher M.
2018-01-09
Apparatus and methods may provide for characterizing a plurality of similar components of a distributed computing system based on a maximum safe operation level associated with each component and storing characterization data in a database and allocating non-uniform power to each similar component based at least in part on the characterization data in the database to substantially equalize performance of the components.
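One simple reading of the allocation step, with made-up names and a proportional rule that the patent text does not spell out:

```python
def allocate_power(max_safe_levels, budget):
    """Split a power budget across components in proportion to each
    component's characterized maximum safe operating level."""
    total = sum(max_safe_levels.values())
    return {c: budget * level / total for c, level in max_safe_levels.items()}

# e.g. allocate_power({"node0": 95.0, "node1": 105.0}, budget=200.0)
# -> {'node0': 95.0, 'node1': 105.0}
```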
Noecker, Cecilia; Eng, Alexander; Srinivasan, Sujatha; Theriot, Casey M; Young, Vincent B; Jansson, Janet K; Fredricks, David N; Borenstein, Elhanan
2016-01-01
Multiple molecular assays now enable high-throughput profiling of the ecology, metabolic capacity, and activity of the human microbiome. However, to date, analyses of such multi-omic data typically focus on statistical associations, often ignoring extensive prior knowledge of the mechanisms linking these various facets of the microbiome. Here, we introduce a comprehensive framework to systematically link variation in metabolomic data with community composition by utilizing taxonomic, genomic, and metabolic information. Specifically, we integrate available and inferred genomic data, metabolic network modeling, and a method for predicting community-wide metabolite turnover to estimate the biosynthetic and degradation potential of a given community. Our framework then compares variation in predicted metabolic potential with variation in measured metabolites' abundances to evaluate whether community composition can explain observed shifts in the community metabolome, and to identify key taxa and genes contributing to the shifts. Focusing on two independent vaginal microbiome data sets, each pairing 16S community profiling with large-scale metabolomics, we demonstrate that our framework successfully recapitulates observed variation in 37% of metabolites. Well-predicted metabolite variation tends to result from disease-associated metabolism. We further identify several disease-enriched species that contribute significantly to these predictions. Interestingly, our analysis also detects metabolites for which the predicted variation negatively correlates with the measured variation, suggesting environmental control points of community metabolism. Applying this framework to gut microbiome data sets reveals similar trends, including prediction of bile acid metabolite shifts. This framework is an important first step toward a system-level multi-omic integration and an improved mechanistic understanding of the microbiome activity and dynamics in health and disease. Studies characterizing both the taxonomic composition and metabolic profile of various microbial communities are becoming increasingly common, yet new computational methods are needed to integrate and interpret these data in terms of known biological mechanisms. Here, we introduce an analytical framework to link species composition and metabolite measurements, using a simple model to predict the effects of community ecology on metabolite concentrations and evaluating whether these predictions agree with measured metabolomic profiles. We find that a surprisingly large proportion of metabolite variation in the vaginal microbiome can be predicted based on species composition (including dramatic shifts associated with disease), identify putative mechanisms underlying these predictions, and evaluate the roles of individual bacterial species and genes. Analysis of gut microbiome data using this framework recovers similar community metabolic trends. This framework lays the foundation for model-based multi-omic integrative studies, ultimately improving our understanding of microbial community metabolism.
Forecasting seasonal hydrologic response in major river basins
NASA Astrophysics Data System (ADS)
Bhuiyan, A. M.
2014-05-01
Seasonal precipitation variation due to natural climate variability influences stream flow and the apparent frequency and severity of extreme hydrological conditions such as floods and droughts. To study hydrologic response and understand the occurrence of extreme hydrological events, the relevant forcing variables must be identified. This study attempts to assess and quantify the historical occurrence and context of extreme hydrologic flow events and to quantify the relation between the relevant climate variables. Once identified, the flow data and climate variables are evaluated to identify the primary relationship indicators of hydrologic extreme event occurrence. Existing studies focus on developing basin-scale forecasting techniques based on climate anomalies in El Nino/La Nina episodes linked to global climate. Building on earlier work, the goal of this research is to quantify variations in historical river flows at the seasonal temporal scale and at regional to continental spatial scales. The work identifies and quantifies runoff variability of major river basins and correlates flow with environmental forcing variables such as El Nino, La Nina, and the sunspot cycle. These variables are expected to be the primary external natural indicators of inter-annual and inter-seasonal patterns of regional precipitation and river flow. Relations between continental-scale hydrologic flows and external climate variables are evaluated through direct correlations in a seasonal context with environmental phenomena such as sunspot numbers (SSN), the Southern Oscillation Index (SOI), and the Pacific Decadal Oscillation (PDO). Methods including stochastic time series analysis and artificial neural networks are developed to represent the seasonal variability evident in the historical records of river flows. River flows are categorized into low, average and high flow levels to evaluate and simulate flow variations under associated climate variable variations. Results demonstrated that no single method is best suited to represent scenarios leading to extreme flow conditions. For selected flow scenarios, the persistence model performance may be comparable to more complex multivariate approaches, and complex methods did not always improve flow estimation. Overall model performance indicates that inclusion of river flows and forcing variables on average improves model extreme-event forecasting skill. As a means to further refine the flow estimation, an ensemble forecast method is implemented to provide a likelihood-based indication of expected river flow magnitude and variability. Results indicate that seasonal flow variations are well captured in the ensemble range; therefore the ensemble approach can often prove effective in estimating extreme river flow conditions. The discriminant prediction approach, a probabilistic measure to forecast streamflow, is also adopted to assess model performance. Results show the efficiency of the method in terms of representing uncertainties in the forecasts.
Zhou, Yong; Liang, Jinyang; Maslov, Konstantin I.; Wang, Lihong V.
2013-01-01
We propose a cross-correlation-based method to measure blood flow velocity by using photoacoustic microscopy. Unlike in previous auto-correlation-based methods, the measured flow velocity here is independent of particle size. Thus, an absolute flow velocity can be obtained without calibration. We first measured the flow velocity ex vivo, using defibrinated bovine blood. Then, flow velocities in vessels with different structures in a mouse ear were quantified in vivo. We further measured the flow variation in the same vessel and at a vessel bifurcation. All the experimental results indicate that our method can be used to accurately quantify blood velocity in vivo. PMID:24081077
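The core of this measurement is simple to sketch: the transit time of absorbers between two positions a known distance apart along the vessel is the lag of the peak of the cross-correlation between the two signal traces, and velocity is distance over transit time. A minimal NumPy illustration (function and variable names are ours, not the authors'):

import numpy as np

def flow_velocity(sig_a, sig_b, separation_m, sample_rate_hz):
    # sig_a, sig_b: signal traces at an upstream and a downstream position
    # separated by separation_m along the vessel (hypothetical inputs).
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xcorr = np.correlate(b, a, mode="full")      # full cross-correlation
    lag = np.argmax(xcorr) - (len(a) - 1)        # peak lag in samples; positive if b trails a
    transit_time = lag / sample_rate_hz          # seconds between the two positions
    return np.inf if transit_time == 0 else separation_m / transit_time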
Tang, Chen; Han, Lin; Ren, Hongwei; Zhou, Dongjian; Chang, Yiming; Wang, Xiaohang; Cui, Xiaolong
2008-10-01
We derive second-order oriented partial-differential equations (PDEs) for denoising electronic-speckle-pattern-interferometry fringe patterns from two points of view. The first is based on variational methods, and the second on controlling the diffusion direction. Our oriented PDE models restrict diffusion to the fringe orientation only. The main advantage of our filtering method, based on the oriented PDE models, is that it is very easy to implement compared with the published filtering methods that operate along the fringe orientation. We demonstrate the performance of our oriented PDE models via application to two computer-simulated and experimentally obtained speckle fringe patterns and compare them with related PDE models.
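The essential operation, diffusion restricted to the fringe direction, can be sketched as an explicit iteration of the second directional derivative along a per-pixel orientation field (a toy illustration of oriented diffusion, not the authors' exact PDE models; theta would come from a fringe-orientation estimator):

import numpy as np

def oriented_diffusion_step(u, theta, dt=0.1):
    # u: 2-D fringe image; theta: per-pixel fringe orientation in radians.
    uy, ux = np.gradient(u)        # axis 0 = rows (y), axis 1 = columns (x)
    uyy, _ = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    c, s = np.cos(theta), np.sin(theta)
    # Second directional derivative along T = (cos theta, sin theta):
    # smoothing therefore acts only along the fringes, preserving them.
    u_tt = c * c * uxx + 2.0 * s * c * uxy + s * s * uyy
    return u + dt * u_tt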
Favaloro, Emmanuel J; Wong, Richard C W; Silvestrini, Roger; McEvoy, Robert; Jovanovich, Susan; Roberts-Thomson, Peter
2005-02-01
We evaluated the performance of anticardiolipin (aCL) and beta2-glycoprotein I (beta2-GPI) antibody assays through a large external quality assurance program. Data from the 2002 cycle of the Royal College of Pathologists of Australasia Quality Assurance Program (RCPA QAP) were analyzed for variation in reported numerical values and semiquantitative results or interpretations according to method type or group and in conjunction with available clinical data. High interlaboratory variation in numerical results and notable method-based variation, combined with a general lack of consensus in semiquantitative reporting, continues to be observed. Numerical results from cross-laboratory testing of 12 serum samples (for immunoglobulin G [IgG]-aCL, IgM-aCL, and IgG-beta2-GPI) yielded interlaboratory coefficients of variation (CVs) that were higher than 50% in six of 12 (50%) specimens for IgG-aCL, and 12 of 12 (100%) specimens for IgM-aCL and IgG-beta2-GPI. Semiquantitative reporting also varied considerably, with total (100%) consensus occurring on only four of 36 (11%) occasions. General consensus (where > 90% of participating laboratories agreed that a given serum sample gave a result of either negative or positive) was only obtained on 13 of 36 (36%) occasions. Variation in results between different method types or groups was also present, resulting in potential biasing of the RCPA QAP-defined target results by the large number of laboratories using the dominant aCL assays. Finally, laboratory findings frequently did not agree with the available clinical information. In conclusion, in a large proportion of specimens from the 2002 RCPA QAP cycle, laboratories could not agree on whether a serum sample tested was aCL-positive or aCL-negative, or beta2-GPI-positive or beta2-GPI-negative. Despite prior attempts to improve the standardization of testing and reporting practices, laboratory testing for aCL and anti-beta2-GPI still demonstrates significant interlaboratory and intermethod variation, which needs to be taken into account in the clinical interpretation of test results, especially those from different laboratories.
The Hyperfine Structure of the Ground State in the Muonic Helium Atoms
NASA Astrophysics Data System (ADS)
Aznabayev, D. T.; Bekbaev, A. K.; Korobov, V. I.
2018-05-01
Non-relativistic ionization energies of the ³He²⁺μ⁻e⁻ and ⁴He²⁺μ⁻e⁻ muonic helium atoms are calculated for the ground states. The calculations are based on the variational method of exponential expansion. Convergence of the variational energies is studied by increasing the number N of basis functions. This allows us to claim that the obtained energy values have 26 significant digits for the ground states. With the obtained results we calculate the hyperfine splitting of the muonic helium atoms.
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.
1986-01-01
A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
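A standard way to write such a slight-compressibility formulation (a sketch consistent with the description above, not necessarily the exact equations used in the code) replaces the incompressibility constraint with a pressure evolution equation driven by the bulk modulus $K$:

\[
\frac{\partial p}{\partial t} + K\,\nabla\cdot\mathbf{u} = 0, \qquad
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
\]

so pressure disturbances propagate at the finite sound speed $c = \sqrt{K/\rho}$ and the incompressible solution is recovered as $K$ becomes large.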
Finite-temperature Gutzwiller approximation from the time-dependent variational principle
NASA Astrophysics Data System (ADS)
Lanatà, Nicola; Deng, Xiaoyu; Kotliar, Gabriel
2015-08-01
We develop an extension of the Gutzwiller approximation to finite temperatures based on the Dirac-Frenkel variational principle. Our method does not rely on any entropy inequality, and is substantially more accurate than the approaches proposed in previous works. We apply our theory to the single-band Hubbard model at different fillings, and show that our results compare quantitatively well with dynamical mean field theory in the metallic phase. We discuss potential applications of our technique within the framework of first-principle calculations.
Yahia, K; Cardoso, A J M; Ghoggal, A; Zouzou, S E
2014-03-01
Fast Fourier transform (FFT) analysis has been successfully used for fault diagnosis in induction machines. However, this method does not always provide good results in cases of load torque, speed and voltage variation, which lead to variation of the motor slip and to FFT problems caused by the non-stationary nature of the involved signals. In this paper, the discrete wavelet transform (DWT) of the apparent-power signal is presented for air-gap-eccentricity fault detection in three-phase induction motors, in order to overcome the above FFT problems. The proposed method is based on the decomposition of the apparent-power signal, from which wavelet approximation and detail coefficients are extracted. The energy evaluation of a known bandwidth permits the definition of a fault severity factor (FSF). Simulation as well as experimental results are provided to illustrate the effectiveness and accuracy of the proposed method, even in the case of load torque variations. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
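The described pipeline (DWT decomposition, band-energy evaluation, energy-based severity factor) can be sketched with PyWavelets; the wavelet, decomposition level, band index and the ratio-style FSF below are placeholders, not the paper's exact definitions:

import numpy as np
import pywt  # PyWavelets

def band_energies(signal, wavelet="db8", level=6):
    # Coefficients come back as [cA_level, cD_level, ..., cD_1].
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def fault_severity_factor(power_faulty, power_healthy, band=2):
    # Illustrative FSF: energy of one frequency band relative to a healthy baseline.
    return band_energies(power_faulty)[band] / band_energies(power_healthy)[band]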
Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An
2018-05-01
In recent years, with the significant increase in urban development, it has become necessary to optimize the current air monitoring stations so that they reflect the quality of air in the environment. To assess the spatial representativeness of the air monitoring stations, Beijing's regional air monitoring station data from 2012 to 2014 were used to calculate the monthly mean particulate matter (PM10) concentration in the region, and the spatial distribution of PM10 concentration across the whole region was deduced through the IDW interpolation method and a spatial grid statistical method in GIS. The spatial distribution variation across Beijing's districts was analyzed with the gridding model (1.5 km × 1.5 km cell resolution), and the three-year spatial analysis of PM10 concentration data, including variation and spatial overlay, identified areas where the PM10 concentration frequently exceeded the standard. It is very important to optimize the layout of the existing air monitoring stations by combining the concentration distribution of air pollutants with the spatial region using GIS.
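The interpolation step described above, IDW from station means onto a regular grid, takes only a few lines of NumPy (coordinates, the power exponent and the grid are illustrative):

import numpy as np

def idw(xy_stations, values, xy_grid, power=2.0):
    # xy_stations: (n, 2) station coordinates; values: (n,) monthly mean PM10;
    # xy_grid: (m, 2) cell centres of, e.g., a 1.5 km x 1.5 km mesh.
    d = np.linalg.norm(xy_grid[:, None, :] - xy_stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                  # avoid division by zero at station cells
    w = 1.0 / d ** power
    return (w @ values) / w.sum(axis=1)      # (m,) interpolated PM10 values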
Armour, John A. L.; Palla, Raquel; Zeeuwen, Patrick L. J. M.; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J.
2007-01-01
Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies. PMID:17175532
NASA Astrophysics Data System (ADS)
Batterman, Stuart; Cook, Richard; Justin, Thomas
2015-04-01
Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.
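The TAF-based apportionment reduces to a product of normalized factors. A minimal sketch, assuming each TAF is the ratio of the mean activity in that month, day type or hour to the overall mean (one common convention; the paper's exact factor definitions may differ):

def hourly_volume(aadt, taf_month, taf_day, taf_hour):
    # aadt: annual-average daily traffic for the link (vehicles/day).
    # With each TAF normalized to a mean of 1, the product rescales the
    # annual average; dividing by 24 converts the day total to one hour.
    return aadt * taf_month * taf_day * taf_hour / 24.0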
Biometric identification based on novel frequency domain facial asymmetry measures
NASA Astrophysics Data System (ADS)
Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-03-01
In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial domain asymmetry measures based on raw intensity values suggests that the frequency domain representation is more robust to intra-personal distortions and is a novel approach for performing biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.
Alarcón-Ríos, Lucía; Velo-Antón, Guillermo; Kaliontzopoulou, Antigoni
2017-04-01
The study of morphological variation among and within taxa can shed light on the evolution of phenotypic diversification. In the case of urodeles, the dorso-ventral view of the head captures most of the ontogenetic and evolutionary variation of the entire head, a structure with high potential for being a target of selection due to its relevance in ecological and social functions. Here, we describe a non-invasive geometric morphometrics procedure for exploring morphological variation in the external dorso-ventral view of the urodele head. To explore the accuracy of the method and its potential for describing morphological patterns, we applied it to two populations of Salamandra salamandra gallaica from NW Iberia. Using landmark-based geometric morphometrics, we detected differences in head shape between populations and sexes, and an allometric relationship between shape and size. We also determined that not all differences in head shape are due to size variation, suggesting intrinsic shape differences across sexes and populations. These morphological patterns had not been previously explored in S. salamandra, despite the high levels of intraspecific diversity within this species. The methodological procedure presented here allows shape variation to be detected at a very fine scale and overcomes the drawbacks of using cranial samples, thus increasing the possibilities of using collection specimens and live animals to explore dorsal head shape variation and its evolutionary and ecological implications in urodeles. J. Morphol. 278:475-485, 2017. © 2017 Wiley Periodicals, Inc.
A screening tool for delineating subregions of steady recharge within groundwater models
Dickinson, Jesse; Ferré, T.P.A.; Bakker, Mark; Crompton, Becky
2014-01-01
We have developed a screening method for simplifying groundwater models by delineating areas within the domain that can be represented using steady-state groundwater recharge. The screening method is based on an analytical solution for the damping of sinusoidal infiltration variations in homogeneous soils in the vadose zone. The damping depth is defined as the depth at which the flux variation damps to 5% of the variation at the land surface. Groundwater recharge may be considered steady where the damping depth is above the depth of the water table. The analytical solution approximates the vadose zone diffusivity as constant, and we evaluated when this approximation is reasonable. We evaluated the analytical solution through comparison of the damping depth computed by the analytic solution with the damping depth simulated by a numerical model that allows variable diffusivity. This comparison showed that the screening method conservatively identifies areas of steady recharge and is more accurate when water content and diffusivity are nearly constant. Nomograms of the damping factor (the ratio of the flux amplitude at any depth to the amplitude at the land surface) and the damping depth were constructed for clay and sand for periodic variations between 1 and 365 d and flux means and amplitudes from nearly 0 to 1 × 10⁻³ m d⁻¹. We applied the screening tool to Central Valley, California, to identify areas of steady recharge. A MATLAB script was developed to compute the damping factor for any soil and any sinusoidal flux variation.
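For reference, the constant-diffusivity damping described above has the same form as the classical attenuation of a periodic boundary flux (a sketch of the standard linearized result, to which the paper's 5% criterion can be applied). For a sinusoidal flux of period $T$ (angular frequency $\omega = 2\pi/T$) and constant diffusivity $D$:

\[
\frac{A(z)}{A(0)} = e^{-z/d}, \qquad d = \sqrt{\frac{2D}{\omega}} = \sqrt{\frac{DT}{\pi}},
\qquad z_{5\%} = d \ln 20 \approx 3d,
\]

so recharge may be treated as steady wherever the water table lies below the depth $z_{5\%}$.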
Tarafder, Abhijit; Iraneta, Pamela; Guiochon, Georges; Kaczmarski, Krzysztof; Poe, Donald P
2014-10-31
We propose the use of constant-enthalpy (isenthalpic) diagrams as a tool to estimate the extent of the temperature variations caused by the mobile-phase pressure drop along a chromatographic column, e.g. its cooling in supercritical fluid chromatography and its heating in ultra-performance liquid chromatography. Temperature strongly affects chromatographic phenomena. Any of its variations inside the column, whether intended or not, can lead to significant changes in separation performance. Although instruments use column ovens in order to keep the column temperature constant, operating conditions leading to a high pressure drop may cause significant variations of the column temperature, both in the axial and the radial directions, from the set value. Different ways of measuring these temperature variations are available, but they are too inconvenient to be employed in many practical situations. In contrast, the thermodynamic plot-based method that we describe here can easily be used with only a ruler and a pencil. It should be helpful in developing methods or in analyzing results in analytical laboratories. Although the most effective application area for this approach should be SFC (supercritical fluid chromatography), it can be applied to any chromatographic conditions in which temperature variations take place along the column due to the pressure drop, e.g. in ultra-high-pressure liquid chromatography (UHPLC). The method proposed here is applicable to isocratic conditions only. Copyright © 2014 Elsevier B.V. All rights reserved.
Valavanis, Ioannis K; Mougiakakou, Stavroula G; Grimaldi, Keith A; Nikita, Konstantina S
2010-09-08
Obesity is a multifactorial trait, which constitutes an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology underlying obesity and to identify genetic variations and/or factors related to nutrition that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject a total of 63 factors describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total), e.g. average daily intake in calories and cholesterol, were measured. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and used for the analysis of the available data: i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) combining genetic algorithms with the popular back-propagation training algorithm. PDM-ANN and GA-ANN were comparatively assessed in terms of their ability to identify, among the initial 63 variables describing genetic variations, nutrition and gender, the most important factors for classifying a subject into one of the BMI-related classes: normal and overweight. The methods were designed and evaluated using appropriate training and testing sets provided by 3-fold cross-validation (3-CV) resampling. Classification accuracy, sensitivity, specificity and area under the receiver operating characteristic curve were used to evaluate the resulting predictive ANN models. The most parsimonious set of factors was obtained by the GA-ANN method and included gender, six genetic variations and 18 nutrition-related variables. The corresponding predictive model had a mean accuracy of 61.46% on the 3-CV testing sets. The ANN-based methods revealed factors that interactively contribute to the obesity trait and provided predictive models with a promising generalization ability. In general, the results showed that ANNs and their hybrids can provide useful tools for the study of complex traits in the context of nutrigenetics.
Yavaş, Gökhan; Koyutürk, Mehmet; Gould, Meetha P; McMahon, Sarah; LaFramboise, Thomas
2014-03-05
With the advent of paired-end high throughput sequencing, it is now possible to identify various types of structural variation on a genome-wide scale. Although many methods have been proposed for structural variation detection, most do not provide precise boundaries for identified variants. In this paper, we propose a new method, Distribution Based detection of Duplication Boundaries (DB2), for accurate detection of tandem duplication breakpoints, an important class of structural variation, with high precision and recall. Our computational experiments on simulated data show that DB2 outperforms state-of-the-art methods in terms of finding breakpoints of tandem duplications, with a higher positive predictive value (precision) in calling the duplications' presence. In particular, DB2's prediction of tandem duplications is correct 99% of the time even for very noisy data, while narrowing down the space of possible breakpoints within a margin of 15 to 20 bps on the average. Most of the existing methods provide boundaries in ranges that extend to hundreds of bases with lower precision values. Our method is also highly robust to varying properties of the sequencing library and to the sizes of the tandem duplications, as shown by its stable precision, recall and mean boundary mismatch performance. We demonstrate our method's efficacy using both simulated paired-end reads, and those generated from a melanoma sample and two ovarian cancer samples. Newly discovered tandem duplications are validated using PCR and Sanger sequencing. Our method, DB2, uses discordantly aligned reads, taking into account the distribution of fragment length to predict tandem duplications along with their breakpoints on a donor genome. The proposed method fine tunes the breakpoint calls by applying a novel probabilistic framework that incorporates the empirical fragment length distribution to score each feasible breakpoint. DB2 is implemented in Java programming language and is freely available at http://mendel.gene.cwru.edu/laframboiselab/software.php.
Single fiber lignin distributions based on the density gradient column method
Brian Boyer; Alan W. Rudie
2007-01-01
The density gradient column method was used to determine the effects of uniform and non-uniform pulping processes on variation in individual fiber lignin concentrations of the resulting pulps. A density gradient column uses solvents of different densities and a mixing process to produce a column of liquid with a smooth transition from higher density at the bottom to...
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
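The flavor of TV-regularized inversion is easy to convey with a toy denoising problem. The sketch below does plain gradient descent on a smoothed-TV objective; the paper's BOS algorithm instead combines Bregman iterations with proximal forward-backward splitting and a discrepancy-based stopping rule, so this is an illustration of the TV prior, not of BOS itself:

import numpy as np

def tv_denoise(y, lam=0.1, eps=1e-3, step=0.2, n_iter=200):
    # Minimizes 0.5*||x - y||^2 + lam * TV_eps(x) for a 2-D image y,
    # with TV smoothed by eps so the objective is differentiable.
    x = y.copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(x)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        # div(grad x / |grad x|) is the (smoothed) TV gradient direction.
        div = np.gradient(gy / mag, axis=0) + np.gradient(gx / mag, axis=1)
        x -= step * ((x - y) - lam * div)
    return x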
Wendland, Edson; Gomes, Luis H; Troeger, Uwe
2015-01-01
The contribution of recharge to regional groundwater flow systems is essential information required to establish sustainable water resources management. The objective of this work was to determine the groundwater outflow in the Ribeirão da Onça Basin using a water balance model of the saturated soil zone. The basin is located in the outcrop region of the Guarani Aquifer System (GAS). The water balance method involved the determination of direct recharge values, groundwater storage variation and base flow. The direct recharge was determined by the water table fluctuation method (WTF). The base flow was calculated by the hydrograph separation method, which was generated by a rain-flow model supported by biweekly streamflow measurements in the control section. Undisturbed soil samples were collected at depths corresponding to the variation zone of the groundwater level to determine the specific yield of the soil (drainable porosity). Water balances were performed in the saturated zone for the hydrological years from February 2004 to January 2007. The direct recharge ranged from 14.0% to 38.0%, and groundwater outflow from 0.4% to 2.4% of the respective rainfall during the same period.
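The WTF estimate itself is a one-line calculation. A sketch, assuming the head rises attributed to individual recharge events within the hydrological year have already been extracted from the well hydrograph:

import numpy as np

def wtf_recharge(specific_yield, head_rises_m):
    # Direct recharge R = Sy * sum of event water-table rises.
    return specific_yield * np.sum(head_rises_m)

# e.g. wtf_recharge(0.08, [0.40, 0.90, 0.25]) -> 0.124 m of recharge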
Qiu, Weiliang; Sandberg, Michael A; Rosner, Bernard
2018-05-31
Retinitis pigmentosa is one of the most common forms of inherited retinal degeneration. The electroretinogram (ERG) can be used to determine the severity of retinitis pigmentosa-the lower the ERG amplitude, the more severe the disease is. In practice for career, lifestyle, and treatment counseling, it is of interest to predict the ERG amplitude of a patient at a future time. One approach is prediction based on the average rate of decline for individual patients. However, there is considerable variation both in initial amplitude and in rate of decline. In this article, we propose an empirical Bayes (EB) approach to incorporate the variations in initial amplitude and rate of decline for the prediction of ERG amplitude at the individual level. We applied the EB method to a collection of ERGs from 898 patients with 3 or more visits over 5 or more years of follow-up tested in the Berman-Gund Laboratory and observed that the predicted values at the last (kth) visit obtained by using the proposed method based on data for the first k-1 visits are highly correlated with the observed values at the kth visit (Spearman correlation =0.93) and have a higher correlation with the observed values than those obtained based on either the population average decline rate or those obtained based on the individual decline rate. The mean square errors for predicted values obtained by the EB method are also smaller than those predicted by the other methods. Copyright © 2018 John Wiley & Sons, Ltd.
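The usual empirical Bayes compromise between a patient's own decline rate and the population average can be sketched with a normal-normal shrinkage weight (names and the simple one-parameter weighting are our illustration; the paper's model also shrinks the initial amplitude):

def eb_slope(individual_slope, se2, pop_mean, tau2):
    # individual_slope: least-squares slope from the patient's first k-1 visits;
    # se2: its sampling variance; pop_mean, tau2: mean and between-patient
    # variance of decline rates in the cohort.
    w = tau2 / (tau2 + se2)     # precision-based weight on the individual estimate
    return w * individual_slope + (1.0 - w) * pop_mean

Patients with few or noisy visits (large se2) are pulled toward the population mean, consistent with the comparison reported above.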
Li, Shou-Li; Vasemägi, Anti; Ramula, Satu
2016-01-01
Background and Aims Assessing the demographic consequences of genetic variation is fundamental to invasion biology. However, genetic and demographic approaches are rarely combined to explore the effects of genetic variation on invasive populations in natural environments. This study combined population genetics, demographic data and a greenhouse experiment to investigate the consequences of genetic variation for the population fitness of the perennial, invasive herb Lupinus polyphyllus. Methods Genetic and demographic data were collected from 37 L. polyphyllus populations representing different latitudes in Finland, and genetic variation was characterized based on 13 microsatellite loci. Associations between genetic variation and population size, population density, latitude and habitat were investigated. Genetic variation was then explored in relation to four fitness components (establishment, survival, growth, fecundity) measured at the population level, and the long-term population growth rate (λ). For a subset of populations genetic variation was also examined in relation to the temporal variability of λ. A further assessment was made of the role of natural selection in the observed variation of certain fitness components among populations under greenhouse conditions. Key Results It was found that genetic variation correlated positively with population size, particularly at higher latitudes, and differed among habitat types. Average seedling establishment per population increased with genetic variation in the field, but not under greenhouse conditions. Quantitative genetic divergence (QST) based on seedling establishment in the greenhouse was smaller than allelic genetic divergence (F′ST), indicating that unifying selection has a prominent role in this fitness component. Genetic variation was not associated with average survival, growth or fecundity measured at the population level, λ or its variability. Conclusions The study suggests that although genetic variation may facilitate plant invasions by increasing seedling establishment, it may not necessarily affect the long-term population growth rate. Therefore, established invasions may be able to grow equally well regardless of their genetic diversity. PMID:26420202
Amirjani, Amirmostafa; Fatmehsari, Davoud Haghshenas
2018-01-01
In this work, a rapid and straightforward method was developed for colorimetric determination of ammonia using smartphones. The mechanism is based on the manipulation of the surface plasmon band of silver nanoparticles (AgNPs) via the formation of the Ag(NH₃)₂⁺ complex. This complex decreases the amount of AgNPs in the solution and, consequently, the color intensity of the colloidal system decreases. Not only can the variation in color intensity of the solution be tracked by a UV-vis spectrophotometer, but a smartphone can also be employed to monitor the color intensity variation by RGB analysis. Ammonia, in the concentration range of 10-1000 mg L⁻¹, was successfully measured spectrophotometrically (UV-vis spectrophotometer) and colorimetrically (RGB measurement) with detection limits of 180 and 200 mg L⁻¹, respectively. Linear relationships were also developed for both methods. The response time of the developed colorimetric sensor was around 20 s. Both the colorimetric and spectrophotometric methods showed reliable performance for the determination of ammonia in real samples. Copyright © 2017 Elsevier B.V. All rights reserved.
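On the smartphone side, the RGB analysis amounts to averaging channel intensities over a region of interest and reading concentration off a linear calibration. A sketch (array shapes, region and calibration coefficients are illustrative):

import numpy as np

def mean_rgb(image, box):
    # image: H x W x 3 uint8 photo; box = (row0, row1, col0, col1) around the sample.
    r0, r1, c0, c1 = box
    roi = image[r0:r1, c0:c1].astype(float)
    return roi.reshape(-1, 3).mean(axis=0)          # (R, G, B) channel means

def ammonia_mg_per_l(channel_mean, slope, intercept):
    # Linear calibration c = a*I + b fitted from standard solutions (hypothetical).
    return slope * channel_mean + intercept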
Quantitative Detection of Cracks in Steel Using Eddy Current Pulsed Thermography.
Shi, Zhanqun; Xu, Xiaoyu; Ma, Jiaojiao; Zhen, Dong; Zhang, Hao
2018-04-02
Small cracks are common defects in steel and often lead to catastrophic accidents in industrial applications. Various nondestructive testing methods have been investigated for crack detection; however, most current methods focus on qualitative crack identification and image processing. In this study, eddy current pulsed thermography (ECPT) was applied for quantitative crack detection based on derivative analysis of the temperature variation. The effects of the excitation parameters on the temperature variation were analyzed in the simulation study. The crack profile and position are identified in the thermal image based on the Canny edge detection algorithm. Then, one or more trajectories are determined through the crack profile in order to determine the crack boundary through its temperature distribution. The slope curve along the trajectory is obtained. Finally, quantitative analysis of the crack sizes was performed by analyzing the features of the slope curves. The experimental verification showed that the crack sizes could be quantitatively detected with errors of less than 1%. Therefore, the proposed ECPT method was demonstrated to be a feasible and effective nondestructive approach for quantitative crack detection.
Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun
2016-05-05
An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li
2010-07-01
The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIAL & METHODS: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on western blot. We show for the first time that methodical variations observed in these protein assay techniques can potentially translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodical approaches to protein quantification in techniques that report quantitative differences.
Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi
2013-06-01
Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the authors' method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of the authors' method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices significantly improved (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than a distortion correction method based on a description of spherical harmonics in improving the distortion of root-mean-square errors (p < 0.001 and 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the authors' method for phantom images in terms of two geometrical values and for human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.
Quasi-periodic changes in the 3D solar anisotropy of Galactic cosmic rays for 1965-2014
NASA Astrophysics Data System (ADS)
Modzelewska, R.; Alania, M. V.
2018-01-01
Aims: We study features of the 3D solar anisotropy of Galactic cosmic rays (GCR) for 1965-2014 (almost five solar cycles, cycles 20-24). We analyze the 27-day variations of the 2D GCR anisotropy in the ecliptic plane and the north-south anisotropy normal to the ecliptic plane. We study the dependence of the 27-day variation of the 3D GCR anisotropy on the solar cycle and solar magnetic cycle. We demonstrate that the 27-day variations of the GCR intensity and anisotropy can be used as an important tool to study solar wind, solar activity, and the heliosphere. Methods: We used the components Ar, Aϕ and At of the 3D GCR anisotropy, found from hourly data of neutron monitors (NMs) and muon telescopes (MTs) using harmonic analysis and spectrographic methods. We corrected the 2D diurnal (~24 h) variation of the GCR intensity for the influence of the Earth's magnetic field. We derived the north-south component of the GCR anisotropy from the GG index, which is calculated as the difference in GCR intensities of the Nagoya multidirectional MTs. Results: We show that the behavior of the 27-day variation of the 3D anisotropy verifies a stable long-lived active heliolongitude on the Sun. This illustrates the usefulness of the 27-day variation of the GCR anisotropy as a unique proxy to study solar wind, solar activity, and the heliosphere. We distinguish a tendency of 22-yr changes in the amplitude of the 27-day variation of the 2D anisotropy that is connected with the solar magnetic cycle. We demonstrate that the amplitudes of the 27-day variation of the north-south component of the anisotropy vary with the 11-yr solar cycle, but a dependence on the solar magnetic polarity can hardly be recognized. We show that the 27-day recurrences of the GG index and the At component are highly positively correlated, and both are highly correlated with the By component of the heliospheric magnetic field.
Conomos, Matthew P; Miller, Michael B; Thornton, Timothy A
2015-05-01
Population structure inference with genetic data has been motivated by a variety of applications in population genetics and genetic association studies. Several approaches have been proposed for the identification of genetic ancestry differences in samples where study participants are assumed to be unrelated, including principal components analysis (PCA), multidimensional scaling (MDS), and model-based methods for proportional ancestry estimation. Many genetic studies, however, include individuals with some degree of relatedness, and existing methods for inferring genetic ancestry fail in related samples. We present a method, PC-AiR, for robust population structure inference in the presence of known or cryptic relatedness. PC-AiR utilizes genome-screen data and an efficient algorithm to identify a diverse subset of unrelated individuals that is representative of all ancestries in the sample. The PC-AiR method directly performs PCA on the identified ancestry representative subset and then predicts components of variation for all remaining individuals based on genetic similarities. In simulation studies and in applications to real data from Phase III of the HapMap Project, we demonstrate that PC-AiR provides a substantial improvement over existing approaches for population structure inference in related samples. We also demonstrate significant efficiency gains, where a single axis of variation from PC-AiR provides better prediction of ancestry in a variety of structure settings than using 10 (or more) components of variation from widely used PCA and MDS approaches. Finally, we illustrate that PC-AiR can provide improved population stratification correction over existing methods in genetic association studies with population structure and relatedness. © 2015 WILEY PERIODICALS, INC.
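The general strategy behind PC-AiR, PCA on an ancestry-representative unrelated subset followed by projection of the remaining samples, can be sketched in a few lines of NumPy (a schematic, not the published implementation, which also handles subset selection and genotype scaling):

import numpy as np

def pca_project(G_unrelated, G_related, n_pc=10):
    # G_*: samples x SNPs genotype matrices on the same SNP set.
    mu = G_unrelated.mean(axis=0)
    X = G_unrelated - mu
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    loadings = vt[:n_pc].T                     # SNP loadings of the top components
    pcs_unrelated = X @ loadings               # scores for the unrelated subset
    pcs_related = (G_related - mu) @ loadings  # predicted scores for the relatives
    return pcs_unrelated, pcs_related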
Chemical Fingerprinting of Materials Developed Due to Environmental Issues
NASA Technical Reports Server (NTRS)
Smith, Doris A.; McCool, A. (Technical Monitor)
2000-01-01
Instrumental chemical analysis methods are developed and used to chemically fingerprint new and modified External Tank materials made necessary by changing environmental requirements. Chemical fingerprinting can detect and diagnose variations in material composition. To chemically characterize each material, fingerprint methods are selected from an extensive toolbox based on the material's chemistry and the ability of the specific methods to detect the material's critical ingredients. Fingerprint methods have been developed for a variety of materials including Thermal Protection System foams, adhesives, primers, and composites.
NASA Astrophysics Data System (ADS)
Fillion, Anthony; Bocquet, Marc; Gratton, Serge
2018-04-01
The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
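Schematically, the QS strategy applied to a strong-constraint four-dimensional cost function injects the observation batches one at a time, starting each minimization from the previous minimizer (notation illustrative; see the paper for the precise IEnKS formulation):

\[
J_{k}(\mathbf{x}_{0}) = \tfrac{1}{2}\,\lVert \mathbf{x}_{0} - \mathbf{x}_{b} \rVert_{\mathbf{B}^{-1}}^{2}
+ \tfrac{1}{2} \sum_{l=0}^{k} \lVert \mathbf{y}_{l} - \mathcal{H}_{l}\left(\mathcal{M}_{0\to l}(\mathbf{x}_{0})\right) \rVert_{\mathbf{R}_{l}^{-1}}^{2},
\qquad k = 0, 1, \ldots, K,
\]

with the minimizer of $J_{k-1}$ used as the starting point for $J_{k}$, so that each Gauss-Newton minimization is more likely to remain in the attraction basin of the global minimum.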
Day, Ryan; Joo, Hyun; Chavan, Archana; Lennox, Kristin P.; Chen, Ann; Dahl, David B.; Vannucci, Marina; Tsai, Jerry W.
2012-01-01
As an alternative to the common template based protein structure prediction methods based on main-chain position, a novel side-chain centric approach has been developed. Together with a Bayesian loop modeling procedure and a combination scoring function, the Stone Soup algorithm was applied to the CASP9 set of template based modeling targets. Although the method did not generate as large of perturbations to the template structures as necessary, the analysis of the results gives unique insights into the differences in packing between the target structures and their templates. Considerable variation in packing is found between target and template structures even when the structures are close, and this variation is found due to 2 and 3 body packing interactions. Outside the inherent restrictions in packing representation of the PDB, the first steps in correctly defining those regions of variable packing have been mapped primarily to local interactions, as the packing at the secondary and tertiary structure are largely conserved. Of the scoring functions used, a loop scoring function based on water structure exhibited some promise for discrimination. These results present a clear structural path for further development of a side-chain centered approach to template based modeling. PMID:23266765
NASA Astrophysics Data System (ADS)
Fard, Ali M.; Gardecki, Joseph A.; Ughi, Giovanni J.; Hyun, Chulho; Tearney, Guillermo J.
2016-02-01
Intravascular optical coherence tomography (OCT) is a high-resolution catheter-based imaging method that provides three-dimensional microscopic images of the coronary artery in vivo, facilitating coronary artery disease treatment decisions based on detailed morphology. Near-infrared spectroscopy (NIRS) has proven to be a powerful tool for the identification of lipid-rich plaques inside the coronary walls. We have recently demonstrated a dual-modality intravascular imaging technology that integrates OCT and NIRS into one imaging catheter using a two-fiber arrangement and a custom-made dual-channel fiber rotary junction. It therefore enables simultaneous acquisition of microstructural and composition information at 100 frames/second for improved diagnosis of coronary lesions. The dual-modality OCT-NIRS system employs a single wavelength-swept light source for both the OCT and NIRS modalities. It subsequently uses a high-speed photoreceiver to detect the NIRS spectrum in the time domain. Although the use of one light source greatly simplifies the system configuration, such a light source exhibits pulse-to-pulse wavelength and intensity variation due to mechanical scanning of the wavelength. This can be particularly problematic for the NIRS modality and compromises the reliability of the acquired spectra. In order to address this challenge, we developed a robust data acquisition and processing method that compensates for the spectral variations of the wavelength-swept light source. The proposed method extracts the properties of the light source, i.e., variation period and amplitude, from a reference spectrum and subsequently calibrates the NIRS datasets. We have applied this method to datasets obtained from cadaver human coronary arteries using a polygon-scanning (1230-1350 nm) OCT system operating at 100,000 sweeps per second. The results suggest that our algorithm accurately and robustly compensates for the spectral variations and visualizes the dual-modality OCT-NIRS images. These findings are therefore crucial for the practical application and clinical translation of dual-modality intravascular OCT-NIRS imaging when the same swept source is used for both OCT and spectroscopy.
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Chern, Jiun-Dar
2005-01-01
An atmospheric general circulation model simulation for 1948-1997 of the water budgets for the MacKenzie, Mississippi and Amazon River basins is presented. In addition to the water budget, we include passive tracers to identify the geographic sources of water for the basins, and the analysis focuses on the mechanisms contributing to precipitation recycling in each basin. While each basin's precipitation recycling has a strong dependency on evaporation during the mean annual cycle, the interannual variability of the recycling shows important relationships with the atmospheric circulation. The MacKenzie River basin has only a weak interannual dependency on evaporation, where variations in zonal moisture transport from the Pacific Ocean can affect the basin water cycle. On the other hand, the Mississippi River basin has strong interannual dependencies on evaporation. While the precipitation recycling weakens with increased low-level jet intensity, the evaporation variations exert a stronger influence in providing water vapor for convective precipitation at the convective cloud base. High precipitation recycling is also found to be partly connected to warm SSTs in the tropical Pacific Ocean. The Amazon River basin evaporation exhibits small interannual variations, so that the interannual variations of precipitation recycling are related to atmospheric moisture transport from the tropical South Atlantic Ocean. Increasing SSTs over the 50-year period are causing increased easterly transport across the basin. As moisture transport increases, the Amazon precipitation recycling decreases (without real time-varying vegetation changes). In addition, precipitation recycling from a bulk diagnostic method is compared to the passive tracer method used in the analysis. While the mean values are different, the interannual variations are comparable between the methods. The methods also exhibit similar relationships to the terms of the basin-scale water budgets.
Wellington, Gerrard M.; Fox, George E.; Toonen, Robert J.
2015-01-01
Morphological variation in the geographically widespread coral Porites lobata can make it difficult to distinguish from other massive congeneric species. This morphological variation could be attributed to geographic variability, phenotypic plasticity, or a combination of such factors. We examined genetic and microscopic morphological variability in P. lobata samples from the Galápagos, Easter Island, Tahiti, Fiji, Rarotonga, and Australia. Panamanian P. evermanni specimens were used as a previously established distinct outgroup against which to test genetic and morphological methods of discrimination. We employed an analysis of molecular variance (AMOVA) based on ribosomal internal transcribed spacer region (ITS) sequences, principal component analysis (PCA) of skeletal landmarks, and Mantel tests to compare genetic and morphological variation. Both genetic and morphometric methods clearly distinguished P. lobata and P. evermanni, while significant genetic and morphological variance was attributed to differences among geographic regions for P. lobata. Mantel tests indicate a correlation between genetic and morphological variation for P. lobata across the Pacific. Here we highlight landmark morphometric measures that correlate well with genetic differences, showing promise for resolving species of Porites, one of the most ubiquitous, yet most challenging to identify, architects of coral reefs. PMID:25674364
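As an illustration of the Mantel step, a minimal permutation test correlating the genetic and morphometric distance matrices might look like this (a generic sketch, not the authors' code; the matrix names are assumptions):

```python
import numpy as np

def mantel(dist_a, dist_b, n_perm=9999, rng=None):
    """Permutation Mantel test between two square distance matrices."""
    rng = np.random.default_rng(rng)
    iu = np.triu_indices_from(dist_a, k=1)      # upper triangle, no diagonal
    a, b = dist_a[iu], dist_b[iu]
    r_obs = np.corrcoef(a, b)[0, 1]
    n = dist_a.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                  # permute rows/cols together
        r = np.corrcoef(dist_a[np.ix_(p, p)][iu], b)[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)    # one-tailed p-value
```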
Bhopal, R S
1991-11-01
Demonstration of geographical variations in disease can yield powerful insight into the disease pathway, particularly for environmentally acquired conditions, but only if the many problems of data interpretation can be solved. This paper presents the framework, methods and principles guiding a study of the geographical epidemiology of Legionnaires' Disease in Scotland. A case-list was constructed and disease incidence rates were calculated by geographical area; these showed variation. Five categories of explanation for the variation were identified: short-term fluctuations of incidence in time masquerading as differences by place; artefact; and differences in host-susceptibility, agent virulence, or environment. The methods used to study these explanations, excepting agent virulence, are described, with an emphasis on the use of previously existing data to test hypotheses. Examples include the use of mortality, census and hospital morbidity data to assess the artefact and host-susceptibility explanations; and the use of ratios of serology tests to disease to examine the differential testing hypothesis. The reasoning and process by which the environmental focus of the study was narrowed and the technique for relating the geographical pattern of disease to the putative source are outlined. This framework allows the researcher to plan for the parallel collection of the data necessary both to demonstrate geographical variation and to point to the likely explanation.
Atmospheric pressure, density, temperature and wind variations between 50 and 200 km
NASA Technical Reports Server (NTRS)
Justus, C. G.; Woodrum, A.
1972-01-01
Data on atmospheric pressure, density, temperature and winds between 50 and 200 km were collected from sources including Meteorological Rocket Network data, ROBIN falling sphere data, grenade release and pitot tube data, meteor winds, chemical release winds, satellite data, and others. These data were analyzed by a daily difference method and results on the distribution statistics, magnitude, and spatial structure of the irregular atmospheric variations are presented. Time structures of the irregular variations were determined by the analysis of residuals from harmonic analysis of time series data. The observed height variations of irregular winds and densities are found to be in accord with a theoretical relation between these two quantities. The magnitudes of the irregular variations (at 50-60 km height) show an increasing trend with latitude. A possible explanation of the unusually large irregular wind magnitudes of the White Sands MRN data is given in terms of mountain wave generation by the Sierra Nevada range about 1000 km west of White Sands. An analytical method is developed which, based on an analogy of the irregular motion field with axisymmetric turbulence, allows measured or model correlation or structure functions to be used to evaluate the effective frequency spectra of scalar and vector quantities encountered by a spacecraft moving at any speed and at any trajectory elevation angle.
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on electrical measurements at its boundary. This is an ill-posed inverse problem, and its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: reconstructed conductivity changes along selected left and right vertical lines, plotted for the ground truth (GT), TV, and TGV reconstructions, together with reconstructions from the GREIT algorithm.
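For reference, second-order TGV in the sense of Bredies et al. replaces the TV penalty by a functional that balances first- and second-order smoothness:

```latex
\mathrm{TV}(u) = \int_\Omega |\nabla u|\,dx,
\qquad
\mathrm{TGV}_\alpha^2(u) = \min_{w}\; \alpha_1 \int_\Omega |\nabla u - w|\,dx
  + \alpha_0 \int_\Omega |\mathcal{E}(w)|\,dx,
\qquad
\mathcal{E}(w) = \tfrac{1}{2}\left(\nabla w + \nabla w^{\mathsf{T}}\right).
```

The staircase effect is reduced because affine regions of u incur no second-order cost, unlike under plain TV.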
Computational Cardiac Anatomy Using MRI
Beg, Mirza Faisal; Helm, Patrick A.; McVeigh, Elliot; Miller, Michael I.; Winslow, Raimond L.
2005-01-01
Ventricular geometry and fiber orientation may undergo global or local remodeling in cardiac disease. However, there are as yet no mathematical and computational methods for quantifying variation of geometry and fiber orientation or the nature of their remodeling in disease. Toward this goal, a landmark- and image intensity-based large deformation diffeomorphic metric mapping (LDDMM) method to transform heart geometry into common coordinates for quantification of shape and form was developed. Two automated landmark placement methods for modeling tissue deformations expected in different cardiac pathologies are presented. The transformations, computed from the combined use of landmarks and image intensities, yield high registration accuracy of heart anatomies even in the presence of significant variation of cardiac shape and form. Once heart anatomies have been registered, properties of tissue geometry and cardiac fiber orientation in corresponding regions of different hearts may be quantified. PMID:15508155
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
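A generative sketch of the hierarchical prior may clarify why it favors low rank: sample the common precision matrix from a Wishart hyperprior, then draw columns i.i.d. from the resulting zero-mean Gaussian (the GAMP-embedded inference itself is omitted; all names are illustrative):

```python
import numpy as np
from scipy.stats import wishart

def sample_low_rank_prior(m, n, df, scale, rng=None):
    """Draw an (m, n) matrix from the hierarchical Gaussian prior:
    Lambda ~ Wishart(df, scale), each column x_j ~ N(0, Lambda^{-1}).
    df must exceed m - 1 for the Wishart to be proper."""
    rng = np.random.default_rng(rng)
    precision = wishart.rvs(df=df, scale=scale, random_state=rng)
    cov = np.linalg.inv(precision)
    # A near-singular common covariance concentrates all columns in a
    # low-dimensional subspace, which is what encourages low-rank solutions.
    return rng.multivariate_normal(np.zeros(m), cov, size=n).T
```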
Quantum theory of multiscale coarse-graining.
Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A
2018-03-14
Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.
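Schematically, the force-matching residual carried over from classical MS-CG has the familiar least-squares form (written here from the classical theory; in the qMS-CG generalization the ensemble average is taken with quantum Boltzmann weights via imaginary-time path integrals):

```latex
\chi^2[\mathbf{F}] = \frac{1}{3N}
\left\langle \sum_{I=1}^{N}
\left| \mathbf{f}_I(\mathbf{r}^n)
 - \mathbf{F}_I\!\left(\mathbf{R}^N(\mathbf{r}^n)\right) \right|^2
\right\rangle,
```

where f_I is the fine-grained force mapped onto CG site I and F_I is the CG force field whose parameters are optimized variationally.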
Application of EOF/PCA-based methods in the post-processing of GRACE derived water variations
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2010-05-01
Two problems that users of monthly GRACE gravity field solutions face are 1) the presence of correlated noise in the Stokes coefficients that increases with harmonic degree and causes 'striping', and 2) the fact that different physical signals are overlaid and difficult to separate from each other in the data. These problems are termed the signal-noise separation problem and the signal-signal separation problem. Methods that are based on principal component analysis and empirical orthogonal functions (PCA/EOF) have been frequently proposed to deal with these problems for GRACE. However, different strategies have been applied to different (spatial: global/regional, spectral: global/order-wise, geoid/equivalent water height) representations of the GRACE level 2 data products, leading to differing results and a general feeling that PCA/EOF-based methods are to be applied 'with care'. In addition, it is known that conventional EOF/PCA methods force separated modes to be orthogonal, and that, on the other hand, to either EOFs or PCs an arbitrary orthogonal rotation can be applied. The aim of this paper is to provide a common theoretical framework and to study the application of PCA/EOF-based methods as a signal separation tool for post-processing GRACE data products. In order to investigate and illustrate the applicability of PCA/EOF-based methods, we have employed them on GRACE level 2 monthly solutions based on the Center for Space Research, University of Texas (CSR/UT) RL04 products and on the ITG-GRACE03 solutions from the University of Bonn, and on various representations of them. Our results show that EOF modes do reveal the dominating annual, semiannual and also long-periodic signals in the global water storage variations, but they also show how choosing different strategies changes the outcome and may lead to unexpected results.
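The conventional (unrotated) EOF/PCA step that such analyses build on can be sketched in a few lines, here applied to a gridded time series of equivalent-water-height anomalies; the spectral and order-wise variants discussed in the paper decompose a different matrix:

```python
import numpy as np

def eof_decomposition(field, n_modes=5):
    """EOF/PCA of a space-time field (n_times, n_gridpoints) via SVD."""
    anomalies = field - field.mean(axis=0)        # remove the temporal mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]            # temporal expansion coefficients
    eofs = vt[:n_modes]                           # orthogonal spatial patterns
    explained = s[:n_modes] ** 2 / np.sum(s ** 2) # fraction of variance per mode
    return pcs, eofs, explained
```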
Zhe, Shandian; Xu, Zenglin; Qi, Yuan; Yu, Peng
2014-01-01
A key step for Alzheimer's disease (AD) study is to identify associations between genetic variations and intermediate phenotypes (e.g., brain structures). At the same time, it is crucial to develop a noninvasive means for AD diagnosis. Although these two tasks (association discovery and disease diagnosis) have been treated separately by a variety of approaches, they are tightly coupled due to their common biological basis. We hypothesize that the two tasks can potentially benefit each other by a joint analysis, because (i) the association study discovers correlated biomarkers from different data sources, which may help improve diagnosis accuracy, and (ii) the disease status may help identify disease-sensitive associations between genetic variations and MRI features. Based on this hypothesis, we present a new sparse Bayesian approach for joint association study and disease diagnosis. In this approach, common latent features are extracted from different data sources based on sparse projection matrices and used to predict multiple disease severity levels based on Gaussian process ordinal regression; in return, the disease status is used to guide the discovery of relationships between the data sources. The sparse projection matrices not only reveal the associations but also select groups of biomarkers related to AD. To learn the model from data, we develop an efficient variational expectation maximization algorithm. Simulation results demonstrate that our approach achieves higher accuracy in both predicting ordinal labels and discovering associations between data sources than alternative methods. We apply our approach to an imaging genetics dataset of AD. Our joint analysis approach not only identifies meaningful and interesting associations between genetic variations, brain structures, and AD status, but also achieves significantly higher accuracy for predicting ordinal AD stages than the competing methods.
NASA Astrophysics Data System (ADS)
Leirião, Sílvia; He, Xin; Christiansen, Lars; Andersen, Ole B.; Bauer-Gottwein, Peter
2009-02-01
Total water storage change in the subsurface is a key component of the global, regional and local water balances. It is partly responsible for temporal variations of the earth's gravity field in the micro-Gal (1 μGal = 10^-8 m s^-2) range. Measurements of temporal gravity variations can thus be used to determine the water storage change in the hydrological system. A numerical method for the calculation of temporal gravity changes from the output of hydrological models is developed. Gravity changes due to incremental prismatic mass storage in the hydrological model cells are determined to give an accurate 3D gravity effect. The method is implemented in MATLAB and can be used jointly with any hydrological simulation tool. The method is composed of three components: the prism formula, the MacMillan formula and the point-mass approximation. With increasing normalized distance between the storage prism and the measurement location the algorithm switches first from the prism equation to the MacMillan formula and finally to the simple point-mass approximation. The method was used to calculate the gravity signal produced by an aquifer pump test. Results are in excellent agreement with the direct numerical integration of the Theis well solution and the semi-analytical results presented in [Damiata, B.N., and Lee, T.-C., 2006. Simulated gravitational response to hydraulic testing of unconfined aquifers. Journal of Hydrology 318, 348-359]. However, the presented method can be used to forward calculate hydrology-induced temporal variations in gravity from any hydrological model, provided earth curvature effects can be neglected. The method allows for the routine assimilation of ground-based gravity data into hydrological models.
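A sketch of the far-field branch of such a forward operator is below; the prism and MacMillan branches, which the hybrid scheme switches to at small normalized distances, are omitted, and all names are illustrative:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_change_point_mass(cells, d_storage, station):
    """Vertical gravity change (m/s^2) at a station from storage changes.

    cells:     (n, 3) model-cell centroid coordinates (m), z positive up
    d_storage: (n,) mass change per cell (kg), i.e. density * volume change
    station:   (3,) gravimeter position (m)
    Point-mass approximation only; cells close to the station would need
    the prism or MacMillan formulas, as in the paper's hybrid scheme."""
    r = cells - station                       # vectors station -> cell centroids
    dist = np.linalg.norm(r, axis=1)
    # Only the vertical component of the Newtonian attraction is measured.
    return np.sum(G * d_storage * r[:, 2] / dist ** 3)
```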
Fuglsang, Karsten; Pedersen, Niels Hald; Larsen, Anna Warberg; Astrup, Thomas Fruergaard
2014-02-01
A dedicated sampling and measurement method was developed for long-term measurements of biogenic and fossil-derived CO2 from thermal waste-to-energy processes. Based on long-term sampling of CO2 and 14C determination, plant-specific emission factors can be determined more accurately, and the annual emission of fossil CO2 from waste-to-energy plants can be monitored according to carbon trading schemes and renewable energy certificates. Weekly and monthly measurements were performed at five Danish waste incinerators. Significant variations between fractions of biogenic CO2 emitted were observed, not only over time, but also between plants. From the results of monthly samples at one plant, the annual mean fraction of biogenic CO2 was found to be 69% of the total annual CO2 emissions. From weekly samples, taken every 3 months at the five plants, significant seasonal variations in biogenic CO2 emissions were observed (between 56% and 71% biogenic CO2). These variations confirmed that biomass fractions in the waste can vary considerably, not only from day to day but also from month to month. An uncertainty budget for the measurement method itself showed that the expanded uncertainty of the method was ± 4.0 pmC (95% confidence interval) at 62 pmC. The long-term sampling method was found to be useful for waste incinerators for determination of annual fossil and biogenic CO2 emissions with relatively low uncertainty.
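The underlying arithmetic is a two-member mixing model: fossil CO2 carries essentially no 14C (0 pmC), so the biogenic fraction follows from the measured percent-modern-carbon value once a reference value for contemporary biomass is fixed. The 110 pmC default below is an illustrative assumption that must be chosen per study:

```python
def biogenic_fraction(pmc_sample, pmc_biogenic_ref=110.0):
    """Fraction of CO2 of biogenic origin from a 14C measurement.

    pmc_sample:       measured 14C content of stack CO2 (percent modern carbon)
    pmc_biogenic_ref: assumed 14C content of contemporary biomass (pmC);
                      fossil carbon contributes 0 pmC, hence the simple ratio.
    """
    return pmc_sample / pmc_biogenic_ref
```

For instance, a sample at 62 pmC with this reference value gives a biogenic fraction of about 0.56.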
Variational-based segmentation of bio-pores in tomographic images
NASA Astrophysics Data System (ADS)
Bauer, Benjamin; Cai, Xiaohao; Peth, Stephan; Schladitz, Katja; Steidl, Gabriele
2017-01-01
X-ray computed tomography (CT) combined with a quantitative analysis of the resulting volume images is a fruitful technique in soil science. However, the variations in X-ray attenuation due to different soil components keep the segmentation of single components within these highly heterogeneous samples a challenging problem. Particularly demanding are bio-pores due to their elongated shape and the low gray value difference to the surrounding soil structure. Recently, variational models in connection with algorithms from convex optimization were successfully applied for image segmentation. In this paper we apply these methods for the first time for the segmentation of bio-pores in CT images of soil samples. We introduce a novel convex model which enforces smooth boundaries of bio-pores and takes the varying attenuation values in the depth into account. Segmentation results are reported for different real-world 3D data sets as well as for simulated data. These results are compared with two gray value thresholding methods, namely indicator kriging and a global thresholding procedure, and with a morphological approach. Pros and cons of the methods are assessed by considering geometric features of the segmented bio-pore systems. The variational approach features well-connected smooth pores while not detecting smaller or shallower pores. This is an advantage in cases where the main bio-pores network is of interest and where infillings, e.g., excrements of earthworms, would result in losing pore connections as observed for the other thresholding methods.
Mathematical model of rolling an elastic wheel over deformable support base
NASA Astrophysics Data System (ADS)
Volskaia, V. N.; Zhileykin, M. M.; Zakharov, A. Y.
2018-02-01
One of the main directions of economic growth in Russia remains the rapid development of the northern and northeastern regions, which constitute 60 percent of the country's territory. The further development of these territories requires new methods and technologies for solving transport and technological problems when off-road transportation of cargoes and people is conducted. One of the fundamental methods of patency (trafficability) prediction is simulation modeling of wheeled vehicle movement in different operating conditions. Tractive performance is influenced both by the deformable properties of tires and by the physical and mechanical properties of the ground: normal tire deflection and rut depth; variation of the contact patch area depending on the load and the air pressure in the tire; hysteresis losses in the tire material, which affect rolling resistance through friction between tire and ground in the contact patch; and the tangential reaction of the ground over the entire contact area. Nowadays there are two main trends in theoretical research on the interaction of a wheeled propulsion device with the ground: the analytical method, involving a mathematical description of the explored process, and the finite element method, based on computational modeling. Mathematical models of tire-ground interaction are used both in studies of the interaction of an individual wheeled propulsion device with the ground and in dynamical models of mobile vehicles operated in specific road and climate conditions. One of the most significant shortcomings of these models is that they describe the interaction of the wheel with a flat deformable support base, whereas the profile of a real support surface has unevenness whose height is commensurate with the radius of the wheel. The description of the processes taking place in the ground under the influence of a wheeled propulsion device using the finite element method is relatively new but has lately become widely applied. This method provides the most accurate description of the interaction between a wheeled propulsion device and the ground and makes it possible to determine stresses in the ground, deformation of the ground and the tire, and ground compaction. However, the high computational cost is an essential shortcoming of this method, so it is hard to use such models as part of a general motion model of multi-axis wheeled vehicles. The purpose of this research is the elaboration of a mathematical model of an elastic wheel rolling over a deformable rough support base, taking into account the deformation of the contact patch. A mathematical model of the rectilinear rolling of an elastic wheel over a rough deformable support base is developed, taking into account the variation of the contact patch area, the variation in the direction of the radial and tangential reactions, and the load-bearing capacity of the ground. The efficiency of the developed mathematical model is demonstrated by simulation.
Comments on the variational modified-hypernetted-chain theory for simple fluids
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1986-02-01
The variational modified-hypernetted-chain (VMHNC) theory, based on the approximation of universality of the bridge functions, is reformulated. The new formulation includes recent calculations by Lado and by Lado, Foiles, and Ashcroft, as two stages in a systematic approach which is analyzed. A variational iterative procedure for solving the exact (diagrammatic) equations for the fluid structure which is formally identical to the VMHNC is described, featuring the theory of simple classical fluids as a one-iteration theory. An accurate method for calculating the pair structure for a given potential and for inverting structure factor data in order to obtain the potential and the thermodynamic functions, follows from our analysis.
Variability of the proton-to-electron mass ratio on cosmological scales
NASA Astrophysics Data System (ADS)
Wendt, M.; Reimers, D.
2008-10-01
The search for a possible variation of fundamental physical constants is newsworthy more than ever. A multitude of methods have been developed. So far the only seemingly significant indication of a cosmological variation exists for the proton-to-electron mass ratio, as stated by Reinhold et al. [1]. The measured indication of variation is based on the combined analysis of H2 absorption systems in the spectra of Q0405-443 and Q0347-383 at z_abs = 2.595 and z_abs = 3.025, respectively. The high resolution data of the latter is reanalyzed in this work to examine the influence of different fitting procedures and further potential nonconformities. This analysis cannot reproduce the significance achieved by the previous works.
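The method rests on the standard reduced-redshift relation: if the mass ratio μ varied, each H2 line i would be shifted according to its sensitivity coefficient K_i,

```latex
\zeta_i \equiv \frac{z_i - z_{\mathrm{abs}}}{1 + z_{\mathrm{abs}}}
        = K_i \, \frac{\Delta\mu}{\mu},
```

so a linear fit of the reduced redshifts ζ_i against K_i over many lines yields Δμ/μ. The result is sensitive to the line-fitting procedure, which is precisely what is examined here.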
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
Supplemental heating of deposition tooling shields
Ohlhausen, James A.; Peebles, Diane E.; Hunter, John A.; Eckelmeyer, Kenneth H.
2000-01-01
A method of reducing particle generation from the thin coating deposited on the internal surfaces of a deposition chamber which undergoes temperature variation greater than 100 °C, comprising maintaining the temperature variation of the internal surfaces low enough during the process cycle to keep thermal expansion stresses between the coating and the surfaces under 500 MPa. For titanium nitride deposited on stainless steel, this means keeping temperature variations under approximately 70 °C in a chamber that may be heated to over 350 °C during a typical processing operation. Preferably, a supplemental heater is mounted behind the upper shield and controlled by a temperature-sensitive element which provides feedback control based on the temperature of the upper shield.
Flexible, multi-measurement guided wave damage detection under varying temperatures
NASA Astrophysics Data System (ADS)
Douglass, Alexander C. S.; Harley, Joel B.
2018-04-01
Temperature compensation in structural health monitoring helps identify damage in a structure by removing data variations due to environmental conditions, such as temperature. Stretch-based methods are among the most commonly used temperature compensation methods. To account for variations in temperature, stretch-based methods stretch signals in time to optimally match a measurement to a baseline. All of the data is then compared with the single baseline to determine the presence of damage. Yet, for these methods to be effective, the measurement and the baseline must satisfy the inherent assumptions of the temperature compensation method. In many scenarios, these assumptions are wrong, the methods generate error, and damage detection fails. To improve damage detection, a multi-measurement damage detection method is introduced. By using each measurement in the dataset as a baseline, error caused by imperfect temperature compensation is reduced. The multi-measurement method increases the detection effectiveness of our damage metric, or damage indicator, over time and reduces the presence of additional peaks caused by temperature that could be mistaken for damage. By using many baselines, the variance of the damage indicator is reduced and the effects from damage are amplified. Notably, the multi-measurement method improves damage detection over single-measurement methods. This is demonstrated through an increase in the maximum of our damage signature from 0.55 to 0.95 (where large values, up to a maximum of one, represent a statistically significant change in the data due to damage).
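A minimal sketch of the stretch-based compensation underlying both the single- and multi-baseline variants is a grid search over stretch factors; the paper's implementation may differ:

```python
import numpy as np

def best_stretch(signal, baseline, factors=np.linspace(0.98, 1.02, 201)):
    """Resample the signal by candidate stretch factors and keep the one
    most correlated with the baseline (generic sketch, not the paper's code)."""
    t = np.arange(len(signal))
    best = (-np.inf, 1.0, signal)
    for a in factors:
        stretched = np.interp(t, a * t, signal)   # resample at stretched times
        r = np.corrcoef(stretched, baseline)[0, 1]
        if r > best[0]:
            best = (r, a, stretched)
    return best  # (correlation, stretch factor, compensated signal)
```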
Robust range estimation with a monocular camera for vision-based forward collision warning system.
Park, Ki-Yeong; Hwang, Sun-Young
2014-01-01
We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
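Once the virtual horizon row is known, the flat-road pinhole relation gives range directly; a sketch under assumed variable names:

```python
def range_from_horizon(y_bottom, y_horizon, focal_px, cam_height_m):
    """Flat-road pinhole range estimate to a lead vehicle.

    y_bottom:  image row of the vehicle's bottom edge (pixels)
    y_horizon: estimated virtual-horizon row (pixels)
    The names are illustrative; the paper estimates y_horizon at run-time
    from the sizes and positions of detected vehicles."""
    return focal_px * cam_height_m / (y_bottom - y_horizon)
```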
Effect of Surveillance Method on Reported Characteristics of Lyme Disease, Connecticut, 1996–2007
Nelson, Randall S.; Cartter, Matthew L.
2012-01-01
To determine the effect of changing public health surveillance methods on the reported epidemiology of Lyme disease, we analyzed Connecticut data for 1996–2007. Data were stratified by 4 surveillance methods and compared. A total of 87,174 reports were received that included 79,896 potential cases. Variations based on surveillance methods were seen. Cases reported through physician-based surveillance were significantly more likely to be classified as confirmed; such case-patients were significantly more likely to have symptoms of erythema migrans only and to have illness onset during summer months. Case-patients reported through laboratory-based surveillance were significantly more likely to have late manifestations only and to be older. Use of multiple surveillance methods provided a more complete clinical and demographic description of cases but lacked efficiency. When interpreting data, changes in surveillance method must be considered. PMID:22304873
NASA Astrophysics Data System (ADS)
Stemkens, Bjorn; Glitzner, Markus; Kontaxis, Charis; de Senneville, Baudouin Denis; Prins, Fieke M.; Crijns, Sjoerd P. M.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.; Tijssen, Rob H. N.
2017-09-01
Stereotactic body radiation therapy (SBRT) has shown great promise in increasing local control rates for renal-cell carcinoma (RCC). Characterized by steep dose gradients and high fraction doses, these hypo-fractionated treatments are, however, prone to dosimetric errors as a result of variations in intra-fraction respiratory-induced motion, such as drifts and amplitude alterations. This may lead to significant variations in the deposited dose. This study aims to develop a method for calculating the accumulated dose for MRI-guided SBRT of RCC in the presence of intra-fraction respiratory variations and to determine the effect of such variations on the deposited dose. For this, RCC SBRT treatments were simulated while the underlying anatomy was moving, based on motion information from three motion models with increasing complexity: (1) STATIC, in which static anatomy was assumed, (2) AVG-RESP, in which 4D-MRI phase-volumes were time-weighted, and (3) PCA, a method that generates 3D volumes with sufficient spatio-temporal resolution to capture respiration and intra-fraction variations. Five RCC patients and two volunteers were included, and treatment delivery was simulated using motion derived from subject-specific MR imaging. Motion was most accurately estimated using the PCA method, with root-mean-squared errors of 2.7, 2.4, and 1.0 mm for STATIC, AVG-RESP and PCA, respectively. The heterogeneous patient group demonstrated relatively large dosimetric differences between the STATIC and AVG-RESP dose maps and the PCA-reconstructed dose maps, with hotspots up to 40% of the D99 and an underdosed GTV in three of the five patients. This shows the potential importance of including intra-fraction motion variations in dose calculations.
Learning-based stochastic object models for use in optimizing imaging systems
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer-simulation requires use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.
Geng, Xiaobing; Xie, Zhenghui; Zhang, Lijun; Xu, Mei; Jia, Binghao
2018-03-01
An inverse source estimation method is proposed to reconstruct emission rates using local air concentration sampling data. It involves the nonlinear least squares-based ensemble four-dimensional variational data assimilation (NLS-4DVar) algorithm and a transfer coefficient matrix (TCM) created using FLEXPART, a Lagrangian atmospheric dispersion model. The method was tested by twin experiments and experiments with actual Cs-137 concentrations measured around the Fukushima Daiichi Nuclear Power Plant (FDNPP). Emission rates can be reconstructed sequentially with the progression of a nuclear accident, which is important in the response to a nuclear emergency. With pseudo observations generated continuously, most of the emission rates were estimated accurately, except under conditions when the wind blew off land toward the sea and at extremely slow wind speeds near the FDNPP. Because of the long duration of accidents and variability in meteorological fields, monitoring networks composed of land stations only in a local area are unable to provide enough information to support an emergency response. The errors in the estimation compared to the real observations from the FDNPP nuclear accident stemmed from a shortage of observations, lack of data control, and an inadequate atmospheric dispersion model without improvement and appropriate meteorological data. The proposed method should be developed further to meet the requirements of a nuclear emergency response. Copyright © 2017 Elsevier Ltd. All rights reserved.
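The structure of the inversion can be sketched as a regularized linear solve of sampled concentrations against the transfer coefficient matrix; plain Tikhonov least squares stands in here for the NLS-4DVar update (an assumption of this sketch), since both address the ill-conditioning of the problem:

```python
import numpy as np

def reconstruct_emissions(tcm, observations, alpha=1e-2):
    """Estimate time-resolved emission rates q from c = A q.

    tcm:          (n_obs, n_intervals) transfer coefficient matrix from the
                  dispersion model (concentration per unit emission rate)
    observations: (n_obs,) sampled air concentrations
    alpha:        Tikhonov regularization weight (illustrative)."""
    a = tcm
    lhs = a.T @ a + alpha * np.eye(a.shape[1])
    q = np.linalg.solve(lhs, a.T @ observations)
    return np.clip(q, 0.0, None)   # emission rates are non-negative
```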
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahmi, Kinanti Aldilla, E-mail: kinanti.aldilla@ui.ac.id; Yudiarsah, Efta
By using a tight-binding Hamiltonian model, the charge transport properties of poly(dA)-poly(dT) DNA under variation of backbone disorder and of the amplitude of the base-pair twisting motion are studied. The DNA chain used is a 32 base-pair-long poly(dA)-poly(dT) molecule, contacted to electrodes at both ends. The influence of the environment on charge transport in DNA is modeled as variation of the backbone disorder. The twisting motion amplitude is taken into account by assuming that the twisting angle follows a Gaussian distribution with zero average and standard deviation proportional to the square root of temperature and inversely proportional to the twisting motion frequency. The base-pair twisting motion influences both the onsite energy of the bases and the electron hopping constant between bases. The charge transport properties are studied by calculating the current using the Landauer-Buttiker formula from transmission probabilities computed by transfer matrix methods. The results show that as the backbone disorder increases, the maximum current decreases. With decreasing twisting motion frequency, the current increases rapidly at low voltage but more slowly at higher voltage. The threshold voltage can increase or decrease with increasing backbone disorder and increasing twisting frequency.
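A minimal transfer-matrix calculation for a generic 1D tight-binding chain illustrates the transmission step; the DNA model adds twist-dependent onsite and hopping terms, and the band-center transmission estimate below is a simplification, not the paper's full lead-matching treatment:

```python
import numpy as np

def transmission(onsite, hopping, energy):
    """Transfer-matrix transmission of a 1D tight-binding chain.

    onsite:  (n,) site energies (eV)
    hopping: (n+1,) couplings, including those to the leads at both ends
    Implements the recursion t_r * psi_{i+1} = (E - e_i) psi_i - t_l psi_{i-1}."""
    m = np.eye(2)
    for i, e in enumerate(onsite):
        t_l, t_r = hopping[i], hopping[i + 1]
        m = np.array([[(energy - e) / t_r, -t_l / t_r],
                      [1.0, 0.0]]) @ m
    # Band-center proxy for the two-probe transmission; a full treatment
    # matches Bloch states in the semi-infinite leads.
    return 1.0 / (abs(m[0, 0]) ** 2 + 1e-30)
```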
On the Relationship between Variational Level Set-Based and SOM-Based Active Contours
Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad
2015-01-01
Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly in modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property and overcoming some drawbacks of other ACMs, such as being trapped in local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs, SOM-based ACMs, and their relationship and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736
Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie
2017-09-01
Small molecules interact with their protein target on surface cavities known as binding pockets. Pocket-based approaches are very useful in all of the phases of drug design. Their first step is estimating the binding pocket based on protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and to develop pharmacological profiling models. We found pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results that can have an impact on the detected correspondence between pocket and ligand profiles. Thus, we highlighted the importance of the pocket-estimation method choice in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Speaker-independent phoneme recognition with a binaural auditory image model
NASA Astrophysics Data System (ADS)
Francis, Keith Ivan
1997-09-01
This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence- based binaural phoneme recognition. This method overcomes smearing of fine speech detail typical of correlation based methods. Nevertheless, coincidence is able to measure similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.
NASA Technical Reports Server (NTRS)
Lal, D.
1986-01-01
Temporal variations in cosmic ray intensity have been deduced from observations of products of interactions of cosmic ray particles in the Moon, meteorites, and the Earth. Of particular interest is a comparison between the information based on Earth and that based on other samples. Differences are expected at least due to: (1) differences in the extent of cosmic ray modulation, and (2) changes in the geomagnetic dipole field. Any information on the global changes in the terrestrial cosmic ray intensity is therefore of importance. In this paper a possible technique for detecting changes in cosmic ray intensity is presented. The method involves human intervention and is applicable for the past 10,000 yrs. Studies of changes over longer periods of time are possible if supplementary data on age and history of the sample are available using other methods. Also discussed are the possibilities of studying certain geophysical processes, e.g., erosion, weathering, tectonic events based on studies of certain cosmic ray-produced isotopes for the past several million years.
Robust 3D face landmark localization based on local coordinate coding.
Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J
2014-12-01
In the 3D facial animation and synthesis community, input faces are usually required to be labeled by a set of landmarks for parameterization. Because of the variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under variations in pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state-of-the-art methods in terms of its robustness, flexibility, and accuracy.
Hessian-based norm regularization for image restoration with biomedical applications.
Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael
2012-03-01
We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
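In compact form, the regularizers considered are integrals of matrix norms of the Hessian, a second-order analogue of TV:

```latex
\mathcal{R}(u) = \int_\Omega \big\| \mathcal{H}u(x) \big\| \, dx,
\qquad
\mathcal{H}u = \begin{pmatrix} u_{xx} & u_{xy} \\ u_{xy} & u_{yy} \end{pmatrix},
```

with ||.|| a suitable matrix norm (e.g., spectral or Frobenius), to be compared with TV(u) = \int_\Omega |\nabla u| dx; penalizing second derivatives is what suppresses the staircase effect.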
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haab, Brian B.; Geierstanger, Bernhard H.; Michailidis, George
2005-08-01
Four different immunoassay and antibody microarray methods performed at four different sites were used to measure the levels of a broad range of proteins (N = 323 assays; 39, 88, 168, and 28 assays at the respective sites; 237 unique analytes) in the human serum and plasma reference specimens distributed by the Plasma Proteome Project (PPP) of the HUPO. The methods provided a means to (1) assess the level of systematic variation in protein abundances associated with blood preparation methods (serum, citrate-anticoagulated-plasma, EDTA-anticoagulated-plasma, or heparin-anticoagulated-plasma) and (2) evaluate the dependence on concentration of MS-based protein identifications from data sets using the HUPO specimens. Some proteins, particularly cytokines, had highly variable concentrations between the different sample preparations, suggesting specific effects of certain anticoagulants on the stability or availability of these proteins. The linkage of antibody-based measurements from 66 different analytes with the combined MS/MS data from 18 different laboratories showed that protein detection and the quality of MS data increased with analyte concentration. The conclusions from these initial analyses are that the optimal blood preparation method is variable between analytes and that the discovery of blood proteins by MS can be extended to concentrations below the ng/mL range under certain circumstances. Continued developments in antibody-based methods will further advance the scientific goals of the PPP.
Tenti, Lorenzo; Maynau, Daniel; Angeli, Celestino; Calzado, Carmen J
2016-07-21
A new strategy based on orthogonal valence-bond analysis of the wave function combined with intermediate Hamiltonian theory has been applied to the evaluation of the magnetic coupling constants in two AF systems. This approach provides both a quantitative estimate of the J value and a detailed analysis of the main physical mechanisms controlling the coupling, using a combined perturbative + variational scheme. The procedure requires a selection of the dominant excitations to be treated variationally. Two methods have been employed: a brute-force selection, using a logic similar to that of the CIPSI approach, or entanglement measures, which identify the most interacting orbitals in the system. Once a reduced set of excitations (about 300 determinants) is established, the interaction matrix is dressed at second order of perturbation by the remaining excitations of the CI space. The diagonalization of the dressed matrix provides J values in good agreement with experimental ones, at very low cost. This approach demonstrates the key role of d → d* excitations in the quantitative description of the magnetic coupling, as well as the importance of using an extended active space, including the bridging ligand orbitals, for the binuclear model of the intermediates of multicopper oxidases. The method is a promising tool for dealing with complex systems containing several active centers, as an alternative to both pure variational and DFT approaches.
Ground-based measurements of the solar diameter during the rising phase of solar cycle 24
NASA Astrophysics Data System (ADS)
Meftah, M.; Corbard, T.; Irbah, A.; Ikhlef, R.; Morand, F.; Renaud, C.; Hauchecorne, A.; Assus, P.; Borgnino, J.; Chauvineau, B.; Crepel, M.; Dalaudier, F.; Damé, L.; Djafer, D.; Fodil, M.; Lesueur, P.; Poiet, G.; Rouzé, M.; Sarkissian, A.; Ziad, A.; Laclare, F.
2014-09-01
Context. For the past thirty years, modern ground-based time-series of the solar radius have shown different apparent variations according to different instruments. The origins of these variations may lie with the observer, the instrument, the atmosphere, or the Sun. Solar radius measurements have been made for a very long time and in different ways, yet the measurements remain inconsistent. Numerous studies of solar radius variation appear in the literature, but with conflicting results. These measurement differences are certainly related to instrumental or atmospheric effects. The use of different methods of determining the solar radius, different instruments, and the effects of Earth's atmosphere could explain the lack of consistency in past measurements. A survey of the solar radius was initiated in 1975 by Francis Laclare at the Calern site of the Observatoire de la Côte d'Azur (OCA). Several efforts are currently being made from space missions to obtain accurate solar astrometric measurements, for example, to probe the long-term variations of the solar radius, their link with solar irradiance variations, and their influence on the Earth climate. Aims: The Picard program includes a ground-based observatory consisting of different instruments based at the Calern site (OCA, France). This set of instruments has been named "Picard Sol" and consists of a Ritchey-Chrétien telescope providing full-disk images of the Sun in five narrow-wavelength bandpasses (centered on 393.37, 535.7, 607.1, 782.2, and 1025.0 nm), a Sun-photometer that measures the properties of atmospheric aerosol, a pyranometer for estimating a global sky-quality index, a wide-field camera that detects the location of clouds, and a generalized daytime seeing monitor allowing us to measure the spatio-temporal parameters of the local turbulence. Picard Sol is meant to perpetuate valuable historical series of the solar radius and to initiate new time-series, in particular during solar cycle 24. Methods: We defined the solar radius by the inflection-point position of the solar-limb profiles taken at different angular positions of the image. Our results were corrected for the effects of refraction and turbulence by numerical methods. Results: From a dataset of more than 20 000 observations carried out between 2011 and 2013, we find a solar radius of 959.78 ± 0.19 arcsec (696 113 ± 138 km) at 535.7 nm after making all necessary corrections. For the other wavelengths in the solar continuum, we derive very similar results. The solar radius observed with the Solar Diameter Imager and Surface Mapper II during the period 2011-2013 shows variations smaller than 50 milli-arcsec that are out of phase with solar activity.
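The inflection-point definition of the radius admits a compact numerical sketch: locate the maximum of |dI/dr| along a radial limb profile and refine it with a parabolic fit. This is illustrative only; the actual pipeline also applies the refraction and turbulence corrections described above:

```python
import numpy as np

def limb_inflection_radius(profile, radii):
    """Locate the inflection point of a solar limb profile.

    profile: intensity sampled along a radial cut through the limb
    radii:   corresponding angular distances from disk centre (arcsec)
    """
    g = np.abs(np.gradient(profile, radii))      # |dI/dr| along the cut
    i = int(np.argmax(g))
    # Parabola through the three samples around the maximum of |dI/dr|
    # gives sub-sample precision on the inflection-point position.
    denom = g[i - 1] - 2.0 * g[i] + g[i + 1]
    shift = 0.5 * (g[i - 1] - g[i + 1]) / denom if denom != 0 else 0.0
    return radii[i] + shift * (radii[i + 1] - radii[i])
```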
Frequency analysis of uncertain structures using imprecise probability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modares, Mehdi; Bergerson, Joshua
2015-01-01
Two new methods for finite element based frequency analysis of a structure with uncertainty are developed. An imprecise probability formulation based on enveloping p-boxes is used to quantify the uncertainty present in the mechanical characteristics of the structure. For each element, independent variations are considered. Using the two developed methods, P-box Frequency Analysis (PFA) and Interval Monte-Carlo Frequency Analysis (IMFA), sharp bounds on natural circular frequencies at different probability levels are obtained. These methods establish a framework for handling incomplete information in structural dynamics. Numerical example problems are presented that illustrate the capabilities of the new methods along with discussions on their computational efficiency.
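An IMFA-style sampling loop can be sketched as follows. The assemble function mapping sampled element stiffnesses to a global stiffness matrix is an assumed user-supplied helper, and p-box sampling at a fixed probability level is reduced here to plain interval sampling:

```python
import numpy as np
from scipy.linalg import eigh

def interval_mc_frequencies(k_lo, k_hi, assemble, mass, n_samples=1000, rng=None):
    """Bound natural circular frequencies under interval element stiffnesses.

    k_lo, k_hi: (n_elem,) lower/upper bounds on element stiffness parameters
    assemble:   hypothetical helper returning the global stiffness matrix
    mass:       global mass matrix
    Returns elementwise min/max of the natural circular frequencies."""
    rng = np.random.default_rng(rng)
    w_min, w_max = None, None
    for _ in range(n_samples):
        k = rng.uniform(k_lo, k_hi)            # independent element variations
        w2 = eigh(assemble(k), mass, eigvals_only=True)   # K v = w^2 M v
        w = np.sqrt(np.clip(w2, 0.0, None))
        w_min = w if w_min is None else np.minimum(w_min, w)
        w_max = w if w_max is None else np.maximum(w_max, w)
    return w_min, w_max
```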
Scene-based method for spatial misregistration detection in hyperspectral imagery.
Dell'Endice, Francesco; Nieke, Jens; Schläpfer, Daniel; Itten, Klaus I
2007-05-20
Hyperspectral imaging (HSI) sensors suffer from spatial misregistration, an artifact that prevents the accurate acquisition of the spectra. Physical considerations let us assume that the influence of the spatial misregistration on the acquired data depends both on the wavelength and on the across-track position. A scene-based method, based on edge detection, is therefore proposed. Such a procedure measures the variation in the spatial location of an edge between its various monochromatic projections, giving an estimate of the spatial misregistration and also allowing identification of misalignments. The method has been applied to several hyperspectral sensors, of either prism- or grating-based design. The results confirm the assumed dependence on the spectral wavelength (lambda) and the across-track pixel position (theta). Suggestions are also given to correct for spatial misregistration.
General method of solving the Schroedinger equation of atoms and molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakatsuji, Hiroshi
2005-12-15
We propose a general method of solving the Schroedinger equation of atoms and molecules. We first construct the wave function having the exact structure, using the ICI (iterative configuration or complement interaction) method, and then optimize the variables involved by the variational principle. Based on the scaled Schroedinger equation and related principles, we can avoid the singularity problem of atoms and molecules and formulate a general method of calculating the exact wave functions in an analytical expansion form. We choose an initial function psi_0 and a scaling function g, and then the ICI method automatically generates the wave function that has the exact structure by using the Hamiltonian of the system. The Hamiltonian contains all the information of the system. The free ICI method provides a flexible and variationally favorable procedure for constructing the exact wave function. We explain the computational procedure of the analytical ICI method routinely performed in our laboratory. Simple examples are given using the hydrogen atom for the nuclear singularity case, the Hooke's atom for the electron singularity case, and the helium atom for both cases.