Measurement System Analyses - Gauge Repeatability and Reproducibility Methods
NASA Astrophysics Data System (ADS)
Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej
2018-02-01
The submitted article focuses on a detailed explanation of the average and range method (Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages were discussed. One difference between both methods is the calculation of variation components. The AIAG method calculates the variation components based on standard deviation (then a sum of variation components does not give 100 %) and the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part to part variation, EV & AV) gives the total variation of 100 %. Acceptance of both methods among the professional society, future use, and acceptance by manufacturing industry were also discussed. Nowadays, the AIAG is the leading method in the industry.
Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li
2010-07-01
The identification of differences in protein expression resulting from methodical variations is an essential component to the interpretation of true, biologically significant results. We used the Lowry and Bradford methods- two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIAL #ENTITYSTARTX00026; Differential protein expression patterns was assessed by western blot following protein quantification by the Lowry and Bradford methods. We have observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations, with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on Western blot. We show for the first time that methodical variations observed in these protein assay techniques, can potentially translate into differential protein expression patterns, that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodical approaches to protein quantification in techniques that report quantitative differences.
NASA Astrophysics Data System (ADS)
Lü, Xiaozhou; Xie, Kai; Xue, Dongfeng; Zhang, Feng; Qi, Liang; Tao, Yebo; Li, Teng; Bao, Weimin; Wang, Songlin; Li, Xiaoping; Chen, Renjie
2017-10-01
Micro-capacitance sensors are widely applied in industrial applications for the measurement of mechanical variations. The measurement accuracy of micro-capacitance sensors is highly dependent on the capacitance measurement circuit. To overcome the inability of commonly used methods to directly measure capacitance variation and deal with the conflict between the measurement range and accuracy, this paper presents a capacitance variation measurement method which is able to measure the output capacitance variation (relative value) of the micro-capacitance sensor with a continuously variable measuring range. We present the principles and analyze the non-ideal factors affecting this method. To implement the method, we developed a capacitance variation measurement circuit and carried out experiments to test the circuit. The result shows that the circuit is able to measure a capacitance variation range of 0-700 pF linearly with a maximum relative accuracy of 0.05% and a capacitance range of 0-2 nF (with a baseline capacitance of 1 nF) with a constant resolution of 0.03%. The circuit is proposed as a new method to measure capacitance and is expected to have applications in micro-capacitance sensors for measuring capacitance variation with a continuously variable measuring range.
Comparison of variational real-space representations of the kinetic energy operator
NASA Astrophysics Data System (ADS)
Skylaris, Chris-Kriton; Diéguez, Oswaldo; Haynes, Peter D.; Payne, Mike C.
2002-08-01
We present a comparison of real-space methods based on regular grids for electronic structure calculations that are designed to have basis set variational properties, using as a reference the conventional method of finite differences (a real-space method that is not variational) and the reciprocal-space plane-wave method which is fully variational. We find that a definition of the finite-difference method [P. Maragakis, J. Soler, and E. Kaxiras, Phys. Rev. B 64, 193101 (2001)] satisfies one of the two properties of variational behavior at the cost of larger errors than the conventional finite-difference method. On the other hand, a technique which represents functions in a number of plane waves which is independent of system size closely follows the plane-wave method and therefore also the criteria for variational behavior. Its application is only limited by the requirement of having functions strictly localized in regions of real space, but this is a characteristic of an increasing number of modern real-space methods, as they are designed to have a computational cost that scales linearly with system size.
Variation block-based genomics method for crop plants.
Kim, Yul Ho; Park, Hyang Mi; Hwang, Tae-Young; Lee, Seuk Ki; Choi, Man Soo; Jho, Sungwoong; Hwang, Seungwoo; Kim, Hak-Min; Lee, Dongwoo; Kim, Byoung-Chul; Hong, Chang Pyo; Cho, Yun Sung; Kim, Hyunmin; Jeong, Kwang Ho; Seo, Min Jung; Yun, Hong Tai; Kim, Sun Lim; Kwon, Young-Up; Kim, Wook Han; Chun, Hye Kyung; Lim, Sang Jong; Shin, Young-Ah; Choi, Ik-Young; Kim, Young Sun; Yoon, Ho-Sung; Lee, Suk-Ha; Lee, Sunghoon
2014-06-15
In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks, which occurred by crossing and selection processes. Accordingly, recombination block-based genomics analysis can be an effective approach for the screening of target loci for agricultural traits. We propose the variation block method, which is a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. We suggest that the variation block method is an efficient genomics method for the recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding.
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) sensitivity analysis employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems at supersonic Mach number range. The study shows, that while maintaining the accuracy of the functional sensitivity derivatives within the reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite difference sensitivity analysis.
NASA Astrophysics Data System (ADS)
Wang, Yi-Hong; Wu, Guo-Cheng; Baleanu, Dumitru
2013-10-01
The variational iteration method is newly used to construct various integral equations of fractional order. Some iterative schemes are proposed which fully use the method and the predictor-corrector approach. The fractional Bagley-Torvik equation is then illustrated as an example of multi-order and the results show the efficiency of the variational iteration method's new role.
Using the Screened Coulomb Potential to Illustrate the Variational Method
ERIC Educational Resources Information Center
Zuniga, Jose; Bastida, Adolfo; Requena, Alberto
2012-01-01
The screened Coulomb potential, or Yukawa potential, is used to illustrate the application of the single and linear variational methods. The trial variational functions are expressed in terms of Slater-type functions, for which the integrals needed to carry out the variational calculations are easily evaluated in closed form. The variational…
Variational estimate method for solving autonomous ordinary differential equations
NASA Astrophysics Data System (ADS)
Mungkasi, Sudi
2018-04-01
In this paper, we propose a method for solving first-order autonomous ordinary differential equation problems using a variational estimate formulation. The variational estimate is constructed with a Lagrange multiplier which is chosen optimally, so that the formulation leads to an accurate solution to the problem. The variational estimate is an integral form, which can be computed using a computer software. As the variational estimate is an explicit formula, the solution is easy to compute. This is a great advantage of the variational estimate formulation.
Wagner Mackenzie, Brett; Waite, David W; Taylor, Michael W
2015-01-01
The human gut contains dense and diverse microbial communities which have profound influences on human health. Gaining meaningful insights into these communities requires provision of high quality microbial nucleic acids from human fecal samples, as well as an understanding of the sources of variation and their impacts on the experimental model. We present here a systematic analysis of commonly used microbial DNA extraction methods, and identify significant sources of variation. Five extraction methods (Human Microbiome Project protocol, MoBio PowerSoil DNA Isolation Kit, QIAamp DNA Stool Mini Kit, ZR Fecal DNA MiniPrep, phenol:chloroform-based DNA isolation) were evaluated based on the following criteria: DNA yield, quality and integrity, and microbial community structure based on Illumina amplicon sequencing of the V4 region of bacterial and archaeal 16S rRNA genes. Our results indicate that the largest portion of variation within the model was attributed to differences between subjects (biological variation), with a smaller proportion of variation associated with DNA extraction method (technical variation) and intra-subject variation. A comprehensive understanding of the potential impact of technical variation on the human gut microbiota will help limit preventable bias, enabling more accurate diversity estimates.
The Effects of Predator Evolution and Genetic Variation on Predator-Prey Population-Level Dynamics.
Cortez, Michael H; Patel, Swati
2017-07-01
This paper explores how predator evolution and the magnitude of predator genetic variation alter the population-level dynamics of predator-prey systems. We do this by analyzing a general eco-evolutionary predator-prey model using four methods: Method 1 identifies how eco-evolutionary feedbacks alter system stability in the fast and slow evolution limits; Method 2 identifies how the amount of standing predator genetic variation alters system stability; Method 3 identifies how the phase lags in predator-prey cycles depend on the amount of genetic variation; and Method 4 determines conditions for different cycle shapes in the fast and slow evolution limits using geometric singular perturbation theory. With these four methods, we identify the conditions under which predator evolution alters system stability and shapes of predator-prey cycles, and how those effect depend on the amount of genetic variation in the predator population. We discuss the advantages and disadvantages of each method and the relations between the four methods. This work shows how the four methods can be used in tandem to make general predictions about eco-evolutionary dynamics and feedbacks.
NASA Astrophysics Data System (ADS)
Wang, Min
2017-06-01
This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we firstly prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. Finally, we establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extended the results for the generalized variational inequality problem (for short, GVIP( F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F,φ , K)) in reflexive Banach spaces. On the other hand, we generalized the corresponding results for the generalized mixed variational inequality problem (for short, GMVIP(F,φ ,K)) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.
Introduction to the Special Issue on Advancing Methods for Analyzing Dialect Variation.
Clopper, Cynthia G
2017-07-01
Documenting and analyzing dialect variation is traditionally the domain of dialectology and sociolinguistics. However, modern approaches to acoustic analysis of dialect variation have their roots in Peterson and Barney's [(1952). J. Acoust. Soc. Am. 24, 175-184] foundational work on the acoustic analysis of vowels that was published in the Journal of the Acoustical Society of America (JASA) over 6 decades ago. Although Peterson and Barney (1952) were not primarily concerned with dialect variation, their methods laid the groundwork for the acoustic methods that are still used by scholars today to analyze vowel variation within and across languages. In more recent decades, a number of methodological advances in the study of vowel variation have been published in JASA, including work on acoustic vowel overlap and vowel normalization. The goal of this special issue was to honor that tradition by bringing together a set of papers describing the application of emerging acoustic, articulatory, and computational methods to the analysis of dialect variation in vowels and beyond.
NASA Astrophysics Data System (ADS)
Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le
2018-01-01
The uneven illumination phenomenon reduces the quality of remote sensing image and causes interference in the subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is transformed into the optimal solution of the variational model. In order to accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are calculated by alternate iteration. Two groups of experiments are implemented on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. The proposed method can effectively eliminate the uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the proposed method using split Bregman method is more than 10 times faster than the method with the steepest descent method.
Experimental studies of breaking of elastic tired wheel under variable normal load
NASA Astrophysics Data System (ADS)
Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.
2017-10-01
The paper analyzes the braking of a vehicle wheel subjected to disturbances of normal load variations. Experimental tests and methods for developing test modes as sinusoidal force disturbances of the normal wheel load were used. Measuring methods for digital and analogue signals were used as well. Stabilization of vehicle wheel braking subjected to disturbances of normal load variations is a topical issue. The paper suggests a method for analyzing wheel braking processes under disturbances of normal load variations. A method to control wheel baking processes subjected to disturbances of normal load variations was developed.
Scaling up functional traits for ecosystem services with remote sensing: concepts and methods.
Abelleira Martínez, Oscar J; Fremier, Alexander K; Günter, Sven; Ramos Bendaña, Zayra; Vierling, Lee; Galbraith, Sara M; Bosque-Pérez, Nilsa A; Ordoñez, Jenny C
2016-07-01
Ecosystem service-based management requires an accurate understanding of how human modification influences ecosystem processes and these relationships are most accurate when based on functional traits. Although trait variation is typically sampled at local scales, remote sensing methods can facilitate scaling up trait variation to regional scales needed for ecosystem service management. We review concepts and methods for scaling up plant and animal functional traits from local to regional spatial scales with the goal of assessing impacts of human modification on ecosystem processes and services. We focus our objectives on considerations and approaches for (1) conducting local plot-level sampling of trait variation and (2) scaling up trait variation to regional spatial scales using remotely sensed data. We show that sampling methods for scaling up traits need to account for the modification of trait variation due to land cover change and species introductions. Sampling intraspecific variation, stratification by land cover type or landscape context, or inference of traits from published sources may be necessary depending on the traits of interest. Passive and active remote sensing are useful for mapping plant phenological, chemical, and structural traits. Combining these methods can significantly improve their capacity for mapping plant trait variation. These methods can also be used to map landscape and vegetation structure in order to infer animal trait variation. Due to high context dependency, relationships between trait variation and remotely sensed data are not directly transferable across regions. We end our review with a brief synthesis of issues to consider and outlook for the development of these approaches. Research that relates typical functional trait metrics, such as the community-weighted mean, with remote sensing data and that relates variation in traits that cannot be remotely sensed to other proxies is needed. Our review narrows the gap between functional trait and remote sensing methods for ecosystem service management.
Variational method for integrating radial gradient field
NASA Astrophysics Data System (ADS)
Legarda-Saenz, Ricardo; Brito-Loeza, Carlos; Rivera, Mariano; Espinosa-Romero, Arturo
2014-12-01
We propose a variational method for integrating information obtained from circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup and then move to formulate the problem of recovering the wavefront using techniques from calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.
NASA Astrophysics Data System (ADS)
Kowalski, Dariusz
2017-06-01
The paper deals with the method to identify internal stresses in two-dimensional steel members. Steel members were investigated in the delivery stage and after assembly, by means of electric-arc welding. In order to perform the member assessment two methods to identify the stress variation were applied. The first is a non-destructive measurement method employing local external magnetic field and to detecting the induced voltage, including Barkhausen noise The analysis of the latter allows to assess internal stresses in a surface layer of the material. The second method, essential in the paper, is a semi-trepanation Mathar method of tensometric strain variation measurement in the course of a controlled void-making in the material. Variation of internal stress distribution in the material led to the choice of welding technology to join. The assembly process altered the actual stresses and made up new stresses, triggering post-welding stresses as a response for the excessive stress variation.
Estimation and Partitioning of Heritability in Human Populations using Whole Genome Analysis Methods
Vinkhuyzen, Anna AE; Wray, Naomi R; Yang, Jian; Goddard, Michael E; Visscher, Peter M
2014-01-01
Understanding genetic variation of complex traits in human populations has moved from the quantification of the resemblance between close relatives to the dissection of genetic variation into the contributions of individual genomic loci. But major questions remain unanswered: how much phenotypic variation is genetic, how much of the genetic variation is additive and what is the joint distribution of effect size and allele frequency at causal variants? We review and compare three whole-genome analysis methods that use mixed linear models (MLM) to estimate genetic variation, using the relationship between close or distant relatives based on pedigree or SNPs. We discuss theory, estimation procedures, bias and precision of each method and review recent advances in the dissection of additive genetic variation of complex traits in human populations that are based upon the application of MLM. Using genome wide data, SNPs account for far more of the genetic variation than the highly significant SNPs associated with a trait, but they do not account for all of the genetic variance estimated by pedigree based methods. We explain possible reasons for this ‘missing’ heritability. PMID:23988118
Second-order variational equations for N-body simulations
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2016-07-01
First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
2018-04-01
systems containing ionized gases. 2. Gibbs Method in the Integral Form As per the Gibbs general methodology , based on the concept of heterogeneous...ARL-TR-8348 ● APR 2018 US Army Research Laboratory The Gibbs Variational Method in Thermodynamics of Equilibrium Plasma: 1...ARL-TR-8348 ● APR 2018 US Army Research Laboratory The Gibbs Variational Method in Thermodynamics of Equilibrium Plasma: 1. General
The Schwinger Variational Method
NASA Technical Reports Server (NTRS)
Huo, Winifred M.
1995-01-01
Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.
ERIC Educational Resources Information Center
Quinn, Terry; Rai, Sanjay
2012-01-01
The method of variation of parameters can be found in most undergraduate textbooks on differential equations. The method leads to solutions of the non-homogeneous equation of the form y = u[subscript 1]y[subscript 1] + u[subscript 2]y[subscript 2], a sum of function products using solutions to the homogeneous equation y[subscript 1] and…
Total generalized variation-regularized variational model for single image dehazing
NASA Astrophysics Data System (ADS)
Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen
2018-04-01
Imaging quality is often significantly degraded under hazy weather condition. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that the accurate estimation of depth information could assist in improving dehazing performance. In this paper, a detail-preserving variational model was proposed to simultaneously estimate haze-free image and depth map. In particular, the total variation (TV) and total generalized variation (TGV) regularizers were introduced to restrain haze-free image and depth map, respectively. The resulting nonsmooth optimization problem was efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare our proposed method with several state-of-the-art dehazing methods. Results have illustrated the superior performance of the proposed method in terms of visual quality evaluation.
Variational formulation of high performance finite elements: Parametrized variational principles
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.; Militello, Carmello
1991-01-01
High performance elements are simple finite elements constructed to deliver engineering accuracy with coarse arbitrary grids. This is part of a series on the variational basis of high-performance elements, with emphasis on those constructed with the free formulation (FF) and assumed natural strain (ANS) methods. Parametrized variational principles that provide a foundation for the FF and ANS methods, as well as for a combination of both are presented.
Song, Junqiang; Leng, Hongze; Lu, Fengshun
2014-01-01
We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303
Iterative Nonlocal Total Variation Regularization Method for Image Restoration
Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen
2013-01-01
In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experiment results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
Accurate sparse-projection image reconstruction via nonlocal TV regularization.
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of projection data will cause degeneration of imaging quality. As a typical compressive sensing method, total variation has obtained great attention on this problem. Suffering from the theoretical imperfection, total variation will produce blocky effect on smooth regions and blur edges. To overcome this problem, in this paper, we introduce the nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Comparing to other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and reserves structure information better.
Application of Variational Methods to the Thermal Entrance Region of Ducts
NASA Technical Reports Server (NTRS)
Sparrow, E. M.; Siegel. R.
1960-01-01
A variational method is presented for solving eigenvalue problems which arise in connection with the analysis of convective heat transfer in the thermal entrance region of ducts. Consideration is given, to both situations where the temperature profile depends upon one cross-sectional coordinate (e.g. circular tube) or upon two cross-sectional coordinates (e.g. rectangular duct). The variational method is illustrated and verified by application to laminar heat transfer in a circular tube and a parallel-plate channel, and good agreement with existing numerical solutions is attained. Then, application is made to laminar heat transfer in a square duct as a check, an alternate computation for the square duct is made using a method indicated by Misaps and Pohihausen. The variational method can, in principle, also be applied to problems in turbulent heat transfer.
Study of weak solutions for parabolic variational inequalities with nonstandard growth conditions.
Dong, Yan
2018-01-01
In this paper, we study the degenerate parabolic variational inequality problem in a bounded domain. First, the weak solutions of the variational inequality are defined. Second, the existence and uniqueness of the solutions in the weak sense are proved by using the penalty method and the reduction method.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Farmer, Donald A.
1998-01-01
Abrasive cut-off wheels are at times unintentionally manufactured with nonuniformity that is difficult to identify and sufficiently characterize without time-consuming, destructive examination. One particular nonuniformity is a density variation condition occurring around the wheel circumference or along the radius, or both. This density variation, depending on its severity, can cause wheel warpage and wheel vibration resulting in unacceptable performance and perhaps premature failure of the wheel. Conventional nondestructive evaluation methods such as ultrasonic c-scan imaging and film radiography are inaccurate in their attempts at characterizing the density variation because a superimposing thickness variation exists as well in the wheel. In this article, the single transducer thickness-independent ultrasonic imaging method, developed specifically to allow more accurate characterization of aerospace components, is shown to precisely characterize the extent of the density variation in a cut-off wheel having a superimposing thickness variation. The method thereby has potential as an effective quality control tool in the abrasives industry for the wheel manufacturer.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including Lagrange multiplier, and to give a simpler formulation of Adomian decomposition and modified Adomian decomposition method in terms of newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and Adomian decomposition method, we find unnecessary calculations for Lagrange multiplier and also repeated calculations involved in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.
A Comparison of Cut Scores Using Multiple Standard Setting Methods.
ERIC Educational Resources Information Center
Impara, James C.; Plake, Barbara S.
This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…
Solution of the Time-Dependent Schrödinger Equation by the Laplace Transform Method
Lin, S. H.; Eyring, H.
1971-01-01
The time-dependent Schrödinger equation for two quite general types of perturbation has been solved by introducing the Laplace transforms to eliminate the time variable. The resulting time-independent differential equation can then be solved by the perturbation method, the variation method, the variation-perturbation method, and other methods. PMID:16591898
Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S
2008-05-09
Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (in situ or one-step method, saponification method, classic method and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, showing higher variation than the former methods. The combination of extraction and methylation steps had great recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However the classic method would be the method of choice for the determination of the different lipid classes.
NASA Astrophysics Data System (ADS)
Huang, Z.; Chen, Q.; Shen, Y.; Chen, Q.; Liu, X.
2017-09-01
Variational pansharpening can enhance the spatial resolution of a hyperspectral (HS) image using a high-resolution panchromatic (PAN) image. However, this technology may lead to spectral distortion that obviously affect the accuracy of data analysis. In this article, we propose an improved variational method for HS image pansharpening with the constraint of spectral difference minimization. We extend the energy function of the classic variational pansharpening method by adding a new spectral fidelity term. This fidelity term is designed following the definition of spectral angle mapper, which means that for every pixel, the spectral difference value of any two bands in the HS image is in equal proportion to that of the two corresponding bands in the pansharpened image. Gradient descent method is adopted to find the optimal solution of the modified energy function, and the pansharpened image can be reconstructed. Experimental results demonstrate that the constraint of spectral difference minimization is able to preserve the original spectral information well in HS images, and reduce the spectral distortion effectively. Compared to original variational method, our method performs better in both visual and quantitative evaluation, and achieves a good trade-off between spatial and spectral information.
Takahashi, M; Tango, T
2001-05-01
As methods for estimating excess mortality associated with influenza-epidemic, the Serfling's cyclical regression model and the Kawai and Fukutomi model with seasonal indices have been proposed. Excess mortality under the old definition (i.e., the number of deaths actually recorded in excess of the number expected on the basis of past seasonal experience) covers the random error for that portion of variation regarded as due to chance. In addition, it disregards the range of random variation of mortality with the season. In this paper, we propose a new definition of excess mortality associated with influenza-epidemics and a new estimation method, considering these questions with the Kawai and Fukutomi method. The new definition of excess mortality and a novel method for its estimation were generated as follows. Factors bringing about variation in mortality in months with influenza-epidemics may be divided into two groups: 1. Influenza itself, 2. others (practically random variation). The range of variation of mortality due to the latter (normal range) can be estimated from the range for months in the absence of influenza-epidemics. Excess mortality is defined as death over the normal range. A new definition of excess mortality associated with influenza-epidemics and an estimation method are proposed. The new method considers variation in mortality in months in the absence of influenza-epidemics. Consequently, it provides reasonable estimates of excess mortality by separating the portion of random variation. Further, it is a characteristic that the proposed estimate can be used as a criterion of statistical significance test.
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
Identification of Vibrotactile Patterns Encoding Obstacle Distance Information.
Kim, Yeongmi; Harders, Matthias; Gassert, Roger
2015-01-01
Delivering distance information of nearby obstacles from sensors embedded in a white cane-in addition to the intrinsic mechanical feedback from the cane-can aid the visually impaired in ambulating independently. Haptics is a common modality for conveying such information to cane users, typically in the form of vibrotactile signals. In this context, we investigated the effect of tactile rendering methods, tactile feedback configurations and directions of tactile flow on the identification of obstacle distance. Three tactile rendering methods with temporal variation only, spatio-temporal variation and spatial/temporal/intensity variation were investigated for two vibration feedback configurations. Results showed a significant interaction between tactile rendering method and feedback configuration. Spatio-temporal variation generally resulted in high correct identification rates for both feedback configurations. In the case of the four-finger vibration, tactile rendering with spatial/temporal/intensity variation also resulted in high distance identification rate. Further, participants expressed their preference for the four-finger vibration over the single-finger vibration in a survey. Both preferred rendering methods with spatio-temporal variation and spatial/temporal/intensity variation for the four-finger vibration could convey obstacle distance information with low workload. Overall, the presented findings provide valuable insights and guidance for the design of haptic displays for electronic travel aids for the visually impaired.
Survey Shows Variation in Ph.D. Methods Training.
ERIC Educational Resources Information Center
Steeves, Leslie; And Others
1983-01-01
Reports on a 1982 survey of journalism graduate studies indicating considerable variation in research methods requirements and emphases in 23 universities offering doctoral degrees in mass communication. (HOD)
Plate equations for piezoelectrically actuated flexural mode ultrasound transducers.
Perçin, Gökhan
2003-01-01
This paper considers variational methods to derive two-dimensional plate equations for piezoelectrically actuated flexural mode ultrasound transducers. In the absence of analytical expressions for the equivalent circuit parameters of a flexural mode transducer, it is difficult to calculate its optimal parameters and dimensions, and to choose suitable materials. The influence of coupling between flexural and extensional deformation, and coupling between the structure and the acoustic volume on the dynamic response of piezoelectrically actuated flexural mode transducer is analyzed using variational methods. Variational methods are applied to derive two-dimensional plate equations for the transducer, and to calculate the coupled electromechanical field variables. In these methods, the variations across the thickness direction vanish by using the stress resultants. Thus, two-dimensional plate equations for a stepwise laminated circular plate are obtained.
NASA Astrophysics Data System (ADS)
Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.
2012-11-01
Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates)
NASA Astrophysics Data System (ADS)
Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy
2016-11-01
The entropy-variation of a battery is responsible for heat generation or consumption during operation and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method which is considered as a reference. However, it requires several days or weeks to get a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained with several current rates to measurements made with the potentiometric method.
Quantification of the Barkhausen noise method for the evaluation of time-dependent degradation
NASA Astrophysics Data System (ADS)
Kim, Dong-Won; Kwon, Dongil
2003-02-01
The Barkhausen noise (BN) method has long been applied to measure the bulk magnetic properties of magnetic materials. Recently, this important nondestructive testing (NDT) method has been applied to evaluate microstructure, stress distribution analysis, fatigue, creep and fracture characteristics. Until now the BN method has been used only qualitatively in evaluating the variation of BN with variations in material properties. For this reason, few NDT methods have been applied in industrial plants and laboratories. The present investigation studied the coercive force and BN while varying the microstructure of ultrafine-grained steels and SA508 cl.3 steels. This variation was carried out according to the second heat-treatment condition with rolling of ultrafine-grained steels and the simulated time-dependent degradation of SA 508 cl.3 steels. An attempt was also made to quantify BN from the relationship between the velocity of magnetic domain walls and the retarding force, using the coercive force of the domain wall movement. The microstructure variation was analyzed according to time-dependent degradation. Fracture toughness was evaluated quantitatively by measuring the BN from two intermediary parameters; grain size and distribution of nonmagnetic particles. From these measurements, the variation of microstructure and fracture toughness can be directly evaluated by the BN method as an accurate in situ NDT method.
Augmented classical least squares multivariate spectral analysis
Haaland, David M.; Melgaard, David K.
2004-02-03
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-07-26
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-01-11
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
NASA Astrophysics Data System (ADS)
Qiu, Tianlong; Zhang, Libin; Zhang, Tao; Bai, Yucen; Yang, Hongsheng
2014-07-01
There is substantial individual variation in the growth rates of sea cucumber Apostichopus japonicus individuals. This necessitates additional work to grade the seed stock and lengthens the production period. We evaluated the influence of three culture methods (free-mixed, isolated-mixed, isolated-alone) on individual variation in growth and assessed the relationship between feeding, energy conversion efficiency, and individual growth variation in individually cultured sea cucumbers. Of the different culture methods, animals grew best when reared in the isolated-mixed treatment (i.e., size classes were held separately), though there was no difference in individual variation in growth between rearing treatment groups. The individual variation in growth was primarily attributed to genetic factors. The difference in food conversion efficiency caused by genetic differences among individuals was thought to be the origin of the variance. The level of individual growth variation may be altered by interactions among individuals and environmental heterogeneity. Our results suggest that, in addition to traditional seed grading, design of a new kind of substrate that changes the spatial distribution of sea cucumbers would effectively enhance growth and reduce individual variation in growth of sea cucumbers in culture.
Statistics, Uncertainty, and Transmitted Variation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, Joanne Roth
2014-11-05
The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
Reconstruction of fluorophore concentration variation in dynamic fluorescence molecular tomography.
Zhang, Xuanxuan; Liu, Fei; Zuo, Simin; Shi, Junwei; Zhang, Guanglei; Bai, Jing; Luo, Jianwen
2015-01-01
Dynamic fluorescence molecular tomography (DFMT) is a potential approach for drug delivery, tumor detection, diagnosis, and staging. The purpose of DFMT is to quantify the changes of fluorescent agents in the bodies, which offer important information about the underlying physiological processes. However, the conventional method requires that the fluorophore concentrations to be reconstructed are stationary during the data collection period. As thus, it cannot offer the dynamic information of fluorophore concentration variation within the data collection period. In this paper, a method is proposed to reconstruct the fluorophore concentration variation instead of the fluorophore concentration through a linear approximation. The fluorophore concentration variation rate is introduced by the linear approximation as a new unknown term to be reconstructed and is used to obtain the time courses of fluorophore concentration. Simulation and phantom studies are performed to validate the proposed method. The results show that the method is able to reconstruct the fluorophore concentration variation rates and the time courses of fluorophore concentration with relative errors less than 0.0218.
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of total uncertainty of analytical methods for the measurements of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly, in segmental hair analysis pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from the genuine duplicate measurements of two bundles of hair collected from each subject after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7 folds larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Yoshida, Hiroyuki; Shibata, Hiroko; Izutsu, Ken-Ichi; Goda, Yukihiro
2017-01-01
The current Japanese Ministry of Health Labour and Welfare (MHLW)'s Guideline for Bioequivalence Studies of Generic Products uses averaged dissolution rates for the assessment of dissolution similarity between test and reference formulations. This study clarifies how the application of model-independent multivariate confidence region procedure (Method B), described in the European Medical Agency and U.S. Food and Drug Administration guidelines, affects similarity outcomes obtained empirically from dissolution profiles with large variations in individual dissolution rates. Sixty-one datasets of dissolution profiles for immediate release, oral generic, and corresponding innovator products that showed large variation in individual dissolution rates in generic products were assessed on their similarity by using the f 2 statistics defined in the MHLW guidelines (MHLW f 2 method) and two different Method B procedures, including a bootstrap method applied with f 2 statistics (BS method) and a multivariate analysis method using the Mahalanobis distance (MV method). The MHLW f 2 and BS methods provided similar dissolution similarities between reference and generic products. Although a small difference in the similarity assessment may be due to the decrease in the lower confidence interval for expected f 2 values derived from the large variation in individual dissolution rates, the MV method provided results different from those obtained through MHLW f 2 and BS methods. Analysis of actual dissolution data for products with large individual variations would provide valuable information towards an enhanced understanding of these methods and their possible incorporation in the MHLW guidelines.
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. L1-norm data-fidelity term and total variation (TV) regularizer have been combined to contribute the popular regularization method. However, the TV-regularized variational image deblurring model often suffers from the staircase-like artifacts leading to image quality degradation. To enhance image quality, the detailpreserving total generalized variation (TGV) was introduced to replace TV to eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.
NASA Technical Reports Server (NTRS)
Roth, Don J.
1996-01-01
This article describes a single transducer ultrasonic imaging method that eliminates the effect of plate thickness variation in the image. The method thus isolates ultrasonic variations due to material microstructure. The use of this method can result in significant cost savings because the ultrasonic image can be interpreted correctly without the need for machining to achieve precise thickness uniformity during nondestructive evaluations of material development. The method is based on measurement of ultrasonic velocity. Images obtained using the thickness-independent methodology are compared with conventional velocity and c-scan echo peak amplitude images for monolithic ceramic (silicon nitride), metal matrix composite and polymer matrix composite materials. It was found that the thickness-independent ultrasonic images reveal and quantify correctly areas of global microstructural (pore and fiber volume fraction) variation due to the elimination of thickness effects. The thickness-independent ultrasonic imaging method described in this article is currently being commercialized under a cooperative agreement between NASA Lewis Research Center and Sonix, Inc.
Alternative to the Palatini method: A new variational principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goenner, Hubert
2010-06-15
A variational principle is suggested within Riemannian geometry, in which an auxiliary metric and the Levi Civita connection are varied independently. The auxiliary metric plays the role of a Lagrange multiplier and introduces nonminimal coupling of matter to the curvature scalar. The field equations are 2nd order PDEs and easier to handle than those following from the so-called Palatini method. Moreover, in contrast to the latter method, no gradients of the matter variables appear. In cosmological modeling, the physics resulting from the alternative variational principle will differ from the modeling using the standard Palatini method.
Hints of correlation between broad-line and radio variations for 3C 120
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H. T.; Bai, J. M.; Li, S. K.
2014-01-01
In this paper, we investigate the correlation between broad-line and radio variations for the broad-line radio galaxy 3C 120. By the z-transformed discrete correlation function method and the model-independent flux randomization/random subset selection (FR/RSS) Monte Carlo method, we find that broad Hβ line variations lead the 15 GHz variations. The FR/RSS method shows that the Hβ line variations lead the radio variations by a factor of τ{sub ob} = 0.34 ± 0.01 yr. This time lag can be used to locate the position of the emitting region of radio outbursts in the jet, on the order of ∼5 lt-yr frommore » the central engine. This distance is much larger than the size of the broad-line region. The large separation of the radio outburst emitting region from the broad-line region will observably influence the gamma-ray emission in 3C 120.« less
Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor
2015-01-01
We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors to define distance based similarity in pattern classification, inspired in relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by doing linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods using publicly available datasets, for some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.
A Finite Mixture Method for Outlier Detection and Robustness in Meta-Analysis
ERIC Educational Resources Information Center
Beath, Ken J.
2014-01-01
When performing a meta-analysis unexplained variation above that predicted by within study variation is usually modeled by a random effect. However, in some cases, this is not sufficient to explain all the variation because of outlier or unusual studies. A previously described method is to define an outlier as a study requiring a higher random…
A variationally coupled FE-BE method for elasticity and fracture mechanics
NASA Technical Reports Server (NTRS)
Lu, Y. Y.; Belytschko, T.; Liu, W. K.
1991-01-01
A new method for coupling finite element and boundary element subdomains in elasticity and fracture mechanics problems is described. The essential feature of this new method is that a single variational statement is obtained for the entire domain, and in this process the terms associated with tractions on the interfaces between the subdomains are eliminated. This provides the additional advantage that the ambiguities associated with the matching of discontinuous tractions are circumvented. The method leads to a direct procedure for obtaining the discrete equations for the coupled problem without any intermediate steps. In order to evaluate this method and compare it with previous methods, a patch test for coupled procedures has been devised. Evaluation of this variationally coupled method and other methods, such as stiffness coupling and constraint traction matching coupling, shows that this method is substantially superior. Solutions for a series of fracture mechanics problems are also reported to illustrate the effectiveness of this method.
NASA Astrophysics Data System (ADS)
Libarir, K.; Zerarka, A.
2018-05-01
Exact eigenspectra and eigenfunctions of the Dirac quantum equation are established using the semi-inverse variational method. This method improves of a considerable manner the efficiency and accuracy of results compared with the other usual methods much argued in the literature. Some applications for different state configurations are proposed to concretize the method.
NASA Astrophysics Data System (ADS)
Li, Y. Chao; Ding, Q.; Gao, Y.; Ran, L. Ling; Yang, J. Ru; Liu, C. Yu; Wang, C. Hui; Sun, J. Feng
2014-07-01
This paper proposes a novel method of multi-beam laser heterodyne measurement for Young modulus. Based on Doppler effect and heterodyne technology, loaded the information of length variation to the frequency difference of the multi-beam laser heterodyne signal by the frequency modulation of the oscillating mirror, this method can obtain many values of length variation caused by mass variation after the multi-beam laser heterodyne signal demodulation simultaneously. Processing these values by weighted-average, it can obtain length variation accurately, and eventually obtain value of Young modulus of the sample by the calculation. This novel method is used to simulate measurement for Young modulus of wire under different mass by MATLAB, the obtained result shows that the relative measurement error of this method is just 0.3%.
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variations difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. It is a time consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes that are loaded into the multi-processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods.
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variations difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. It is a time consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes that are loaded into the multi-processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods. PMID:29438421
27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...
27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...
27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...
27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...
27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...
Resolving Rapid Variation in Energy for Particle Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haut, Terry Scot; Ahrens, Cory Douglas; Jonko, Alexandra
2016-08-23
Resolving the rapid variation in energy in neutron and thermal radiation transport is needed for the predictive simulation capability in high-energy density physics applications. Energy variation is difficult to resolve due to rapid variations in cross sections and opacities caused by quantized energy levels in the nuclei and electron clouds. In recent work, we have developed a new technique to simultaneously capture slow and rapid variations in the opacities and the solution using homogenization theory, which is similar to multiband (MB) and to the finite-element with discontiguous support (FEDS) method, but does not require closure information. We demonstrated the accuracymore » and efficiency of the method for a variety of problems. We are researching how to extend the method to problems with multiple materials and the same material but with different temperatures and densities. In this highlight, we briefly describe homogenization theory and some results.« less
Some problems in applications of the linear variational method
NASA Astrophysics Data System (ADS)
Pupyshev, Vladimir I.; Montgomery, H. E.
2015-09-01
The linear variational method is a standard computational method in quantum mechanics and quantum chemistry. As taught in most classes, the general guidance is to include as many basis functions as practical in the variational wave function. However, if it is desired to study the patterns of energy change accompanying the change of system parameters such as the shape and strength of the potential energy, the problem becomes more complicated. We use one-dimensional systems with a particle in a rectangular or in a harmonic potential confined in an infinite rectangular box to illustrate situations where a variational calculation can give incorrect results. These situations result when the energy of the lowest eigenvalue is strongly dependent on the parameters that describe the shape and strength of the potential. The numerical examples described in this work are provided as cautionary notes for practitioners of numerical variational calculations.
Make no mistake—errors can be controlled*
Hinckley, C
2003-01-01
Traditional quality control methods identify "variation" as the enemy. However, the control of variation by itself can never achieve the remarkably low non-conformance rates of world class quality leaders. Because the control of variation does not achieve the highest levels of quality, an inordinate focus on these techniques obscures key quality improvement opportunities and results in unnecessary pain and suffering for patients, and embarrassment, litigation, and loss of revenue for healthcare providers. Recent experience has shown that mistakes are the most common cause of problems in health care as well as in other industrial environments. Excessive product and process complexity contributes to both excessive variation and unnecessary mistakes. The best methods for controlling variation, mistakes, and complexity are each a form of mistake proofing. Using these mistake proofing techniques, virtually every mistake and non-conformance can be controlled at a fraction of the cost of traditional quality control methods. PMID:14532368
Variational Algorithms for Test Particle Trajectories
NASA Astrophysics Data System (ADS)
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
A parametric method for assessing diversification-rate variation in phylogenetic trees.
Shah, Premal; Fitzpatrick, Benjamin M; Fordyce, James A
2013-02-01
Phylogenetic hypotheses are frequently used to examine variation in rates of diversification across the history of a group. Patterns of diversification-rate variation can be used to infer underlying ecological and evolutionary processes responsible for patterns of cladogenesis. Most existing methods examine rate variation through time. Methods for examining differences in diversification among groups are more limited. Here, we present a new method, parametric rate comparison (PRC), that explicitly compares diversification rates among lineages in a tree using a variety of standard statistical distributions. PRC can identify subclades of the tree where diversification rates are at variance with the remainder of the tree. A randomization test can be used to evaluate how often such variance would appear by chance alone. The method also allows for comparison of diversification rate among a priori defined groups. Further, the application of the PRC method is not restricted to monophyletic groups. We examined the performance of PRC using simulated data, which showed that PRC has acceptable false-positive rates and statistical power to detect rate variation. We apply the PRC method to the well-studied radiation of North American Plethodon salamanders, and support the inference that the large-bodied Plethodon glutinosus clade has a higher historical rate of diversification compared to other Plethodon salamanders. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
NASA Technical Reports Server (NTRS)
Roth, Don J.; Seebo, Jeffrey P.; Winfree, William P.
2008-01-01
This article describes a noncontact single-sided terahertz electromagnetic measurement and imaging method that simultaneously characterizes microstructural (egs. spatially-lateral density) and thickness variation in dielectric (insulating) materials. The method was demonstrated for two materials-Space Shuttle External Tank sprayed-on foam insulation and a silicon nitride ceramic. It is believed that this method can be used as an inspection method for current and future NASA thermal protection system and other dielectric material inspection applications, where microstructural and thickness variation require precision mapping. Scale-up to more complex shapes such as cylindrical structures and structures with beveled regions would appear to be feasible.
Nonperturbative calculations in the framework of variational perturbation theory in QCD
NASA Astrophysics Data System (ADS)
Solovtsova, O. P.
2017-07-01
We discuss applications of the method based on the variational perturbation theory to perform calculations down to the lowest energy scale. The variational series is different from the conventional perturbative expansion and can be used to go beyond the weak-coupling regime. We apply this method to investigate the Borel representation of the light Adler function constructed from the τ data and to determine the residual condensates. It is shown that within the method suggested the optimal values of these lower dimension condensates are close to zero.
Zhao, Jiang Yan; Xie, Ping; Sang, Yan Fang; Xui, Qiang Qiang; Wu, Zi Yi
2018-04-01
Under the influence of both global climate change and frequent human activities, the variability of second-moment in hydrological time series become obvious, indicating changes in the consistency of hydrological data samples. Therefore, the traditional hydrological series analysis methods, which only consider the variability of mean values, are not suitable for handling all hydrological non-consistency problems. Traditional synthetic duration curve methods for the design of the lowest navigable water level, based on the consistency of samples, would cause more risks to navigation, especially under low water level in dry seasons. Here, we detected both mean variation and variance variation using the hydrological variation diagnosis system. Furthermore, combing the principle of decomposition and composition of time series, we proposed the synthetic duration curve method for designing the lowest navigable water level with inconsistent characters in dry seasons. With the Yunjinghong Station in the Lancang River Basin as an example, we analyzed its designed water levels in the present, the distant past and the recent past, as well as the differences among three situations (i.e., considering second moment variation, only considering mean variation, not considering any variation). Results showed that variability of the second moment changed the trend of designed water levels alteration in the Yunjinghong Station. When considering the first two moments or just considering the mean variation, the difference ofdesigned water levels was as bigger as -1.11 m. When considering the first two moments or not, the difference of designed water levels was as bigger as -1.01 m. Our results indicated the strong effects of variance variation on the designed water levels, and highlighted the importance of the second moment variation analysis for the channel planning and design.
CNV-seq, a new method to detect copy number variation using high-throughput sequencing.
Xie, Chao; Tammi, Martti T
2009-03-06
DNA copy number variation (CNV) has been recognized as an important source of genetic variation. Array comparative genomic hybridization (aCGH) is commonly used for CNV detection, but the microarray platform has a number of inherent limitations. Here, we describe a method to detect copy number variation using shotgun sequencing, CNV-seq. The method is based on a robust statistical model that describes the complete analysis procedure and allows the computation of essential confidence values for detection of CNV. Our results show that the number of reads, not the length of the reads is the key factor determining the resolution of detection. This favors the next-generation sequencing methods that rapidly produce large amount of short reads. Simulation of various sequencing methods with coverage between 0.1x to 8x show overall specificity between 91.7 - 99.9%, and sensitivity between 72.2 - 96.5%. We also show the results for assessment of CNV between two individual human genomes.
Removal of ring artifacts in microtomography by characterization of scintillator variations.
Vågberg, William; Larsson, Jakob C; Hertz, Hans M
2017-09-18
Ring artifacts reduce image quality in tomography, and arise from faulty detector calibration. In microtomography, we have identified that ring artifacts can arise due to high-spatial frequency variations in the scintillator thickness. Such variations are normally removed by a flat-field correction. However, as the spectrum changes, e.g. due to beam hardening, the detector response varies non-uniformly introducing ring artifacts that persist after flat-field correction. In this paper, we present a method to correct for ring artifacts from variations in scintillator thickness by using a simple method to characterize the local scintillator response. The method addresses the actual physical cause of the ring artifacts, in contrary to many other ring artifact removal methods which rely only on image post-processing. By applying the technique to an experimental phantom tomography, we show that ring artifacts are strongly reduced compared to only making a flat-field correction.
Localization of a variational particle smoother
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Hodyss, D.; Poterjoy, J.
2017-12-01
Given the success of 4D-variational methods (4D-Var) in numerical weather prediction,and recent efforts to merge ensemble Kalman filters with 4D-Var,we consider a method to merge particle methods and 4D-Var.This leads us to revisit variational particle smoothers (varPS).We study the collapse of varPS in high-dimensional problemsand show how it can be prevented by weight-localization.We test varPS on the Lorenz'96 model of dimensionsn=40, n=400, and n=2000.In our numerical experiments, weight localization prevents the collapse of the varPS,and we note that the varPS yields results comparable to ensemble formulations of 4D-variational methods,while it outperforms EnKF with tuned localization and inflation,and the localized standard particle filter.Additional numerical experiments suggest that using localized weights in varPS may not yield significant advantages over unweighted or linearizedsolutions in near-Gaussian problems.
NASA Astrophysics Data System (ADS)
Li, Yan-Chao; Wang, Chun-Hui; Qu, Yang; Gao, Long; Cong, Hai-Fang; Yang, Yan-Ling; Gao, Jie; Wang, Ao-You
2011-01-01
This paper proposes a novel method of multi-beam laser heterodyne measurement for metal linear expansion coefficient. Based on the Doppler effect and heterodyne technology, the information is loaded of length variation to the frequency difference of the multi-beam laser heterodyne signal by the frequency modulation of the oscillating mirror, this method can obtain many values of length variation caused by temperature variation after the multi-beam laser heterodyne signal demodulation simultaneously. Processing these values by weighted-average, it can obtain length variation accurately, and eventually obtain the value of linear expansion coefficient of metal by the calculation. This novel method is used to simulate measurement for linear expansion coefficient of metal rod under different temperatures by MATLAB, the obtained result shows that the relative measurement error of this method is just 0.4%.
Wu, Gary D; Lewis, James D; Hoffmann, Christian; Chen, Ying-Yu; Knight, Rob; Bittinger, Kyle; Hwang, Jennifer; Chen, Jun; Berkowsky, Ronald; Nessel, Lisa; Li, Hongzhe; Bushman, Frederic D
2010-07-30
Intense interest centers on the role of the human gut microbiome in health and disease, but optimal methods for analysis are still under development. Here we present a study of methods for surveying bacterial communities in human feces using 454/Roche pyrosequencing of 16S rRNA gene tags. We analyzed fecal samples from 10 individuals and compared methods for storage, DNA purification and sequence acquisition. To assess reproducibility, we compared samples one cm apart on a single stool specimen for each individual. To analyze storage methods, we compared 1) immediate freezing at -80 degrees C, 2) storage on ice for 24 or 3) 48 hours. For DNA purification methods, we tested three commercial kits and bead beating in hot phenol. Variations due to the different methodologies were compared to variation among individuals using two approaches--one based on presence-absence information for bacterial taxa (unweighted UniFrac) and the other taking into account their relative abundance (weighted UniFrac). In the unweighted analysis relatively little variation was associated with the different analytical procedures, and variation between individuals predominated. In the weighted analysis considerable variation was associated with the purification methods. Particularly notable was improved recovery of Firmicutes sequences using the hot phenol method. We also carried out surveys of the effects of different 454 sequencing methods (FLX versus Titanium) and amplification of different 16S rRNA variable gene segments. Based on our findings we present recommendations for protocols to collect, process and sequence bacterial 16S rDNA from fecal samples--some major points are 1) if feasible, bead-beating in hot phenol or use of the PSP kit improves recovery; 2) storage methods can be adjusted based on experimental convenience; 3) unweighted (presence-absence) comparisons are less affected by lysis method.
An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method
NASA Astrophysics Data System (ADS)
Tang, J.
2012-01-01
Multiple signal classification (MUSIC) algorithms are introduced to the estimation of the period of variation of BL Lac objects.The principle of MUSIC spectral analysis method and theoretical analysis of the resolution of frequency spectrum using analog signals are included. From a lot of literatures, we have collected a lot of effective observation data of BL Lac object S5 0716 + 714 in V, R, I bands from 1994 to 2008. The light variation periods of S5 0716 +714 are obtained by means of the MUSIC spectral analysis method and periodogram spectral analysis method. There exist two major periods: (3.33±0.08) years and (1.24±0.01) years for all bands. The estimation of the period of variation of the algorithm based on the MUSIC spectral analysis method is compared with that of the algorithm based on the periodogram spectral analysis method. It is a super-resolution algorithm with small data length, and could be used to detect the period of variation of weak signals.
Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.
Baranwal, Vipul K; Pandey, Ram K; Singh, Om P
2014-01-01
We propose optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ 0, γ 1, γ 2,… and auxiliary functions H 0(x), H 1(x), H 2(x),… are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only few iterations are required to achieve fairly accurate solutions of both the first and second problems.
NASA Technical Reports Server (NTRS)
Horvitz, M. A.; Schoeller, D. A.
2001-01-01
The doubly labeled water method for measuring total energy expenditure is subject to error from natural variations in the background 2H and 18O in body water. There is disagreement as to whether the variations in background abundances of the two stable isotopes covary and what relative doses of 2H and 18O minimize the impact of variation on the precision of the method. We have performed two studies to investigate the amount and covariance of the background variations. These were a study of urine collected weekly from eight subjects who remained in the Madison, WI locale for 6 wk and frequent urine samples from 14 subjects during round-trip travel to a locale > or = 500 miles from Madison, WI. Background variation in excess of analytical error was detected in six of the eight nontravelers, and covariance was demonstrated in four subjects. Background variation was detected in all 14 travelers, and covariance was demonstrated in 11 subjects. The median slopes of the regression lines of delta2H vs. delta18O were 6 and 7, respectively. Modeling indicated that 2H and 18O doses yielding a 6:1 ratio of final enrichments should minimize this error introduced to the doubly labeled water method.
Horvitz, M A; Schoeller, D A
2001-06-01
The doubly labeled water method for measuring total energy expenditure is subject to error from natural variations in the background 2H and 18O in body water. There is disagreement as to whether the variations in background abundances of the two stable isotopes covary and what relative doses of 2H and 18O minimize the impact of variation on the precision of the method. We have performed two studies to investigate the amount and covariance of the background variations. These were a study of urine collected weekly from eight subjects who remained in the Madison, WI locale for 6 wk and frequent urine samples from 14 subjects during round-trip travel to a locale > or = 500 miles from Madison, WI. Background variation in excess of analytical error was detected in six of the eight nontravelers, and covariance was demonstrated in four subjects. Background variation was detected in all 14 travelers, and covariance was demonstrated in 11 subjects. The median slopes of the regression lines of delta2H vs. delta18O were 6 and 7, respectively. Modeling indicated that 2H and 18O doses yielding a 6:1 ratio of final enrichments should minimize this error introduced to the doubly labeled water method.
An improved correlation method for determining the period of a torsion pendulum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo Jie; Wang Dianhong
Considering variation of environment temperature and unhomogeneity of background gravitational field, an improved correlation method was proposed to determine the variational period of a torsion pendulum with high precision. The result of processing experimental data shows that the uncertainty of determining the period with this method has been improved about twofolds than traditional correlation method, which is significant for the determination of gravitational constant with time-of-swing method.
Partitioning sources of variation in vertebrate species richness
Boone, R.B.; Krohn, W.B.
2000-01-01
Aim: To explore biogeographic patterns of terrestrial vertebrates in Maine, USA using techniques that would describe local and spatial correlations with the environment. Location: Maine, USA. Methods: We delineated the ranges within Maine (86,156 km2) of 275 species using literature and expert review. Ranges were combined into species richness maps, and compared to geomorphology, climate, and woody plant distributions. Methods were adapted that compared richness of all vertebrate classes to each environmental correlate, rather than assessing a single explanatory theory. We partitioned variation in species richness into components using tree and multiple linear regression. Methods were used that allowed for useful comparisons between tree and linear regression results. For both methods we partitioned variation into broad-scale (spatially autocorrelated) and fine-scale (spatially uncorrelated) explained and unexplained components. By partitioning variance, and using both tree and linear regression in analyses, we explored the degree of variation in species richness for each vertebrate group that Could be explained by the relative contribution of each environmental variable. Results: In tree regression, climate variation explained richness better (92% of mean deviance explained for all species) than woody plant variation (87%) and geomorphology (86%). Reptiles were highly correlated with environmental variation (93%), followed by mammals, amphibians, and birds (each with 84-82% deviance explained). In multiple linear regression, climate was most closely associated with total vertebrate richness (78%), followed by woody plants (67%) and geomorphology (56%). Again, reptiles were closely correlated with the environment (95%), followed by mammals (73%), amphibians (63%) and birds (57%). Main conclusions: Comparing variation explained using tree and multiple linear regression quantified the importance of nonlinear relationships and local interactions between species richness and environmental variation, identifying the importance of linear relationships between reptiles and the environment, and nonlinear relationships between birds and woody plants, for example. Conservation planners should capture climatic variation in broad-scale designs; temperatures may shift during climate change, but the underlying correlations between the environment and species richness will presumably remain.
Methods of determining complete sensor requirements for autonomous mobility
NASA Technical Reports Server (NTRS)
Curtis, Steven A. (Inventor)
2012-01-01
A method of determining complete sensor requirements for autonomous mobility of an autonomous system includes computing a time variation of each behavior of a set of behaviors of the autonomous system, determining mobility sensitivity to each behavior of the autonomous system, and computing a change in mobility based upon the mobility sensitivity to each behavior and the time variation of each behavior. The method further includes determining the complete sensor requirements of the autonomous system through analysis of the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior, wherein the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior are characteristic of the stability of the autonomous system.
The Schwinger Variational Method
NASA Technical Reports Server (NTRS)
Huo, Winifred M.
1995-01-01
Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. The application of the Schwinger variational (SV) method to e-molecule collisions and molecular photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions. Since this is not a review of cross section data, cross sections are presented only to server as illustrative examples. In the SV method, the correct boundary condition is automatically incorporated through the use of Green's function. Thus SV calculations can employ basis functions with arbitrary boundary conditions. The iterative Schwinger method has been used extensively to study molecular photoionization. For e-molecule collisions, it is used at the static exchange level to study elastic scattering and coupled with the distorted wave approximation to study electronically inelastic scattering.
Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.
ERIC Educational Resources Information Center
Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.
This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…
NASA Astrophysics Data System (ADS)
Ponte Castañeda, Pedro
2016-11-01
This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second-order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composites materials, whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.
Bjorgaard, J. A.; Velizhanin, K. A.; Tretiak, S.
2015-08-06
This study describes variational energy expressions and analytical excited state energy gradients for time-dependent self-consistent field methods with polarizable solvent effects. Linear response, vertical excitation, and state-specific solventmodels are examined. Enforcing a variational ground stateenergy expression in the state-specific model is found to reduce it to the vertical excitation model. Variational excited state energy expressions are then provided for the linear response and vertical excitation models and analytical gradients are formulated. Using semiempiricalmodel chemistry, the variational expressions are verified by numerical and analytical differentiation with respect to a static external electric field. Lastly, analytical gradients are further tested by performingmore » microcanonical excited state molecular dynamics with p-nitroaniline.« less
Microfluidic-Based Measurement Method of Red Blood Cell Aggregation under Hematocrit Variations
2017-01-01
Red blood cell (RBC) aggregation and erythrocyte sedimentation rate (ESR) are considered to be promising biomarkers for effectively monitoring blood rheology at extremely low shear rates. In this study, a microfluidic-based measurement technique is suggested to evaluate RBC aggregation under hematocrit variations due to the continuous ESR. After the pipette tip is tightly fitted into an inlet port, a disposable suction pump is connected to the outlet port through a polyethylene tube. After dropping blood (approximately 0.2 mL) into the pipette tip, the blood flow can be started and stopped by periodically operating a pinch valve. To evaluate variations in RBC aggregation due to the continuous ESR, an EAI (Erythrocyte-sedimentation-rate Aggregation Index) is newly suggested, which uses temporal variations of image intensity. To demonstrate the proposed method, the dynamic characterization of the disposable suction pump is first quantitatively measured by varying the hematocrit levels and cavity volume of the suction pump. Next, variations in RBC aggregation and ESR are quantified by varying the hematocrit levels. The conventional aggregation index (AI) is maintained constant, unrelated to the hematocrit values. However, the EAI significantly decreased with respect to the hematocrit values. Thus, the EAI is more effective than the AI for monitoring variations in RBC aggregation due to the ESR. Lastly, the proposed method is employed to detect aggregated blood and thermally-induced blood. The EAI gradually increased as the concentration of a dextran solution increased. In addition, the EAI significantly decreased for thermally-induced blood. From this experimental demonstration, the proposed method is able to effectively measure variations in RBC aggregation due to continuous hematocrit variations, especially by quantifying the EAI. PMID:28878199
Pramudya, Ragita C; Seo, Han-Seok
2018-03-01
Temperatures of most hot or cold meal items change over the period of consumption, possibly influencing sensory perception of those items. Unlike temporal variations in sensory attributes, product temperature-induced variations have not received much attention. Using a Check-All-That-Apply (CATA) method, this study aimed to characterize variations in sensory attributes over a wide range of temperatures at which hot or cold foods and beverages may be consumed. Cooked milled rice, typically consumed at temperatures between 70 and 30°C in many rice-eating countries, was used as a target sample in this study. Two brands of long-grain milled rice were cooked and randomly presented at 70, 60, 50, 40, and 30°C. Thirty-five CATA terms for cooked milled rice were generated. Eighty-eight untrained panelists were asked to quickly select all the CATA terms that they considered appropriate to characterize sensory attributes of cooked rice samples presented at each temperature. Proportions of selection by panelists for 13 attributes significantly differed among the five temperature conditions. "Product temperature-dependent sensory-attribute variations" differed with two brands of milled rice grains. Such variations in sensory attributes, resulted from both product temperature and rice brand, were more pronounced among panelists who more frequently consumed rice. In conclusion, the CATA method can be useful for characterizing "product temperature-dependent sensory attribute variations" in cooked milled-rice samples. Further study is needed to examine whether the CATA method is also effective in capturing "product temperature-dependent sensory-attribute variations" in other hot or cold foods and beverages. Published by Elsevier Ltd.
An historical survey of computational methods in optimal control.
NASA Technical Reports Server (NTRS)
Polak, E.
1973-01-01
Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
Techniques of orbital decay and long-term ephemeris prediction for satellites in earth orbit
NASA Technical Reports Server (NTRS)
Barry, B. F.; Pimm, R. S.; Rowe, C. K.
1971-01-01
In the special perturbation method, Cowell and variation-of-parameters formulations of the motion equations are implemented and numerically integrated. Variations in the orbital elements due to drag are computed using the 1970 Jacchia atmospheric density model, which includes the effects of semiannual variations, diurnal bulge, solar activity, and geomagnetic activity. In the general perturbation method, two-variable asymptotic series and automated manipulation capabilities are used to obtain analytical solutions to the variation-of-parameters equations. Solutions are obtained considering the effect of oblateness only and the combined effects of oblateness and drag. These solutions are then numerically evaluated by means of a FORTRAN program in which an updating scheme is used to maintain accurate epoch values of the elements. The atmospheric density function is approximated by a Fourier series in true anomaly, and the 1970 Jacchia model is used to periodically update the Fourier coefficients. The accuracy of both methods is demonstrated by comparing computed orbital elements to actual elements over time spans of up to 8 days for the special perturbation method and up to 356 days for the general perturbation method.
Dueck, Hannah; Eberwine, James; Kim, Junhyong
2016-02-01
There is a growing appreciation of the extent of transcriptome variation across individual cells of the same cell type. While expression variation may be a byproduct of, for example, dynamic or homeostatic processes, here we consider whether single-cell molecular variation per se might be crucial for population-level function. Under this hypothesis, molecular variation indicates a diversity of hidden functional capacities within an ensemble of identical cells, and this functional diversity facilitates collective behavior that would be inaccessible to a homogenous population. In reviewing this topic, we explore possible functions that might be carried by a heterogeneous ensemble of cells; however, this question has proven difficult to test, both because methods to manipulate molecular variation are limited and because it is complicated to define, and measure, population-level function. We consider several possible methods to further pursue the hypothesis that variation is function through the use of comparative analysis and novel experimental techniques. © 2015 The Authors. BioEssays published by WILEY Periodicals, Inc.
Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie
2012-03-01
Weak signal, low instrument signal-to-noise ratio, continuous variation of human physiological environment and the interferences from other components in blood make it difficult to extract the blood glucose information from near infrared spectrum in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on absorption coefficient and scattering coefficient, gets spectrum at the reference point and the measurement point where the light intensity variations from absorption and scattering are counteractive and biggest respectively. By using the spectrum from reference point as reference, floating-reference method can reduce the interferences from variation of physiological environment and experiment circumstance. In the present paper, the effectiveness of floating-reference method working on improving prediction precision and stability was assessed through application experiments. The comparison was made between models whose data were processed with and without floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by 34.7% maximally. The floating-reference method could reduce the influences of changes of samples' state, instrument noises and drift, and improve the models' prediction precision and stability effectively.
A comparison of five methods for monitoring the precision of automated x-ray film processors.
Nickoloff, E L; Leo, F; Reese, M
1978-11-01
Five different methods for preparing sensitometric strips used to monitor the precision of automated film processors are compared. A method for determining the sensitivity of each system to processor variations is presented; the observed statistical variability is multiplied by the system response to temperature or chemical changes. Pre-exposed sensitometric strips required the use of accurate densitometers and stringent control limits to be effective. X-ray exposed sensitometric strips demonstrated large variations in the x-ray output (2 omega approximately equal to 8.0%) over a period of one month. Some light sensitometers were capable of detecting +/- 1.0 degrees F (+/- 0.6 degrees C) variations in developer temperature in the processor and/or about 10.0 ml of chemical contamination in the processor. Nevertheless, even the light sensitometers were susceptible to problems, e.g. film emulsion selection, line voltage variations, and latent image fading. Advantages and disadvantages of the various sensitometric methods are discussed.
The variational method in quantum mechanics: an elementary introduction
NASA Astrophysics Data System (ADS)
Borghi, Riccardo
2018-05-01
Variational methods in quantum mechanics are customarily presented as invaluable techniques to find approximate estimates of ground state energies. In the present paper a short catalogue of different celebrated potential distributions (both 1D and 3D), for which an exact and complete (energy and wavefunction) ground state determination can be achieved in an elementary way, is illustrated. No previous knowledge of calculus of variations is required. Rather, in all presented cases the exact energy functional minimization is achieved by using only a couple of simple mathematical tricks: ‘completion of square’ and integration by parts. This makes our approach particularly suitable for undergraduates. Moreover, the key role played by particle localization is emphasized through the entire analysis. This gentle introduction to the variational method could also be potentially attractive for more expert students as a possible elementary route toward a rather advanced topic on quantum mechanics: the factorization method. Such an unexpected connection is outlined in the final part of the paper.
Multigrid Solution of the Navier-Stokes Equations at Low Speeds with Large Temperature Variations
NASA Technical Reports Server (NTRS)
Sockol, Peter M.
2002-01-01
Multigrid methods for the Navier-Stokes equations at low speeds and large temperature variations are investigated. The compressible equations with time-derivative preconditioning and preconditioned flux-difference splitting of the inviscid terms are used. Three implicit smoothers have been incorporated into a common multigrid procedure. Both full coarsening and semi-coarsening with directional fine-grid defect correction have been studied. The resulting methods have been tested on four 2D laminar problems over a range of Reynolds numbers on both uniform and highly stretched grids. Two of the three methods show efficient and robust performance over the entire range of conditions. In addition none of the methods have any difficulty with the large temperature variations.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs l1-based form, which is not the most direct method for maximizing sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging condition. Method : Different from existing normalization methods that either address partial cause of color variation or lump them together, our method identifies causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generates stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods as the proposed method is the only approach that succeeds to preserve histological information after normalization. The proposed color normalization solution would be useful to mitigate effects of color variation in pathology images on subsequent quantitative analysis.
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As case study we will solve four ordinary differential equations, and we will show that the proposed solutions have good accuracy, even we will obtain an exact solution. In the sequel, we will see that the square residual error for the approximate solutions, belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.
Compensation of flare-induced CD changes EUVL
Bjorkholm, John E [Pleasanton, CA; Stearns, Daniel G [Los Altos, CA; Gullikson, Eric M [Oakland, CA; Tichenor, Daniel A [Castro Valley, CA; Hector, Scott D [Oakland, CA
2004-11-09
A method for compensating for flare-induced critical dimensions (CD) changes in photolithography. Changes in the flare level results in undesirable CD changes. The method when used in extreme ultraviolet (EUV) lithography essentially eliminates the unwanted CD changes. The method is based on the recognition that the intrinsic level of flare for an EUV camera (the flare level for an isolated sub-resolution opaque dot in a bright field mask) is essentially constant over the image field. The method involves calculating the flare and its variation over the area of a patterned mask that will be imaged and then using mask biasing to largely eliminate the CD variations that the flare and its variations would otherwise cause. This method would be difficult to apply to optical or DUV lithography since the intrinsic flare for those lithographies is not constant over the image field.
Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation
NASA Astrophysics Data System (ADS)
Zinati, Reza Farshbaf; Manafian, Jalil
2017-04-01
We investigate the Davey-Stewartson (DS) equation. Travelling wave solutions were found. In this paper, we demonstrate the effectiveness of the analytical methods, namely, He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM) and generalized G'/G-expansion method (GGM) for seeking more exact solutions via the DS equation. These methods are direct, concise and simple to implement compared to other existing methods. The exact solutions containing four types solutions have been achieved. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions including solitons, kink, periodic and rational solutions have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions. Also, the obtained semi-inverse variational principle has profound implications in physical understandings. These solutions might play important role in engineering and physics fields. Moreover, by using Matlab, some graphical simulations were done to see the behavior of these solutions.
2010-01-01
Intense interest centers on the role of the human gut microbiome in health and disease, but optimal methods for analysis are still under development. Here we present a study of methods for surveying bacterial communities in human feces using 454/Roche pyrosequencing of 16S rRNA gene tags. We analyzed fecal samples from 10 individuals and compared methods for storage, DNA purification and sequence acquisition. To assess reproducibility, we compared samples one cm apart on a single stool specimen for each individual. To analyze storage methods, we compared 1) immediate freezing at -80°C, 2) storage on ice for 24 or 3) 48 hours. For DNA purification methods, we tested three commercial kits and bead beating in hot phenol. Variations due to the different methodologies were compared to variation among individuals using two approaches--one based on presence-absence information for bacterial taxa (unweighted UniFrac) and the other taking into account their relative abundance (weighted UniFrac). In the unweighted analysis relatively little variation was associated with the different analytical procedures, and variation between individuals predominated. In the weighted analysis considerable variation was associated with the purification methods. Particularly notable was improved recovery of Firmicutes sequences using the hot phenol method. We also carried out surveys of the effects of different 454 sequencing methods (FLX versus Titanium) and amplification of different 16S rRNA variable gene segments. Based on our findings we present recommendations for protocols to collect, process and sequence bacterial 16S rDNA from fecal samples--some major points are 1) if feasible, bead-beating in hot phenol or use of the PSP kit improves recovery; 2) storage methods can be adjusted based on experimental convenience; 3) unweighted (presence-absence) comparisons are less affected by lysis method. PMID:20673359
Numerical realization of the variational method for generating self-trapped beams
NASA Astrophysics Data System (ADS)
Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.
2018-03-01
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schr\\"odinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
NASA Astrophysics Data System (ADS)
Longuevergne, Laurent; Scanlon, Bridget R.; Wilson, Clark R.
2010-11-01
The Gravity Recovery and Climate Experiment (GRACE) satellites provide observations of water storage variation at regional scales. However, when focusing on a region of interest, limited spatial resolution and noise contamination can cause estimation bias and spatial leakage, problems that are exacerbated as the region of interest approaches the GRACE resolution limit of a few hundred km. Reliable estimates of water storage variations in small basins require compromises between competing needs for noise suppression and spatial resolution. The objective of this study was to quantitatively investigate processing methods and their impacts on bias, leakage, GRACE noise reduction, and estimated total error, allowing solution of the trade-offs. Among the methods tested is a recently developed concentration algorithm called spatiospectral localization, which optimizes the basin shape description, taking into account limited spatial resolution. This method is particularly suited to retrieval of basin-scale water storage variations and is effective for small basins. To increase confidence in derived methods, water storage variations were calculated for both CSR (Center for Space Research) and GRGS (Groupe de Recherche de Géodésie Spatiale) GRACE products, which employ different processing strategies. The processing techniques were tested on the intensively monitored High Plains Aquifer (450,000 km2 area), where application of the appropriate optimal processing method allowed retrieval of water storage variations over a portion of the aquifer as small as ˜200,000 km2.
Hooper, Lisa M.; Weinfurt, Kevin P.; Cooper, Lisa A.; Mensh, Julie; Harless, William; Kuhajda, Melissa C.; Epstein, Steven A.
2009-01-01
Background Some primary care physicians provide less than optimal care for depression (Kessler et al., Journal of the American Medical Association 291, 2581–90, 2004). However, the literature is not unanimous on the best method to use in order to investigate this variation in care. To capture variations in physician behaviour and decision making in primary care settings, 32 interactive CD-ROM vignettes were constructed and tested. Aim and method The primary aim of this methods-focused paper was to review the extent to which our study method – an interactive CD-ROM patient vignette methodology – was effective in capturing variation in physician behaviour. Specifically, we examined the following questions: (a) Did the interactive CD-ROM technology work? (b) Did we create believable virtual patients? (c) Did the research protocol enable interviews (data collection) to be completed as planned? (d) To what extent was the targeted study sample size achieved? and (e) Did the study interview protocol generate valid and reliable quantitative data and rich, credible qualitative data? Findings Among a sample of 404 randomly selected primary care physicians, our voice-activated interactive methodology appeared to be effective. Specifically, our methodology – combining interactive virtual patient vignette technology, experimental design, and expansive open-ended interview protocol – generated valid explanations for variations in primary care physician practice patterns related to depression care. PMID:20463864
A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.
Guo, Lu; Shen, Shuming; Harris, Eleanor; Wang, Zheng; Jiang, Wei; Guo, Yu; Feng, Yuanming
2014-01-01
To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new method of tri-modality image fusion was developed, which can fuse and display all image sets in one panel and one operation. And a feasibility study in gross tumor volume (GTV) delineation using data from three patients with brain tumors was conducted, which included images of simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) examinations before radiotherapy. Tri-modality image fusion was implemented after image registrations of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that, the mean COV was 0.14(± 0.09) and 0.07(± 0.01) for dual-modality and tri-modality respectively; the standard deviation of ADSC was significantly reduced (p<0.05) with tri-modality; SDlocal averaged over median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method as compared with using the dual-modality method. With the new tri-modality image fusion method smaller inter- and intra-observer variation in GTV definition for the brain tumors can be achieved, which improves the consistency and accuracy for target delineation in individualized radiotherapy.
Hardy, A; Itzkowitz, M; Griffel, G
1989-05-15
A variational moment method is used to calculate propagation constants of 1-D optical waveguides with an arbitrary index profile. The method is applicable to 2-D waveguides as well, and the index profiles need not be symmetric. Examples are given for the lowest-order and the next higher-order modes and are compared with exact numerical solutions.
Some New Mathematical Methods for Variational Objective Analysis
NASA Technical Reports Server (NTRS)
Wahba, G.; Johnson, D. R.
1984-01-01
New and/or improved variational methods for simultaneously combining forecast, heterogeneous observational data, a priori climatology, and physics to obtain improved estimates of the initial state of the atmosphere for the purpose of numerical weather prediction are developed. Cross validated spline methods are applied to atmospheric data for the purpose of improved description and analysis of atmospheric phenomena such as the tropopause and frontal boundary surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng
Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by change in detector entrance exposure levels or beam hardening, and they are not accounted by conventional flat field correction methods. In this work, the authors presented a method to identify isolated pixel clusters that exhibit gain variations and proposed a pixel gain correction (PGC) method to suppress both beam hardening and exposure level dependent gain variations. Methods: To modulate both beam spectrum and entrancemore » exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. “Ideal” pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratio of ideal to measured pixel values in filtered images were utilized as pixel-specific gain correction factors, referred as PGC method, and they were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels lead to clustered image artifacts in flat field corrected projections, where 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of PGC method was tested in variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach. When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.« less
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method which only requires prior knowledge of uncertain variables mean and coefficient of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical determinisitic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) is small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory
NASA Astrophysics Data System (ADS)
Yen, Chung-Cheng; Guymon, Gary L.
1990-07-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method which only requires prior knowledge of uncertain variables mean and coefficient of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical determinisitic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) is small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
A First Step towards Variational Methods in Engineering
ERIC Educational Resources Information Center
Periago, Francisco
2003-01-01
In this paper, a didactical proposal is presented to introduce the variational methods for solving boundary value problems to engineering students. Starting from a couple of simple models arising in linear elasticity and heat diffusion, the concept of weak solution for these models is motivated and the existence, uniqueness and continuous…
A study on Marangoni convection by the variational iteration method
NASA Astrophysics Data System (ADS)
Karaoǧlu, Onur; Oturanç, Galip
2012-09-01
In this paper, we will consider the use of the variational iteration method and Padé approximant for finding approximate solutions for a Marangoni convection induced flow over a free surface due to an imposed temperature gradient. The solutions are compared with the numerical (fourth-order Runge Kutta) solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem, which balances the data fidelity and prior image constrained total generalized variation of reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and high-quality images to guide the reconstruction of energy-specific images. In PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred as the broad-spectrum image. Distinctmore » from the existing reconstruction methods applied on the images with first order derivative, the higher order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifacts suppressing using phantom studies and compare the method with the conventional filtered back-projection method as well as TGV based method without prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of the noise reduction, artifacts suppression, and edge detail preservation. Compared to that obtained by the TGV based method without prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalize variation for spectral CT. Also, we have developed an alternating optimization algorithm and numerically demonstrated the merits of our approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.« less
NASA Astrophysics Data System (ADS)
Verechagin, V.; Kris, R.; Schwarzband, I.; Milstein, A.; Cohen, B.; Shkalim, A.; Levy, S.; Price, D.; Bal, E.
2018-03-01
Over the years, mask and wafers defects dispositioning has become an increasingly challenging and time consuming task. With design rules getting smaller, OPC getting complex and scanner illumination taking on free-form shapes - the probability of a user to perform accurate and repeatable classification of defects detected by mask inspection tools into pass/fail bins is reducing. The critical challenging of mask defect metrology for small nodes ( < 30 nm) was reviewed in [1]. While Critical Dimension (CD) variation measurement is still the method of choice for determining a mask defect future impact on wafer, the high complexity of OPCs combined with high variability in pattern shapes poses a challenge for any automated CD variation measurement method. In this study, a novel approach for measurement generalization is presented. CD variation assessment performance is evaluated on multiple different complex shape patterns, and is benchmarked against an existing qualified measurement methodology.
A New Evaluation Method of Stored Heat Effect of Reinforced Concrete Wall of Cold Storage
NASA Astrophysics Data System (ADS)
Nomura, Tomohiro; Murakami, Yuji; Uchikawa, Motoyuki
Today it has become imperative to save energy by operating a refrigerator in a cold storage executed by external insulate reinforced concrete wall intermittently. The theme of the paper is to get the evaluation method to be capable of calculating, numerically, interval time for stopping the refrigerator, in applying reinforced concrete wall as source of stored heat. The experiments with the concrete models were performed in order to examine the time variation of internal temperature after refrigerator stopped. In addition, the simulation method with three dimensional unsteady FEM for personal-computer type was introduced for easily analyzing the internal temperature variation. Using this method, it is possible to obtain the time variation of internal temperature and to calculate the interval time for stopping the refrigerator.
Application of the moving frame method to deformed Willmore surfaces in space forms
NASA Astrophysics Data System (ADS)
Paragoda, Thanuja
2018-06-01
The main goal of this paper is to use the theory of exterior differential forms in deriving variations of the deformed Willmore energy in space forms and study the minimizers of the deformed Willmore energy in space forms. We derive both first and second order variations of deformed Willmore energy in space forms explicitly using moving frame method. We prove that the second order variation of deformed Willmore energy depends on the intrinsic Laplace Beltrami operator, the sectional curvature and some special operators along with mean and Gauss curvatures of the surface embedded in space forms, while the first order variation depends on the extrinsic Laplace Beltrami operator.
Farno, E; Coventry, K; Slatter, P; Eshtiaghi, N
2018-06-15
Sludge pumps in wastewater treatment plants are often oversized due to uncertainty in calculation of pressure drop. This issue costs millions of dollars for industry to purchase and operate the oversized pumps. Besides costs, higher electricity consumption is associated with extra CO 2 emission which creates huge environmental impacts. Calculation of pressure drop via current pipe flow theory requires model estimation of flow curve data which depends on regression analysis and also varies with natural variation of rheological data. This study investigates impact of variation of rheological data and regression analysis on variation of pressure drop calculated via current pipe flow theories. Results compare the variation of calculated pressure drop between different models and regression methods and suggest on the suitability of each method. Copyright © 2018 Elsevier Ltd. All rights reserved.
CNV-TV: a robust method to discover copy number variation from short sequencing reads.
Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping
2013-05-02
Copy number variation (CNV) is an important structural variation (SV) in human genome. Various studies have shown that CNVs are associated with complex diseases. Traditional CNV detection methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution. The next generation sequencing (NGS) technique promises a higher resolution detection of CNVs and several methods were recently proposed for realizing such a promise. However, the performances of these methods are not robust under some conditions, e.g., some of them may fail to detect CNVs of short sizes. There has been a strong demand for reliable detection of CNVs from high resolution NGS data. A novel and robust method to detect CNV from short sequencing reads is proposed in this study. The detection of CNV is modeled as a change-point detection from the read depth (RD) signal derived from the NGS, which is fitted with a total variation (TV) penalized least squares model. The performance (e.g., sensitivity and specificity) of the proposed approach are evaluated by comparison with several recently published methods on both simulated and real data from the 1000 Genomes Project. The experimental results showed that both the true positive rate and false positive rate of the proposed detection method do not change significantly for CNVs with different copy numbers and lengthes, when compared with several existing methods. Therefore, our proposed approach results in a more reliable detection of CNVs than the existing methods.
Constrained variation in Jastrow method at high density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, J.C.; Bishop, R.F.; Irvine, J.M.
1976-11-01
A method is derived for constraining the correlation function in a Jastrow variational calculation which permits the truncation of the cluster expansion after two-body terms, and which permits exact minimization of the two-body cluster by functional variation. This method is compared with one previously proposed by Pandharipande and is found to be superior both theoretically and practically. The method is tested both on liquid /sup 3/He, by using the Lennard--Jones potential, and on the model system of neutrons treated as Boltzmann particles (''homework'' problem). Good agreement is found both with experiment and with other calculations involving the explicit evaluation ofmore » higher-order terms in the cluster expansion. The method is then applied to a more realistic model of a neutron gas up to a density of 4 neutrons per F/sup 3/, and is found to give ground-state energies considerably lower than those of Pandharipande. (AIP)« less
A visual tracking method based on deep learning without online model updating
NASA Astrophysics Data System (ADS)
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and HOG (Histogram of Oriented Gradient) feature are combined to select the tracking object. In the process of tracking, multi-scale object searching map is built to improve the detection performance of deep detection model and the tracking efficiency. In the experiment of eight respective tracking video sequences in the baseline dataset, compared with six state-of-the-art methods, the method in the paper has better robustness in the tracking challenging factors, such as deformation, scale variation, rotation variation, illumination variation, and background clutters, moreover, its general performance is better than other six tracking methods.
Adaptive variational mode decomposition method for signal processing based on mode characteristic
NASA Astrophysics Data System (ADS)
Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng
2018-07-01
Variational mode decomposition is a completely non-recursive decomposition model, where all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method since a large deviation in the number of mode set will cause the discard or mixing of the mode. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) was proposed to automatically determine the mode number based on the characteristic of intrinsic mode function. The method was used to analyze the simulation signals and the measured signals in the hydropower plant. Comparisons have also been conducted to evaluate the performance by using VMD, EMD and EWT. It is indicated that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately without modulation even when the signal frequencies are relatively close.
Selecting Magnet Laminations Recipes Using the Meth-od of Sim-u-la-ted Annealing
NASA Astrophysics Data System (ADS)
Russell, A. D.; Baiod, R.; Brown, B. C.; Harding, D. J.; Martin, P. S.
1997-05-01
The Fermilab Main Injector project is building 344 dipoles using more than 7000 tons of steel. Budget and logistical constraints required that steel production, lamination stamping and magnet fabrication proceed in parallel. There were significant run-to-run variations in the magnetic properties of the steel (Martin, P.S., et al., Variations in the Steel Properties and the Excitation Characteristics of FMI Dipoles, this conference). The large lamination size (>0.5 m coil opening) resulted in variations of gap height due to differences in stress relief in the steel after stamping. To minimize magnet-to-magnet strength and field shape variations the laminations were shuffled based on the available magnetic and mechanical data and assigned to magnets using a computer program based on the method of simulated annealing. The lamination sets selected by the program have produced magnets which easily satisfy the design requirements. Variations of the average magnet gap are an order of magnitude smaller than the variations in lamination gaps. This paper discusses observed gap variations, the program structure and the strength uniformity results.
Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei
2013-02-01
Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
Use of variational methods in the determination of wind-driven ocean circulation
NASA Technical Reports Server (NTRS)
Gelos, R.; Laura, P. A. A.
1976-01-01
Simple polynomial approximations and a variational approach were used to predict wind-induced circulation in rectangular ocean basins. Stommel's and Munk's models were solved in a unified fashion by means of the proposed method. Very good agreement with exact solutions available in the literature was shown to exist. The method was then applied to more complex situations where an exact solution seems out of the question.
Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion
2016-07-20
PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical...combustion process may be as large as factor of seven, including variable- density effects in PDF methods is of significance. Conventionally, the...strategy of modelling variable density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on
2017-01-01
Mapping gene expression as a quantitative trait using whole genome-sequencing and transcriptome analysis allows to discover the functional consequences of genetic variation. We developed a novel method and ultra-fast software Findr for higly accurate causal inference between gene expression traits using cis-regulatory DNA variations as causal anchors, which improves current methods by taking into consideration hidden confounders and weak regulations. Findr outperformed existing methods on the DREAM5 Systems Genetics challenge and on the prediction of microRNA and transcription factor targets in human lymphoblastoid cells, while being nearly a million times faster. Findr is publicly available at https://github.com/lingfeiwang/findr. PMID:28821014
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational methods (VM) sensitivity analysis, which is the continuous alternative to the discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions are a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with the state (Euler) equations. The stability analysis of the costate equations suggests that the converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems at supersonic Mach number range. The study shows, that while maintaining the accuracy of the functional sensitivity derivatives within the reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite difference sensitivity analysis.
Blazhko modulation in the infrared
NASA Astrophysics Data System (ADS)
Jurcsik, J.; Hajdu, G.; Dékány, I.; Nuspl, J.; Catelan, M.; Grebel, E. K.
2018-04-01
We present first direct evidence of modulation in the K band of Blazhko-type RR Lyrae stars that are identified by their secular modulations in the I-band data of Optical Gravitational Lensing Experiment-IV. A method has been developed to decompose the K-band light variation into two parts originating from the temperature and the radius changes using synthetic data of atmosphere-model grids. The amplitudes of the temperature and the radius variations derived from the method for non-Blazhko RRab stars are in very good agreement with the results of the Baade-Wesselink analysis of RRab stars in the M3 globular cluster confirming the applicability and correctness of the method. It has been found that the Blazhko modulation is primarily driven by the change in the temperature variation. The radius variation plays a marginal part, moreover it has an opposite sign as if the Blazhko effect was caused by the radii variations. This result reinforces the previous finding based on the Baade-Wesselink analysis of M3 (NGC 5272) RR Lyrae, that significant modulation of the radius variations can only be detected in radial-velocity measurements, which relies on spectral lines that form in the uppermost atmospheric layers. Our result gives the first insight into the energetics and dynamics of the Blazhko phenomenon, hence it puts strong constraints on its possible physical explanations.
FROG - Fingerprinting Genomic Variation Ontology
Bhardwaj, Anshu
2015-01-01
Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity in establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search and analyze this data. A semantic approach, FROG: “FingeRprinting Ontology of Genomic variations” is implemented to label variation data, based on its location, function and interactions. FROG has six levels to describe the variation annotation, namely, chromosome, DNA, RNA, protein, variations and interactions. Each level is a conceptual aggregation of logically connected attributes each of which comprises of various properties for the variant. For example, in chromosome level, one of the attributes is location of variation and which has two properties, allosomes or autosomes. Another attribute is variation kind which has four properties, namely, indel, deletion, insertion, substitution. Likewise, there are 48 attributes and 278 properties to capture the variation annotation across six levels. Each property is then assigned a bit score which in turn leads to generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for the purpose of labeling the entire variation data generated till date for efficient storage, search and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog. PMID:26244889
Empirical correction for earth sensor horizon radiance variation
NASA Technical Reports Server (NTRS)
Hashmall, Joseph A.; Sedlak, Joseph; Andrews, Daniel; Luquette, Richard
1998-01-01
A major limitation on the use of infrared horizon sensors for attitude determination is the variability of the height of the infrared Earth horizon. This variation includes a climatological component and a stochastic component of approximately equal importance. The climatological component shows regular variation with season and latitude. Models based on historical measurements have been used to compensate for these systematic changes. The stochastic component is analogous to tropospheric weather. It can cause extreme, localized changes that for a period of days, overwhelm the climatological variation. An algorithm has been developed to compensate partially for the climatological variation of horizon height and at least to mitigate the stochastic variation. This method uses attitude and horizon sensor data from spacecraft to update a horizon height history as a function of latitude. For spacecraft that depend on horizon sensors for their attitudes (such as the Total Ozone Mapping Spectrometer-Earth Probe-TOMS-EP) a batch least squares attitude determination system is used. It is assumed that minimizing the average sensor residual throughout a full orbit of data results in attitudes that are nearly independent of local horizon height variations. The method depends on the additional assumption that the mean horizon height over all latitudes is approximately independent of season. Using these assumptions, the method yields the latitude dependent portion of local horizon height variations. This paper describes the algorithm used to generate an empirical horizon height. Ideally, an international horizon height database could be established that would rapidly merge data from various spacecraft to provide timely corrections that could be used by all.
NASA Astrophysics Data System (ADS)
Zhao, Xia; Wang, Guang-xin
2008-12-01
Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, the speckle is its inherent default, which affects badly the interpretation and recognition of the SAR targets. Conventional methods of removing the speckle is studied usually in real SAR image, which reduce the edges of the images at the same time as depressing the speckle. Morever, Conventional methods lost the information about images phase. Removing the speckle and enhancing the target and edge simultaneously are still a puzzle. To suppress the spckle and enhance the targets and the edges simultaneously, a half-quadratic variational regularization method in complex SAR image is presented, which is based on the prior knowledge of the targets and the edge. Due to the non-quadratic and non- convex quality and the complexity of the cost function, a half-quadratic variational regularization variation is used to construct a new cost function,which is solved by alternate optimization. In the proposed scheme, the construction of the model, the solution of the model and the selection of the model peremeters are studied carefully. In the end, we validate the method using the real SAR data.Theoretic analysis and the experimental results illustrate the the feasibility of the proposed method. Further more, the proposed method can preserve the information about images phase.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs l1-based form, which is not the most direct method for maximizing sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
A Calibration Method for Nanowire Biosensors to Suppress Device-to-device Variation
Ishikawa, Fumiaki N.; Curreli, Marco; Chang, Hsiao-Kang; Chen, Po-Chiang; Zhang, Rui; Cote, Richard J.; Thompson, Mark E.; Zhou, Chongwu
2009-01-01
Nanowire/nanotube biosensors have stimulated significant interest; however the inevitable device-to-device variation in the biosensor performance remains a great challenge. We have developed an analytical method to calibrate nanowire biosensor responses that can suppress the device-to-device variation in sensing response significantly. The method is based on our discovery of a strong correlation between the biosensor gate dependence (dIds/dVg) and the absolute response (absolute change in current, ΔI). In2O3 nanowire based biosensors for streptavidin detection were used as the model system. Studying the liquid gate effect and ionic concentration dependence of strepavidin sensing indicates that electrostatic interaction is the dominant mechanism for sensing response. Based on this sensing mechanism and transistor physics, a linear correlation between the absolute sensor response (ΔI) and the gate dependence (dIds/dVg) is predicted and confirmed experimentally. Using this correlation, a calibration method was developed where the absolute response is divided by dIds/dVg for each device, and the calibrated responses from different devices behaved almost identically. Compared to the common normalization method (normalization of the conductance/resistance/current by the initial value), this calibration method was proved advantageous using a conventional transistor model. The method presented here substantially suppresses device-to-device variation, allowing the use of nanosensors in large arrays. PMID:19921812
Variational Approach to Monte Carlo Renormalization Group
NASA Astrophysics Data System (ADS)
Wu, Yantao; Car, Roberto
2017-12-01
We present a Monte Carlo method for computing the renormalized coupling constants and the critical exponents within renormalization theory. The scheme, which derives from a variational principle, overcomes critical slowing down, by means of a bias potential that renders the coarse grained variables uncorrelated. The two-dimensional Ising model is used to illustrate the method.
Thermal and acid tolerant beta-xylosidases, genes encoding, related organisms, and methods
Thompson, David N [Idaho Falls, ID; Thompson, Vicki S [Idaho Falls, ID; Schaller, Kastli D [Ammon, ID; Apel, William A [Jackson, WY; Lacey, Jeffrey A [Idaho Falls, ID; Reed, David W [Idaho Falls, ID
2011-04-12
Isolated and/or purified polypeptides and nucleic acid sequences encoding polypeptides from Alicyclobacillus acidocaldarius and variations thereof are provided. Further provided are methods of at least partially degrading xylotriose and/or xylobiose using isolated and/or purified polypeptides and nucleic acid sequences encoding polypeptides from Alicyclobacillus acidocaldarius and variations thereof.
In this report we present examples of methods that we have used to explore associations between aquatic biotic condition and stressors in two different aquatic systems: estuaries and lakes. We review metrics and indices of biotic condition in lakes and estuaries; discuss some ph...
Mixed Gaussian-Impulse Noise Image Restoration Via Total Variation
2012-05-01
deblurring under impulse noise ,” J. Math. Imaging Vis., vol. 36, pp. 46–53, January 2010. [5] B. Li, Q. Liu, J. Xu, and X. Luo, “A new method for removing......Several Total Variation (TV) regularization methods have recently been proposed to address denoising under mixed Gaussian and impulse noise . While
On the optimal use of fictitious time in variation of parameters methods with application to BG14
NASA Technical Reports Server (NTRS)
Gottlieb, Robert G.
1991-01-01
The optimal way to use fictitious time in variation of parameter methods is presented. Setting fictitious time to zero at the end of each step is shown to cure the instability associated with some types of problems. Only some parameters are reinitialized, thereby retaining redundant information.
A Decision-Based Modified Total Variation Diffusion Method for Impulse Noise Removal
Zhu, Qingxin; Song, Xiuli; Tao, Jinsong
2017-01-01
Impulsive noise removal usually employs median filtering, switching median filtering, the total variation L1 method, and variants. These approaches however often introduce excessive smoothing and can result in extensive visual feature blurring and thus are suitable only for images with low density noise. A new method to remove noise is proposed in this paper to overcome this limitation, which divides pixels into different categories based on different noise characteristics. If an image is corrupted by salt-and-pepper noise, the pixels are divided into corrupted and noise-free; if the image is corrupted by random valued impulses, the pixels are divided into corrupted, noise-free, and possibly corrupted. Pixels falling into different categories are processed differently. If a pixel is corrupted, modified total variation diffusion is applied; if the pixel is possibly corrupted, weighted total variation diffusion is applied; otherwise, the pixel is left unchanged. Experimental results show that the proposed method is robust to different noise strengths and suitable for different images, with strong noise removal capability as shown by PSNR/SSIM results as well as the visual quality of restored images. PMID:28536602
Variation and Defect Tolerance for Nano Crossbars
NASA Astrophysics Data System (ADS)
Tunc, Cihan
With the extreme shrinking in CMOS technology, quantum effects and manufacturing issues are getting more crucial. Hence, additional shrinking in CMOS feature size seems becoming more challenging, difficult, and costly. On the other hand, emerging nanotechnology has attracted many researchers since additional scaling down has been demonstrated by manufacturing nanowires, Carbon nanotubes as well as molecular switches using bottom-up manufacturing techniques. In addition to the progress in manufacturing, developments in architecture show that emerging nanoelectronic devices will be promising for the future system designs. Using nano crossbars, which are composed of two sets of perpendicular nanowires with programmable intersections, it is possible to implement logic functions. In addition, nano crossbars present some important features as regularity, reprogrammability, and interchangeability. Combining these features, researchers have presented different effective architectures. Although bottom-up nanofabrication can greatly reduce manufacturing costs, due to low controllability in the manufacturing process, some critical issues occur. Bottom- up nanofabrication process results in high variation compared to conventional top- down lithography used in CMOS technology. In addition, an increased failure rate is expected. Variation and defect tolerance methods used for conventional CMOS technology seem inadequate for adapting to emerging nano technology because the variation and the defect rate for emerging nano technology is much more than current CMOS technology. Therefore, variations and defect tolerance methods for emerging nano technology are necessary for a successful transition. In this work, in order to tolerate variations for crossbars, we introduce a framework that is established based on reprogrammability and interchangeability features of nano crossbars. This framework is shown to be applicable for both FET-based and diode-based nano crossbars. We present a characterization testing method which requires minimal number of test vectors. We formulate the variation optimization problem using Simulated Annealing with different optimization goals. Furthermore, we extend the framework for defect tolerance. Experimental results and comparison of proposed framework with exhaustive methods confirm its effectiveness for both variation and defect tolerance.
Chen, X.; Ashcroft, I. A.; Wildman, R. D.; Tuck, C. J.
2015-01-01
A method using experimental nanoindentation and inverse finite-element analysis (FEA) has been developed that enables the spatial variation of material constitutive properties to be accurately determined. The method was used to measure property variation in a three-dimensional printed (3DP) polymeric material. The accuracy of the method is dependent on the applicability of the constitutive model used in the inverse FEA, hence four potential material models: viscoelastic, viscoelastic–viscoplastic, nonlinear viscoelastic and nonlinear viscoelastic–viscoplastic were evaluated, with the latter enabling the best fit to experimental data. Significant changes in material properties were seen in the depth direction of the 3DP sample, which could be linked to the degree of cross-linking within the material, a feature inherent in a UV-cured layer-by-layer construction method. It is proposed that the method is a powerful tool in the analysis of manufacturing processes with potential spatial property variation that will also enable the accurate prediction of final manufactured part performance. PMID:26730216
Chen, X; Ashcroft, I A; Wildman, R D; Tuck, C J
2015-11-08
A method using experimental nanoindentation and inverse finite-element analysis (FEA) has been developed that enables the spatial variation of material constitutive properties to be accurately determined. The method was used to measure property variation in a three-dimensional printed (3DP) polymeric material. The accuracy of the method is dependent on the applicability of the constitutive model used in the inverse FEA, hence four potential material models: viscoelastic, viscoelastic-viscoplastic, nonlinear viscoelastic and nonlinear viscoelastic-viscoplastic were evaluated, with the latter enabling the best fit to experimental data. Significant changes in material properties were seen in the depth direction of the 3DP sample, which could be linked to the degree of cross-linking within the material, a feature inherent in a UV-cured layer-by-layer construction method. It is proposed that the method is a powerful tool in the analysis of manufacturing processes with potential spatial property variation that will also enable the accurate prediction of final manufactured part performance.
Variational method of determining effective moduli of polycrystals with tetragonal symmetry
Meister, R.; Peselnick, L.
1966-01-01
Variational principles have been applied to aggregates of randomly oriented pure-phase polycrystals having tetragonal symmetry. The bounds of the effective elastic moduli obtained in this way show a substantial improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be a good approximation in most cases when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. ?? 1966 The American Institute of Physics.
Numerical realization of the variational method for generating self-trapped beams.
Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A
2018-03-19
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
NASA Technical Reports Server (NTRS)
Delamorena, B. A.
1984-01-01
A method to detect stratospheric warmings using ionospheric absorption records obtained by an Absorption Meter (method A3) is introduced. The activity of the stratospheric circulation and the D region ionospheric absorption as well as other atmospheric parameters during the winter anomaly experience an abnormal variation. A simultaneity was found in the beginning of abnormal variation in the mentioned parameters, using the absorption records for detecting the initiation of the stratospheric warming. Results of this scientific experience of forecasting in the El Arenosillo Range, are presented.
Ucar, Fatma; Erden, Gonul; Ginis, Zeynep; Ozturk, Gulfer; Sezer, Sevilay; Gurler, Mukaddes; Guneyk, Ahmet
2013-10-01
Available data on biological variation of HbA1c revealed marked heterogeneity. We therefore investigated and estimated the components of biological variation for HbA1c in a group of healthy individuals by applying a recommended and strictly designed study protocol using two different assay methods. Each month, samples were derived on the same day, for three months. Four EDTA whole blood samples were collected from each individual (20 women, 9 men; 20-45 years of age) and stored at -80°C until analysis. HbA1c values were measured by both high performance liquid chromatography (HPLC) (Shimadzu, Prominence, Japan) and boronate affinity chromatography methods (Trinity Biotech, Premier Hb9210, Ireland). All samples were assayed in duplicate in a single batch for each assay method. Estimations were calculated according to the formulas described by Fraser and Harris. The within subject (CV(I))-between subject (CV(G)) biological variations were 1.17% and 5.58%, respectively for HPLC. The calculated CV(I) and CV(G) were 2.15% and 4.03%, respectively for boronate affinity chromatography. Reference change value (RCV) for HPLC and boronate affinity chromatography was 5.4% and 10.4% respectively and individuality index of HbA(1c) was 0.35 and 0.93 respectively. This study for the first time described the components of biological variation for HbA1c in healthy individuals by two different assay methods. Obtained findings showed that the difference between CV(A) values of the methods might considerably affect RCV. These data regarding biological variation of HbA(1c) could be useful for a better evaluation of HbA(1c) test results in clinical interpretation. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stemkens, B; Glitzner, M; Kontaxis, C
Purpose: To assess the dose deposition in simulated single-fraction MR-Linac treatments of renal cell carcinoma, when inter-cycle respiratory motion variation is taken into account using online MRI. Methods: Three motion characterization methods, with increasing complexity, were compared to evaluate the effect of inter-cycle motion variation and drifts on the accumulated dose for an SBRT kidney MR-Linac treatment: 1) STATIC, in which static anatomy was assumed, 2) AVG-RESP, in which 4D-MRI phase-volumes were time-weighted, based on the respiratory phase and 3) PCA, in which 3D volumes were generated using a PCA-model, enabling the detection of inter-cycle variations and drifts. An experimentalmore » ITV-based kidney treatment was simulated in a 1.5T magnetic field on three volunteer datasets. For each volunteer a retrospectively sorted 4D-MRI (ten respiratory phases) and fast 2D cine-MR images (temporal resolution = 476ms) were acquired to simulate MR-imaging during radiation. For each method, the high spatio-temporal resolution 3D volumes were non-rigidly registered to obtain deformation vector fields (DVFs). Using the DVFs, pseudo-CTs (generated from the 4D-MRI) were deformed and the dose was accumulated for the entire treatment. The accuracies of all methods were independently determined using an additional, orthogonal 2D-MRI slice. Results: Motion was most accurately estimated using the PCA method, which correctly estimated drifts and inter-cycle variations (RMSE=3.2, 2.2, 1.1mm on average for STATIC, AVG-RESP and PCA, compared to the 2DMRI slice). Dose-volume parameters on the ITV showed moderate changes (D99=35.2, 32.5, 33.8Gy for STATIC, AVG-RESP and PCA). AVG-RESP showed distinct hot/cold spots outside the ITV margin, which were more distributed for the PCA scenario, since inter-cycle variations were not modeled by the AVG-RESP method. Conclusion: Dose differences were observed when inter-cycle variations were taken into account. The increased inter-cycle randomness in motion as captured by the PCA model mitigates the local (erroneous) hotspots estimated by the AVG-RESP method.« less
Reddy, Michael M.; Schuster, Paul; Kendall, Carol; Reddy, Micaela B.
2006-01-01
18O is an ideal tracer for characterizing hydrological processes because it can be reliably measured in several watershed hydrological compartments. Here, we present multiyear isotopic data, i.e. 18O variations (δ18O), for precipitation inputs, surface water and groundwater in the Shingobee River Headwaters Area (SRHA), a well-instrumented research catchment in north-central Minnesota. SRHA surface waters exhibit δ18O seasonal variations similar to those of groundwaters, and seasonal δ18O variations plotted versus time fit seasonal sine functions. These seasonal δ18O variations were interpreted to estimate surface water and groundwater mean residence times (MRTs) at sampling locations near topographically closed-basin lakes. MRT variations of about 1 to 16 years have been estimated over an area covering about 9 km2 from the basin boundary to the most downgradient well. Estimated MRT error (±0·3 to ±0·7 years) is small for short MRTs and is much larger (±10 years) for a well with an MRT (16 years) near the limit of the method. Groundwater transit time estimates based on Darcy's law, tritium content, and the seasonal δ18O amplitude approach appear to be consistent within the limits of each method. The results from this study suggest that use of the δ18O seasonal variation method to determine MRTs can help assess groundwater recharge areas in small headwaters catchments.
Reddy, Michael M.; Schuster, Paul F.; Kendall, Carol; Reddy, Micaela B.
2006-01-01
18O is an ideal tracer for characterizing hydrological processes because it can be reliably measured in several watershed hydrological compartments. Here, we present multiyear isotopic data, i.e. 18O variations (δ18O), for precipitation inputs, surface water and groundwater in the Shingobee River Headwaters Area (SRHA), a well-instrumented research catchment in north-central Minnesota. SRHA surface waters exhibit δ18O seasonal variations similar to those of groundwaters, and seasonal δ18O variations plotted versus time fit seasonal sine functions. These seasonal δ18O variations were interpreted to estimate surface water and groundwater mean residence times (MRTs) at sampling locations near topographically closed-basin lakes. MRT variations of about 1 to 16 years have been estimated over an area covering about 9 km2 from the basin boundary to the most downgradient well. Estimated MRT error (±0·3 to ±0·7 years) is small for short MRTs and is much larger (±10 years) for a well with an MRT (16 years) near the limit of the method. Groundwater transit time estimates based on Darcy's law, tritium content, and the seasonal δ18O amplitude approach appear to be consistent within the limits of each method. The results from this study suggest that use of the δ18O seasonal variation method to determine MRTs can help assess groundwater recharge areas in small headwaters catchments.
Gallego, Sergi; Márquez, Andrés; Méndez, David; Ortuño, Manuel; Neipp, Cristian; Fernández, Elena; Pascual, Inmaculada; Beléndez, Augusto
2008-05-10
One of the problems associated with photopolymers as optical recording media is the thickness variation during the recording process. Different values of shrinkages or swelling are reported in the literature for photopolymers. Furthermore, these variations depend on the spatial frequencies of the gratings stored in the materials. Thickness variations can be measured using different methods: studying the deviation from the Bragg's angle for nonslanted gratings, using MicroXAM S/N 8038 interferometer, or by the thermomechanical analysis experiments. In a previous paper, we began the characterization of the properties of a polyvinyl alcohol/acrylamide based photopolymer at the lowest end of recorded spatial frequencies. In this work, we continue analyzing the thickness variations of these materials using a reflection interferometer. With this technique we are able to obtain the variations of the layers refractive index and, therefore, a direct estimation of the polymer refractive index.
Geometric constrained variational calculus. II: The second variation (Part I)
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2016-10-01
Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.
SCHOOL DROPOUTS--A COMMENTARY AND ANNOTATED BIBLIOGRAPHY.
ERIC Educational Resources Information Center
MILLER, S.M.; AND OTHERS
RESEARCH ON SCHOOL DROPOUTS IS HANDICAPPED IN THE FOLLOWING AREAS--DEFINITION OF THE DROPOUT POPULATION, INCONSISTENT METHODS OF DATA COLLECTION, INADEQUATE RESEARCH DESIGNS, COMMUNITY VARIATION, VARIATION IN TYPE OF DROPOUT, AND KNOWLEDGE OF THE PROCESS OF DROPPING OUT. DROPOUT GROUPS SHOULD BE CLEARLY DEFINED, AND VARIATION IN THESE GROUPS…
NASA Astrophysics Data System (ADS)
Palmer, Troy A.; Alexay, Christopher C.
2006-05-01
This paper addresses the variety and impact of dispersive model variations for infrared materials and, in particular, the level to which certain optical designs are affected by this potential variation in germanium. This work offers a method for anticipating and/or minimizing the pitfalls such potential model variations may have on a candidate optical design.
Method of calibrating an interferometer and reducing its systematic noise
NASA Technical Reports Server (NTRS)
Hammer, Philip D. (Inventor)
1997-01-01
Methods of operation and data analysis for an interferometer so as to eliminate the errors contributed by non-responsive or unstable pixels, interpixel gain variations that drift over time, and spurious noise that would otherwise degrade the operation of the interferometer are disclosed. The methods provide for either online or post-processing calibration. The methods apply prescribed reversible transformations that exploit the physical properties of interferograms obtained from said interferometer to derive a calibration reference signal for subsequent treatment of said interferograms for interpixel gain variations. A self-consistent approach for treating bad pixels is incorporated into the methods.
Hatch, Christine E; Fisher, Andrew T.; Revenaugh, Justin S.; Constantz, Jim; Ruehl, Chris
2006-01-01
We present a method for determining streambed seepage rates using time series thermal data. The new method is based on quantifying changes in phase and amplitude of temperature variations between pairs of subsurface sensors. For a reasonable range of streambed thermal properties and sensor spacings the time series method should allow reliable estimation of seepage rates for a range of at least ±10 m d−1 (±1.2 × 10−2 m s−1), with amplitude variations being most sensitive at low flow rates and phase variations retaining sensitivity out to much higher rates. Compared to forward modeling, the new method requires less observational data and less setup and data handling and is faster, particularly when interpreting many long data sets. The time series method is insensitive to streambed scour and sedimentation, which allows for application under a wide range of flow conditions and allows time series estimation of variable streambed hydraulic conductivity. This new approach should facilitate wider use of thermal methods and improve understanding of the complex spatial and temporal dynamics of surface water–groundwater interactions.
ERIC Educational Resources Information Center
Bull, Rebecca; Espy, Kimberly Andrews; Wiebe, Sandra A.; Sheffield, Tiffany D.; Nelson, Jennifer Mize
2011-01-01
Latent variable modeling methods have demonstrated utility for understanding the structure of executive control (EC) across development. These methods are utilized to better characterize the relation between EC and mathematics achievement in the preschool period, and to understand contributing sources of individual variation. Using the sample and…
Length polymorphism scanning is an efficient approach for revealing chloroplast DNA variation.
Matthew E. Horning; Richard C. Cronn
2006-01-01
Phylogeographic and population genetic screens of chloroplast DNA (cpDNA) provide insights into seedbased gene flow in angiosperms, yet studies are frequently hampered by the low mutation rate of this genome. Detection methods for intraspecific variation can be either direct (DNA sequencing) or indirect (PCR-RFLP), although no single method incorporates the best...
Variational method for lattice spectroscopy with ghosts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burch, Tommy; Hagen, Christian; Gattringer, Christof
2006-01-01
We discuss the variational method used in lattice spectroscopy calculations. In particular we address the role of ghost contributions which appear in quenched or partially quenched simulations and have a nonstandard euclidean time dependence. We show that the ghosts can be separated from the physical states. Our result is illustrated with numerical data for the scalar meson.
A FORTRAN Program for Computing Refractive Index Using the Double Variation Method.
ERIC Educational Resources Information Center
Blanchard, Frank N.
1984-01-01
Describes a computer program which calculates a best estimate of refractive index and dispersion from a large number of observations using the double variation method of measuring refractive index along with Sellmeier constants of the immersion oils. Program listing with examples will be provided on written request to the author. (Author/JM)
Systematic Convergence in Applying Variational Method to Double-Well Potential
ERIC Educational Resources Information Center
Mei, Wai-Ning
2016-01-01
In this work, we demonstrate the application of the variational method by computing the ground- and first-excited state energies of a double-well potential. We start with the proper choice of the trial wave functions using optimized parameters, and notice that accurate expectation values in excellent agreement with the numerical results can be…
An Evaluation Method of Words Tendency Depending on Time-Series Variation and Its Improvements.
ERIC Educational Resources Information Center
Atlam, El-Sayed; Okada, Makoto; Shishibori, Masami; Aoe, Jun-ichi
2002-01-01
Discussion of word frequency and keywords in text focuses on a method to estimate automatically the stability classes that indicate a word's popularity with time-series variations based on the frequency change in past electronic text data. Compares the evaluation of decision tree stability class results with manual classification results.…
Thompson, David N; Thompson, Vicki S; Schaller, Kastli D; Apel, William A; Reed, David W; Lacey, Jeffrey A
2013-04-30
Isolated and/or purified polypeptides and nucleic acid sequences encoding polypeptides from Alicyclobacillus acidocaldarius and variations thereof are provided. Further provided are methods of at least partially degrading xylotriose, xylobiose, and/or arabinofuranose-substituted xylan using isolated and/or purified polypeptides and nucleic acid sequences encoding polypeptides from Alicyclobacillus acidocaldarius and variations thereof.
A Simple Demonstration of a General Rule for the Variation of Magnetic Field with Distance
ERIC Educational Resources Information Center
Kodama, K.
2009-01-01
We describe a simple experiment demonstrating the variation in magnitude of a magnetic field with distance. The method described requires only an ordinary magnetic compass and a permanent magnet. The proposed graphical analysis illustrates a unique method for deducing a general rule of magnetostatics. (Contains 1 table and 6 figures.)
ERIC Educational Resources Information Center
Mahavier, W. Ted
2002-01-01
Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…
Developments in variational methods for high performance plate and shell elements
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.; Militello, Carmelo
1991-01-01
High performance elements are simple finite elements constructed to deliver engineering accuracy with coarse arbitrary grids. This is part of a series on the variational foundations of high-performance elements, with emphasis on plate and shell elements constructed with the free formulation (FF) and assumed natural strain (ANS) methods. Parameterized variational principles are studied that provide a common foundation for the FF and ANS methods, as well as for a combination of both. From this unified formulation a variant of the ANS formulation, called the assumed natural deviatoric strain (ANDES) formulation, emerges as an important special case. The first ANDES element, a high-performance 9 degrees of freedom triangular Kirchhoff plate bending element, is briefly described to illustrate the use of the new formulation.
Estimating nonrigid motion from inconsistent intensity with robust shape features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095
2013-12-15
Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, andmore » regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and mutimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.« less
GEMINI: Integrative Exploration of Genetic Variation and Genome Annotations
Paila, Umadevi; Chapman, Brad A.; Kirchner, Rory; Quinlan, Aaron R.
2013-01-01
Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and adaptable set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate GEMINI's utility for exploring variation in personal genomes and family based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility and our goal is to provide researchers with a standard framework for medical genomics. PMID:23874191
Method for Reducing Pumping Damage to Blood
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor); Akkerman, James W. (Inventor); Aber, Gregory S. (Inventor); VanDamm, George Arthur (Inventor); Bacak, James W. (Inventor); Svejkovsky, Robert J. (Inventor); Benkowski, Robert J. (Inventor)
1997-01-01
Methods are provided for minimizing damage to blood in a blood pump wherein the blood pump comprises a plurality of pump components that may affect blood damage such as clearance between pump blades and housing, number of impeller blades, rounded or flat blade edges, variations in entrance angles of blades, impeller length, and the like. The process comprises selecting a plurality of pump components believed to affect blood damage such as those listed herein before. Construction variations for each of the plurality of pump components are then selected. The pump components and variations are preferably listed in a matrix for easy visual comparison of test results. Blood is circulated through a pump configuration to test each variation of each pump component. After each test, total blood damage is determined for the blood pump. Preferably each pump component variation is tested at least three times to provide statistical results and check consistency of results. The least hemolytic variation for each pump component is preferably selected as an optimized component. If no statistical difference as to blood damage is produced for a variation of a pump component, then the variation that provides preferred hydrodynamic performance is selected. To compare the variation of pump components such as impeller and stator blade geometries, the preferred embodiment of the invention uses a stereolithography technique for realizing complex shapes within a short time period.
A methodology for probabilistic remaining creep life assessment of gas turbine components
NASA Astrophysics Data System (ADS)
Liu, Zhimin
Certain gas turbine components operate in harsh environments and various mechanisms may lead to component failure. It is common practice to use remaining life assessments to help operators schedule maintenance and component replacements. Creep is a major failure mechanisms that affect the remaining life assessment, and the resulting life consumption of a component is highly sensitive to variations in the material stresses and temperatures, which fluctuate significantly due to the changes in real operating conditions. In addition, variations in material properties and geometry will result in changes in creep life consumption rate. The traditional method used for remaining life assessment assumes a set of fixed operating conditions at all times, and it fails to capture the variations in operating conditions. This translates into a significant loss of accuracy and unnecessary high maintenance and replacement cost. A new method that captures these variations described above and improves the prediction accuracy of remaining life is developed. First, a metamodel is built to approximate the relationship between variables (operating conditions, material properties, geometry, etc.) and a creep response. The metamodel is developed using Response Surface Method/Design of Experiments methodology. Design of Experiments is an efficient sampling method, and for each sampling point a set of finite element analyses are used to compute the corresponding response value. Next, a low order polynomial Response Surface Equation (RSE) is used to fit these values. Four techniques are suggested to dramatically reduce computational effort, and to increase the accuracy of the RSE: smart meshing technique, automatic geometry parameterization, screening test and regional RSE refinement. The RSEs, along with a probabilistic method and a life fraction model are used to compute current damage accumulation and remaining life. By capturing the variations mentioned above, the new method results in much better accuracy than that available using the traditional method. After further development and proper verification the method should bring significant savings by reducing the number of inspections and deferring part replacement.
A coupled mode formulation by reciprocity and a variational principle
NASA Technical Reports Server (NTRS)
Chuang, Shun-Lien
1987-01-01
A coupled mode formulation for parallel dielectric waveguides is presented via two methods: a reciprocity theorem and a variational principle. In the first method, a generalized reciprocity relation for two sets of field solutions satisfying Maxwell's equations and the boundary conditions in two different media, respectively, is derived. Based on the generalized reciprocity theorem, the coupled mode equations can then be formulated. The second method using a variational principle is also presented for a general waveguide system which can be lossy. The results of the variational principle can also be shown to be identical to those from the reciprocity theorem. The exact relations governing the 'conventional' and the new coupling coefficients are derived. It is shown analytically that the present formulation satisfies the reciprocity theorem and power conservation exactly, while the conventional theory violates the power conservation and reciprocity theorem by as much as 55 percent and the Hardy-Streifer (1985, 1986) theory by 0.033 percent, for example.
NASA Astrophysics Data System (ADS)
Suhartono, Lee, Muhammad Hisyam; Prastyo, Dedy Dwi
2015-12-01
The aim of this research is to develop a calendar variation model for forecasting retail sales data with the Eid ul-Fitr effect. The proposed model is based on two methods, namely two levels ARIMAX and regression methods. Two levels ARIMAX and regression models are built by using ARIMAX for the first level and regression for the second level. Monthly men's jeans and women's trousers sales in a retail company for the period January 2002 to September 2009 are used as case study. In general, two levels of calendar variation model yields two models, namely the first model to reconstruct the sales pattern that already occurred, and the second model to forecast the effect of increasing sales due to Eid ul-Fitr that affected sales at the same and the previous months. The results show that the proposed two level calendar variation model based on ARIMAX and regression methods yields better forecast compared to the seasonal ARIMA model and Neural Networks.
Vertical profiles of wind and temperature by remote acoustical sounding
NASA Technical Reports Server (NTRS)
Fox, H. L.
1969-01-01
An acoustical method was investigated for obtaining meteorological soundings based on the refraction due to the vertical variation of wind and temperature. The method has the potential of yielding horizontally averaged measurements of the vertical variation of wind and temperature up to heights of a few kilometers; the averaging takes place over a radius of 10 to 15 km. An outline of the basic concepts and some of the results obtained with the method are presented.
NASA Astrophysics Data System (ADS)
Chan, W. Y.; Eggins, S. M.
2017-09-01
Significant diurnal variation in seawater carbonate chemistry occurs naturally in many coral reef environments, yet little is known of its effect on coral calcification. Laboratory studies on the response of corals to ocean acidification have manipulated the carbonate chemistry of experimental seawater to compare calcification rate changes under present-day and predicted future mean pH/Ωarag conditions. These experiments, however, have focused exclusively on differences in mean chemistry and have not considered diurnal variation. The aim of this study was to compare calcification responses of branching coral Acropora formosa under conditions with and without diurnal variation in seawater carbonate chemistry. To achieve this aim, we explored (1) a method to recreate natural diurnal variation in a laboratory experiment using the biological activities of a coral-reef mesocosm, and (2) a multi-laser 3D scanning method to accurately measure coral surface areas, essential to normalize their calcification rates. We present a cost- and time-efficient method of coral surface area estimation that is reproducible within 2% of the mean of triplicate measurements. Calcification rates were compared among corals subjected to a diurnal range in pH (total scale) from 7.8 to 8.2, relative to those at constant pH values of 7.8, 8.0 or 8.2. Mean calcification rates of the corals at the pH 7.8-8.2 (diurnal variation) treatment were not statistically different from the pH 8.2 treatment and were 34% higher than the pH 8.0 treatment despite similar mean seawater pH and Ωarag. Our results suggest that calcification of adult coral colonies may benefit from diurnal variation in seawater carbonate chemistry. Experiments that compare calcification rates at different constant pH without considering diurnal variation may have limitations.
NASA Astrophysics Data System (ADS)
Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan
2018-04-01
Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. Numerical experiment of the projection of the BP model onto the intersection of the total variation norm and box constraints has demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.
NASA Astrophysics Data System (ADS)
Sun, M. L.; Peng, H. B.; Duan, B. H.; Liu, F. F.; Du, X.; Yuan, W.; Zhang, B. T.; Zhang, X. Y.; Wang, T. S.
2018-03-01
Borosilicate glass has potential application for vitrification of high-level radioactive waste, which attracts extensive interest in studying its radiation durability. In this study, sodium borosilicate glass samples were irradiated with 4 MeV Kr17+ ion, 5 MeV Xe26+ ion and 0.3 MeV P+ ion, respectively. The hardness of irradiated borosilicate glass samples was measured with nanoindentation in continuous stiffness mode and quasi continuous stiffness mode, separately. Extrapolation method, mean value method, squared extrapolation method and selected point method are used to obtain hardness of irradiated glass and a comparison among these four methods is conducted. The extrapolation method is suggested to analyze the hardness of ion irradiated glass. With increasing irradiation dose, the values of hardness for samples irradiated with Kr, Xe and P ions dropped and then saturated at 0.02 dpa. Besides, both the maximum variations and decay constants for three kinds of ions with different energies are similar indicates the similarity behind the hardness variation in glasses after irradiation. Furthermore, the hardness variation of low energy P ion irradiated samples whose range is much smaller than those of high energy Kr and Xe ions, has the same trend as that of Kr and Xe ions. It suggested that electronic energy loss did not play a significant role in hardness decrease for irradiation of low energy ions.
Zhao, Guangju; Mu, Xingmin; Jiao, Juying; Gao, Peng; Sun, Wenyi; Li, Erhui; Wei, Yanhong; Huang, Jiacong
2018-05-23
Understanding the relative contributions of climate change and human activities to variations in sediment load is of great importance for regional soil, and river basin management. Considerable studies have investigated spatial-temporal variation of sediment load within the Loess Plateau; however, contradictory findings exist among methods used. This study systematically reviewed six quantitative methods: simple linear regression, double mass curve, sediment identity factor analysis, dam-sedimentation based method, the Sediment Delivery Distributed (SEDD) model, and the Soil Water Assessment Tool (SWAT) model. The calculation procedures and merits for each method were systematically explained. A case study in the Huangfuchuan watershed on the northern Loess Plateau has been undertaken. The results showed that sediment load had been reduced by 70.5% during the changing period from 1990 to 2012 compared to that of the baseline period from 1955 to 1989. Human activities accounted for an average of 93.6 ± 4.1% of the total decline in sediment load, whereas climate change contributed 6.4 ± 4.1%. Five methods produced similar estimates, but the linear regression yielded relatively different results. The results of this study provide a good reference for assessing the effects of climate change and human activities on sediment load variation by using different methods. Copyright © 2018. Published by Elsevier B.V.
Scene-based nonuniformity correction using local constant statistics.
Zhang, Chao; Zhao, Wenyi
2008-06-01
In scene-based nonuniformity correction, the statistical approach assumes all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed pattern noise from spatial variations in the average image. This often causes the "ghosting" artifacts in the corrected images since the existing spatial variations are treated as noises. We introduce a new statistical method to reduce the ghosting artifacts. Our method proposes a local-constant statistics that assumes that the temporal signal distribution is not constant at each pixel but is locally true. This considers statistically a constant distribution in a local region around each pixel but uneven distribution in a larger scale. Under the assumption that the fixed pattern noise concentrates in a higher spatial-frequency domain than the distribution variation, we apply a wavelet method to the gain and offset image of the noise and separate out the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noises. We also apply the method to a challenging CCD video sequence and a LWIR sequence to show how effective it is in reducing noise and the ghosting artifacts.
Image denoising by a direct variational minimization
NASA Astrophysics Data System (ADS)
Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan
2011-12-01
In this article we introduce a novel method for the image de-noising which combines a mathematically well-posdenes of the variational modeling with the efficiency of a patch-based approach in the field of image processing. It based on a direct minimization of an energy functional containing a minimal surface regularizer that uses fractional gradient. The minimization is obtained on every predefined patch of the image, independently. By doing so, we avoid the use of an artificial time PDE model with its inherent problems of finding optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using the information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still to obtain the minimal degradation, we combine a Ritz variational method for the actual minimization on a patch, and a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, by comparing the proposed method with a couple of PDE-based methods, where we get significantly better denoising results specially on the oscillatory regions.
Variational optical flow computation in real time.
Bruhn, Andrés; Weickert, Joachim; Feddern, Christian; Kohlberger, Timo; Schnörr, Christoph
2005-05-01
This paper investigates the usefulness of bidirectional multigrid methods for variational optical flow computations. Although these numerical schemes are among the fastest methods for solving equation systems, they are rarely applied in the field of computer vision. We demonstrate how to employ those numerical methods for the treatment of variational optical flow formulations and show that the efficiency of this approach even allows for real-time performance on standard PCs. As a representative for variational optic flow methods, we consider the recently introduced combined local-global method. It can be considered as a noise-robust generalization of the Horn and Schunck technique. We present a decoupled, as well as a coupled, version of the classical Gauss-Seidel solver, and we develop several multgrid implementations based on a discretization coarse grid approximation. In contrast, with standard bidirectional multigrid algorithms, we take advantage of intergrid transfer operators that allow for nondyadic grid hierarchies. As a consequence, no restrictions concerning the image size or the number of traversed levels have to be imposed. In the experimental section, we juxtapose the developed multigrid schemes and demonstrate their superior performance when compared to unidirectional multgrid methods and nonhierachical solvers. For the well-known 316 x 252 Yosemite sequence, we succeeded in computing the complete set of dense flow fields in three quarters of a second on a 3.06-GHz Pentium4 PC. This corresponds to a frame rate of 18 flow fields per second which outperforms the widely-used Gauss-Seidel method by almost three orders of magnitude.
When things go pear shaped: contour variations of contacts
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2013-04-01
Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure for the CD variation such as the CD range or the standard deviation. Also systematic CD deviations from the mean such as CD signatures are subject to the control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important in the context of contact holes on EUV masks. For EUV masks the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV waver prints in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. Here, the methods of statistical shape analysis are used to analyze CD SEM generated contour data. We demonstrate that contacts on photolithographic masks do not only show size variations but exhibit also pronounced nontrivial shape variations. In our data sets we find pronounced shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. Thus we demonstrate the limitations of classic CD measures for describing the feature variations on masks. Furthermore we show how the methods of statistical shape analysis can be used for quantifying the contour variations thus paving the way to a new understanding of mask linearity and its specification.
NASA Technical Reports Server (NTRS)
Roth, Don J.
1998-01-01
NASA Lewis Research Center's Life Prediction Branch, in partnership with Sonix, Inc., and Cleveland State University, recently advanced the development of, refined, and commercialized an advanced nondestructive evaluation (NDE) inspection method entitled the Single Transducer Thickness-Independent Ultrasonic Imaging Method. Selected by R&D Magazine as one of the 100 most technologically significant new products of 1996, the method uses a single transducer to eliminate the superimposing effects of thickness variation in the ultrasonic images of materials. As a result, any variation seen in the image is due solely to microstructural variation. This nondestructive method precisely and accurately characterizes material gradients (pore fraction, density, or chemical) that affect the uniformity of a material's physical performance (mechanical, thermal, or electrical). Advantages of the method over conventional ultrasonic imaging include (1) elimination of machining costs (for precision thickness control) during the quality control stages of material processing and development and (2) elimination of labor costs and subjectivity involved in further image processing and image interpretation. At NASA Lewis, the method has been used primarily for accurate inspections of high temperature structural materials including monolithic ceramics, metal matrix composites, and polymer matrix composites. Data were published this year for platelike samples, and current research is focusing on applying the method to tubular components. The initial publicity regarding the development of the method generated 150 requests for further information from a wide variety of institutions and individuals including the Federal Bureau of Investigation (FBI), Lockheed Martin Corporation, Rockwell International, Hewlett Packard Company, and Procter & Gamble Company. In addition, NASA has been solicited by the 3M Company and Allison Abrasives to use this method to inspect composite materials that are manufactured by these companies.
NASA Technical Reports Server (NTRS)
Mirels, Harold
1959-01-01
A source distribution method is presented for obtaining flow perturbations due to small unsteady area variations, mass, momentum, and heat additions in a basic uniform (or piecewise uniform) one-dimensional flow. First, the perturbations due to an elemental area variation, mass, momentum, and heat addition are found. The general solution is then represented by a spatial and temporal distribution of these elemental (source) solutions. Emphasis is placed on discussing the physical nature of the flow phenomena. The method is illustrated by several examples. These include the determination of perturbations in basic flows consisting of (1) a shock propagating through a nonuniform tube, (2) a constant-velocity piston driving a shock, (3) ideal shock-tube flows, and (4) deflagrations initiated at a closed end. The method is particularly applicable for finding the perturbations due to relatively thin wall boundary layers.
Analog graphic display method and apparatus
Kronberg, J.W.
1991-08-13
Disclosed are an apparatus and method for using an output device such as an LED to show the approximate analog level of a variable electrical signal wherein a modulating AC waveform is superimposed either on the signal or a reference voltage, both of which are then fed to a comparator which drives the output device. Said device flashes at a constant perceptible rate with a duty cycle which varies in response to variations in the level of the input signal. The human eye perceives these variations in duty cycle as analogous to variations in the level of the input signal. 21 figures.
Total variation approach for adaptive nonuniformity correction in focal-plane arrays.
Vera, Esteban; Meza, Pablo; Torres, Sergio
2011-01-15
In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on the minimization of the total variation of the estimated irradiance, and the resulting function is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonunifomity parameters of each detector in the focal-plane array at a fast convergence rate, while also forming fewer ghosting artifacts.
Analog graphic display method and apparatus
Kronberg, James W.
1991-01-01
An apparatus and method for using an output device such as an LED to show the approximate analog level of a variable electrical signal wherein a modulating AC waveform is superimposed either on the signal or a reference voltage, both of which are then fed to a comparator which drives the output device. Said device flashes at a constant perceptible rate with a duty cycle which varies in response to variations in the level of the input signal. The human eye perceives these variations in duty cycle as analogous to variations in the level of the input signal.
Peselnick, L.; Meister, R.
1965-01-01
Variational principles of anisotropic elasticity have been applied to aggregates of randomly oriented pure-phase polycrystals having hexagonal symmetry and trigonal symmetry. The bounds of the effective elastic moduli obtained in this way show a considerable improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be in most cases a good approximation when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. ?? 1965 The American Institute of Physics.
An MHD variational principle that admits reconnection
NASA Technical Reports Server (NTRS)
Rilee, M. L.; Sudan, R. N.; Pfirsch, D.
1997-01-01
The variational approach of Pfirsch and Sudan's averaged magnetohydrodynamics (MHD) to the stability of a line-tied current layer is summarized. The effect of line-tying on current sheets that might arise in line-tied magnetic flux tubes by estimating the growth rates of a resistive instability using a variational method. The results show that this method provides a potentially new technique to gauge the stability of nearly ideal magnetohydrodynamic systems. The primary implication for the stability of solar coronal structures is that tearing modes are probably constant at work removing magnetic shear from the solar corona.
Li, Hui
2009-11-14
Linear response and variational treatment are formulated for Hartree-Fock (HF) and Kohn-Sham density functional theory (DFT) methods and combined discrete-continuum solvation models that incorporate self-consistently induced dipoles and charges. Due to the variational treatment, analytic nuclear gradients can be evaluated efficiently for these discrete and continuum solvation models. The forces and torques on the induced point dipoles and point charges can be evaluated using simple electrostatic formulas as for permanent point dipoles and point charges, in accordance with the electrostatic nature of these methods. Implementation and tests using the effective fragment potential (EFP, a polarizable force field) method and the conductorlike polarizable continuum model (CPCM) show that the nuclear gradients are as accurate as those in the gas phase HF and DFT methods. Using B3LYP/EFP/CPCM and time-dependent-B3LYP/EFP/CPCM methods, acetone S(0)-->S(1) excitation in aqueous solution is studied. The results are close to those from full B3LYP/CPCM calculations.
NASA Astrophysics Data System (ADS)
Miura, Shinichi
2018-03-01
In this paper, the ground state of para-hydrogen clusters for size regime N ≤ 40 has been studied by our variational path integral molecular dynamics method. Long molecular dynamics calculations have been performed to accurately evaluate ground state properties. The chemical potential of the hydrogen molecule is found to have a zigzag size dependence, indicating the magic number stability for the clusters of the size N = 13, 26, 29, 34, and 39. One-body density of the hydrogen molecule is demonstrated to have a structured profile, not a melted one. The observed magic number stability is examined using the inherent structure analysis. We also have developed a novel method combining our variational path integral hybrid Monte Carlo method with the replica exchange technique. We introduce replicas of the original system bridging from the structured to the melted cluster, which is realized by scaling the potential energy of the system. Using the enhanced sampling method, the clusters are demonstrated to have the structured density profile in the ground state.
Miura, Shinichi
2018-03-14
In this paper, the ground state of para-hydrogen clusters for size regime N ≤ 40 has been studied by our variational path integral molecular dynamics method. Long molecular dynamics calculations have been performed to accurately evaluate ground state properties. The chemical potential of the hydrogen molecule is found to have a zigzag size dependence, indicating the magic number stability for the clusters of the size N = 13, 26, 29, 34, and 39. One-body density of the hydrogen molecule is demonstrated to have a structured profile, not a melted one. The observed magic number stability is examined using the inherent structure analysis. We also have developed a novel method combining our variational path integral hybrid Monte Carlo method with the replica exchange technique. We introduce replicas of the original system bridging from the structured to the melted cluster, which is realized by scaling the potential energy of the system. Using the enhanced sampling method, the clusters are demonstrated to have the structured density profile in the ground state.
NASA Astrophysics Data System (ADS)
Hai-Jung In,; Oh-Kyong Kwon,
2010-03-01
A simple pixel structure using a video data correction method is proposed to compensate for electrical characteristic variations of driving thin-film transistors (TFTs) and the degradation of organic light-emitting diodes (OLEDs) in active-matrix OLED (AMOLED) displays. The proposed method senses the electrical characteristic variations of TFTs and OLEDs and stores them in external memory. The nonuniform emission current of TFTs and the aging of OLEDs are corrected by modulating video data using the stored data. Experimental results show that the emission current error due to electrical characteristic variation of driving TFTs is in the range from -63.1 to 61.4% without compensation, but is decreased to the range from -1.9 to 1.9% with the proposed correction method. The luminance error due to the degradation of an OLED is less than 1.8% when the proposed correction method is used for a 50% degraded OLED.
NASA Astrophysics Data System (ADS)
Kamibayashi, Yuki; Miura, Shinichi
2016-08-01
In the present study, variational path integral molecular dynamics and associated hybrid Monte Carlo (HMC) methods have been developed on the basis of a fourth order approximation of a density operator. To reveal various parameter dependence of physical quantities, we analytically solve one dimensional harmonic oscillators by the variational path integral; as a byproduct, we obtain the analytical expression of the discretized density matrix using the fourth order approximation for the oscillators. Then, we apply our methods to realistic systems like a water molecule and a para-hydrogen cluster. In the HMC, we adopt two level description to avoid the time consuming Hessian evaluation. For the systems examined in this paper, the HMC method is found to be about three times more efficient than the molecular dynamics method if appropriate HMC parameters are adopted; the advantage of the HMC method is suggested to be more evident for systems described by many body interaction.
NASA Astrophysics Data System (ADS)
Rezeau, L.; Belmont, G.; Manuzzo, R.; Aunai, N.; Dargent, J.
2018-01-01
We explore the structure of the magnetopause using a crossing observed by the Magnetospheric Multiscale (MMS) spacecraft on 16 October 2015. Several methods (minimum variance analysis, BV method, and constant velocity analysis) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical, and we show that the whole boundary is not stationary and not planar, so that basic assumptions of these methods are not well satisfied. We then analyze more finely the internal structure for investigating the departures from planarity. Using the basic mathematical definition of what is a one-dimensional physical problem, we introduce a new single spacecraft method, called LNA (local normal analysis) for determining the varying normal, and we compare the results so obtained with those coming from the multispacecraft minimum directional derivative (MDD) tool developed by Shi et al. (2005). This last method gives the dimensionality of the magnetic variations from multipoint measurements and also allows estimating the direction of the local normal when the variations are locally 1-D. This study shows that the magnetopause does include approximate one-dimensional substructures but also two- and three-dimensional structures. It also shows that the dimensionality of the magnetic variations can differ from the variations of other fields so that, at some places, the magnetic field can have a 1-D structure although all the plasma variations do not verify the properties of a global one-dimensional problem. A generalization of the MDD tool is proposed.
Estimating nonrigid motion from inconsistent intensity with robust shape features.
Liu, Wenyang; Ruan, Dan
2013-12-01
To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and mutimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.
Homodyne chiral polarimetry for measuring thermo-optic refractive index variations.
Twu, Ruey-Ching; Wang, Jhao-Sheng
2015-10-10
Novel reflection-type homodyne chiral polarimetry is proposed for measuring the refractive index variations of a transparent plate under thermal impact. The experimental results show it is a simple and useful method for providing accurate measurements of refractive index variations. The measurement can reach a resolution of 7×10-5.
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
Subtelomeric Rearrangements and Copy Number Variations in People with Intellectual Disabilities
ERIC Educational Resources Information Center
Christofolini, D. M.; De Paula Ramos, M. A.; Kulikowski, L. D.; Da Silva Bellucco, F. T.; Belangero, S. I. N.; Brunoni, D.; Melaragno, M. I.
2010-01-01
Background: The most prevalent type of structural variation in the human genome is represented by copy number variations that can affect transcription levels, sequence, structure and function of genes. Method: In the present study, we used the multiplex ligation-dependent probe amplification (MLPA) technique and quantitative PCR for the detection…
JOINT AND INDIVIDUAL VARIATION EXPLAINED (JIVE) FOR INTEGRATED ANALYSIS OF MULTIPLE DATA TYPES.
Lock, Eric F; Hoadley, Katherine A; Marron, J S; Nobel, Andrew B
2013-03-01
Research in several fields now requires the analysis of datasets in which multiple high-dimensional types of data are available for a common set of objects. In particular, The Cancer Genome Atlas (TCGA) includes data from several diverse genomic technologies on the same cancerous tumor samples. In this paper we introduce Joint and Individual Variation Explained (JIVE), a general decomposition of variation for the integrated analysis of such datasets. The decomposition consists of three terms: a low-rank approximation capturing joint variation across data types, low-rank approximations for structured variation individual to each data type, and residual noise. JIVE quantifies the amount of joint variation between data types, reduces the dimensionality of the data, and provides new directions for the visual exploration of joint and individual structure. The proposed method represents an extension of Principal Component Analysis and has clear advantages over popular two-block methods such as Canonical Correlation Analysis and Partial Least Squares. A JIVE analysis of gene expression and miRNA data on Glioblastoma Multiforme tumor samples reveals gene-miRNA associations and provides better characterization of tumor types.
Short and long periodic atmospheric variations between 25 and 200 km
NASA Technical Reports Server (NTRS)
Justus, C. G.; Woodrum, A.
1973-01-01
Previously collected data on atmospheric pressure, density, temperature and winds between 25 and 200 km from sources including Meteorological Rocket Network data, ROBIN falling sphere data, grenade release and pitot tube data, meteor winds, chemical release winds, satellite data, and others were analyzed by a daily difference method and results on the distribution statistics, magnitude, and spatial structure of gravity wave and planetary wave atmospheric variations are presented. Time structure of the gravity wave variations were determined by the analysis of residuals from harmonic analysis of time series data. Planetary wave contributions in the 25-85 km range were discovered and found to have significant height and latitudinal variation. Long period planetary waves, and seasonal variations were also computed by harmonic analysis. Revised height variations of the gravity wave contributions in the 25 to 85 km height range were computed. An engineering method and design values for gravity wave magnitudes and wave lengths are given to be used for such tasks as evaluating the effects on the dynamical heating, stability and control of spacecraft such as the space shuttle vehicle in launch or reentry trajectories.
Money for health: the equivalent variation of cardiovascular diseases.
Groot, Wim; Van Den Brink, Henriëtte Maassen; Plug, Erik
2004-09-01
This paper introduces a new method to calculate the extent to which individuals are willing to trade money for improvements in their health status. An individual welfare function of income (WFI) is applied to calculate the equivalent income variation of health impairments. We believe that this approach avoids various drawbacks of alternative willingness-to-pay methods. The WFI is used to calculate the equivalent variation of cardiovascular diseases. It is found that for a 25 year old male the equivalent variation of a heart disease ranges from 114,000 euro to 380,000 euro depending on the welfare level. This is about 10,000 euro - 30,000 euro for an additional life year. The equivalent variation declines with age and is about the same for men and women. The estimates further vary by discount rate chosen. The estimates of the equivalent variation are generally higher than the money spent on most heart-related medical interventions per QALY. The cost-benefit analysis shows that for most interventions the value of the health benefits exceeds the costs. Heart transplants seem to be too costly and only beneficial if patients are young.
Identification and ranking of environmental threats with ecosystem vulnerability distributions.
Zijp, Michiel C; Huijbregts, Mark A J; Schipper, Aafke M; Mulder, Christian; Posthuma, Leo
2017-08-24
Responses of ecosystems to human-induced stress vary in space and time, because both stressors and ecosystem vulnerabilities vary in space and time. Presently, ecosystem impact assessments mainly take into account variation in stressors, without considering variation in ecosystem vulnerability. We developed a method to address ecosystem vulnerability variation by quantifying ecosystem vulnerability distributions (EVDs) based on monitoring data of local species compositions and environmental conditions. The method incorporates spatial variation of both abiotic and biotic variables to quantify variation in responses among species and ecosystems. We show that EVDs can be derived based on a selection of locations, existing monitoring data and a selected impact boundary, and can be used in stressor identification and ranking for a region. A case study on Ohio's freshwater ecosystems, with freshwater fish as target species group, showed that physical habitat impairment and nutrient loads ranked highest as current stressors, with species losses higher than 5% for at least 6% of the locations. EVDs complement existing approaches of stressor assessment and management, which typically account only for variability in stressors, by accounting for variation in the vulnerability of the responding ecosystems.
Westwood, A; Bullock, D G; Whitehead, T P
1986-01-01
Hexokinase methods for serum glucose assay appeared to give slightly but consistently higher inter-laboratory coefficients of variation than all methods combined in the UK External Quality Assessment Scheme; their performance over a two-year period was therefore compared with that for three groups of glucose oxidase methods. This assessment showed no intrinsic inferiority in the hexokinase method. The greater variation may be due to the more heterogeneous group of instruments, particularly discrete analysers, on which the method is used. The Beckman Glucose Analyzer and Astra group (using a glucose oxidase method) showed the least inter-laboratory variability but also the lowest mean value. No comment is offered on the absolute accuracy of any of the methods.
Fast magnetic resonance imaging based on high degree total variation
NASA Astrophysics Data System (ADS)
Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng
2018-04-01
In order to eliminating the artifacts and "staircase effect" of total variation in Compressive Sensing MRI, high degree total variation model is proposed for dynamic MRI reconstruction. the high degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and the iterative weighted MM algorithm is proposed to solve the convex optimization problem of the reconstructed MR image model, In addtion, one set of cardiac magnetic resonance data is used to verify the proposed algorithm for MRI. The results show that the high degree total variation method has a better reconstruction effect than the total variation and the total generalized variation, which can obtain higher reconstruction SNR and better structural similarity.
Two research studies funded and overseen by EPA have been conducted since October 2006 on soil gas sampling methods and variations in shallow soil gas concentrations with the purpose of improving our understanding of soil gas methods and data for vapor intrusion applications. Al...
2015-01-01
6. Zhang WG, Wilkin JL, Arango HG. Towards an integrated observation and modeling system in the New York Bight using variational methods. Part 1...1992;7:262- 72. ---- -- - ---------------------------- References 391 17. Rosmond TE, Teixeria J, Pcng M, Hogan TF, Pauley R. Navy operational global
Kyle J. Haynes; Andrew M. Liebhold; Ottar N. Bjørnstad; Andrew J. Allstadt; Randall S. Morin
2018-01-01
Evaluating the causes of spatial synchrony in population dynamics in nature is notoriously difficult due to a lack of data and appropriate statistical methods. Here, we use a recently developed method, a multivariate extension of the local indicators of spatial autocorrelation statistic, to map geographic variation in the synchrony of gypsy moth outbreaks. Regression...
ERIC Educational Resources Information Center
Cruzeiro, Vinícius Wilian D.; Roitberg, Adrian; Polfer, Nicolas C.
2016-01-01
In this work we are going to present how an interactive platform can be used as a powerful tool to allow students to better explore a foundational problem in quantum chemistry: the application of the variational method to the dihydrogen molecule using simple Gaussian trial functions. The theoretical approach for the hydrogen atom is quite…
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Wiscombe, W. J.
1994-01-01
A method for detecting cirrus clouds in terms of brightness temperature differences between narrowbands at 8, 11, and 12 microns has been proposed by Ackerman et al. In this method, the variation of emissivity with wavelength for different surface targets was not taken into consideration. Based on state-of-the-art laboratory measurements of reflectance spectra of terrestrial materials by Salisbury and D'Aria, it is found that the brightness temperature differences between the 8- and 11-microns bands for soils, rocks, and minerals, and dry vegetation can vary between approximately -8 and +8 K due solely to surface emissivity variations. The large brightness temperature differences are sufficient to cause false detection of cirrus clouds from remote sensing data acquired over certain surface targets using the 8-11-12-microns method directly. It is suggested that the 8-11-12-microns method should be improved to include the surface emissivity effects. In addition, it is recommended that in the future the variation of surface emissivity with wavelength should be taken into account in algorithms for retrieving surface temperatures and low-level atmospheric temperature and water vapor profiles.
Variational approach to direct and inverse problems of atmospheric pollution studies
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2016-04-01
We present the development of a variational approach for solving interrelated problems of atmospheric hydrodynamics and chemistry concerning air pollution transport and transformations. The proposed approach allows us to carry out complex studies of different-scale physical and chemical processes using the methods of direct and inverse modeling [1-3]. We formulate the problems of risk/vulnerability and uncertainty assessment, sensitivity studies, variational data assimilation procedures [4], etc. A computational technology of constructing consistent mathematical models and methods of their numerical implementation is based on the variational principle in the weak constraint formulation specifically designed to account for uncertainties in models and observations. Algorithms for direct and inverse modeling are designed with the use of global and local adjoint problems. Implementing the idea of adjoint integrating factors provides unconditionally monotone and stable discrete-analytic approximations for convection-diffusion-reaction problems [5,6]. The general framework is applied to the direct and inverse problems for the models of transport and transformation of pollutants in Siberian and Arctic regions. The work has been partially supported by the RFBR grant 14-01-00125 and RAS Presidium Program I.33P. References: 1. V. Penenko, A.Baklanov, E. Tsvetova and A. Mahura . Direct and inverse problems in a variational concept of environmental modeling //Pure and Applied Geoph.(2012) v.169: 447-465. 2. V. V. Penenko, E. A. Tsvetova, and A. V. Penenko Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry, Izvestiya, Atmospheric and Oceanic Physics, 2015, Vol. 51, No. 3, p. 311-319, DOI: 10.1134/S0001433815030093. 3. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Methods based on the joint use of models and observational data in the framework of variational approach to forecasting weather and atmospheric composition quality// Russian meteorology and hydrology, V. 40, Issue: 6, Pages: 365-373, DOI: 10.3103/S1068373915060023. 4. A.V. Penenko and V.V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme. Computational technologies, 19(4):69-83, 2014. 5. V.V. Penenko, E.A. Tsvetova, A.V. Penenko Variational approach and Euler's integrating factors for environmental studies// Computers and Mathematics with Applications, 2014, V.67, Issue 12, Pages 2240-2256, DOI:10.1016/j.camwa.2014.04.004 6. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical analysis and applications, 2013, V. 6, Issue 3, pp 210-220, DOI 10.1134/S199542391303004X
Satellite and Model Analysis of the Atmospheric Moisture Budget in High Latitudes
NASA Technical Reports Server (NTRS)
Bromwich, David H.; Chen, Qui-Shi
2001-01-01
In order to understand variations of accumulation over Greenland, it is necessary to investigate precipitation and its variations. Observations of precipitation over Greenland are limited and generally inaccurate, but the analyzed wind, geopotential height, and moisture fields are available for recent years. The objective of this study is to enhance the dynamic method for retrieving high resolution precipitation over Greenland from the analyzed fields. The dynamic method enhanced in this study is referred to as the improved dynamic method.
Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny
2016-08-16
Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples-unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.
Blankena, Roos; Kleinloog, Rachel; Verweij, Bon H.; van Ooij, Pim; ten Haken, Bennie; Luijten, Peter R.; Rinkel, Gabriel J.E.; Zwanenburg, Jaco J.M.
2016-01-01
Purpose To develop a method for semi-quantitative wall thickness assessment on in vivo 7.0 tesla (7T) MRI images of intracranial aneurysms for studying the relation between apparent aneurysm wall thickness and wall shear stress. Materials and Methods Wall thickness was analyzed in 11 unruptured aneurysms in 9 patients, who underwent 7T MRI with a TSE based vessel wall sequence (0.8 mm isotropic resolution). A custom analysis program determined the in vivo aneurysm wall intensities, which were normalized to signal of nearby brain tissue and were used as measure for apparent wall thickness (AWT). Spatial wall thickness variation was determined as the interquartile range in AWT (the middle 50% of the AWT range). Wall shear stress was determined using phase contrast MRI (0.5 mm isotropic resolution). We performed visual and statistical comparisons (Pearson’s correlation) to study the relation between wall thickness and wall shear stress. Results 3D colored AWT maps of the aneurysms showed spatial AWT variation, which ranged from 0.07 to 0.53, with a mean variation of 0.22 (a variation of 1.0 roughly means a wall thickness variation of one voxel (0.8mm)). In all aneurysms, AWT was inversely related to WSS (mean correlation coefficient −0.35, P<0.05). Conclusions A method was developed to measure the wall thickness semi-quantitatively, using 7T MRI. An inverse correlation between wall shear stress and AWT was determined. In future studies, this non-invasive method can be used to assess spatial wall thickness variation in relation to pathophysiologic processes such as aneurysm growth and –rupture. PMID:26892986
Identification of structural variation in mouse genomes.
Keane, Thomas M; Wong, Kim; Adams, David J; Flint, Jonathan; Reymond, Alexandre; Yalcin, Binnaz
2014-01-01
Structural variation is variation in structure of DNA regions affecting DNA sequence length and/or orientation. It generally includes deletions, insertions, copy-number gains, inversions, and transposable elements. Traditionally, the identification of structural variation in genomes has been challenging. However, with the recent advances in high-throughput DNA sequencing and paired-end mapping (PEM) methods, the ability to identify structural variation and their respective association to human diseases has improved considerably. In this review, we describe our current knowledge of structural variation in the mouse, one of the prime model systems for studying human diseases and mammalian biology. We further present the evolutionary implications of structural variation on transposable elements. We conclude with future directions on the study of structural variation in mouse genomes that will increase our understanding of molecular architecture and functional consequences of structural variation.
Gupta, Munish; Kaplan, Heather C
2017-09-01
Quality improvement (QI) is based on measuring performance over time, and variation in data measured over time must be understood to guide change and make optimal improvements. Common cause variation is natural variation owing to factors inherent to any process; special cause variation is unnatural variation owing to external factors. Statistical process control methods, and particularly control charts, are robust tools for understanding data over time and identifying common and special cause variation. This review provides a practical introduction to the use of control charts in health care QI, with a focus on neonatology. Copyright © 2017 Elsevier Inc. All rights reserved.
Cox, Holly D; Eichner, Daniel
2017-09-19
The dried blood spot (DBS) matrix has significant utility for applications in the field where venous blood collection and timely shipment of labile blood samples is difficult. Unfortunately, protein measurement in DBS is hindered by high abundance proteins and matrix interference that increases with hematocrit. We developed a DBS method to enrich for membrane proteins and remove soluble proteins and matrix interference. Following a wash in a series of buffers, the membrane proteins are digested with trypsin and quantitated by parallel reaction monitoring mass spectrometry methods. The DBS method was applied to the quantification of four cell-specific cluster of differentiation (CD) proteins used to count cells by flow cytometry, band 3 (CD233), CD71, CD45, and CD41. We demonstrate that the DBS method counts low abundance cell types such as immature reticulocytes as well as high abundance cell types such as red blood cells, white blood cells, and platelets. When tested in 82 individuals, counts obtained by the DBS method demonstrated good agreement with flow cytometry and automated hematology analyzers. Importantly, the method allows longitudinal monitoring of CD protein concentration and calculation of interindividual variation which is difficult by other methods. Interindividual variation of band 3 and CD45 was low, 6 and 8%, respectively, while variation of CD41 and CD71 was higher, 18 and 78%, respectively. Longitudinal measurement of CD71 concentration in DBS over an 8-week period demonstrated intraindividual variation 17.1-38.7%. Thus, the method may allow stable longitudinal measurement of blood parameters currently monitored to detect blood doping practices.
Relevant Feature Set Estimation with a Knock-out Strategy and Random Forests
Ganz, Melanie; Greve, Douglas N.; Fischl, Bruce; Konukoglu, Ender
2015-01-01
Group analysis of neuroimaging data is a vital tool for identifying anatomical and functional variations related to diseases as well as normal biological processes. The analyses are often performed on a large number of highly correlated measurements using a relatively smaller number of samples. Despite the correlation structure, the most widely used approach is to analyze the data using univariate methods followed by post-hoc corrections that try to account for the data’s multivariate nature. Although widely used, this approach may fail to recover from the adverse effects of the initial analysis when local effects are not strong. Multivariate pattern analysis (MVPA) is a powerful alternative to the univariate approach for identifying relevant variations. Jointly analyzing all the measures, MVPA techniques can detect global effects even when individual local effects are too weak to detect with univariate analysis. Current approaches are successful in identifying variations that yield highly predictive and compact models. However, they suffer from lessened sensitivity and instabilities in identification of relevant variations. Furthermore, current methods’ user-defined parameters are often unintuitive and difficult to determine. In this article, we propose a novel MVPA method for group analysis of high-dimensional data that overcomes the drawbacks of the current techniques. Our approach explicitly aims to identify all relevant variations using a “knock-out” strategy and the Random Forest algorithm. In evaluations with synthetic datasets the proposed method achieved substantially higher sensitivity and accuracy than the state-of-the-art MVPA methods, and outperformed the univariate approach when the effect size is low. In experiments with real datasets the proposed method identified regions beyond the univariate approach, while other MVPA methods failed to replicate the univariate results. More importantly, in a reproducibility study with the well-known ADNI dataset the proposed method yielded higher stability and power than the univariate approach. PMID:26272728
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Jung, Hyung-Jo
2014-03-01
It is well known that the dynamic properties of a structure such as natural frequencies depend not only on damage but also on environmental condition (e.g., temperature). The variation in dynamic characteristics of a structure due to environmental condition may mask damage of the structure. Without taking the change of environmental condition into account, false-positive or false-negative damage diagnosis may occur so that structural health monitoring becomes unreliable. In order to address this problem, an approach to construct a regression model based on structural responses considering environmental factors has been usually used by many researchers. The key to success of this approach is the formulation between the input and output variables of the regression model to take into account the environmental variations. However, it is quite challenging to determine proper environmental variables and measurement locations in advance for fully representing the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method is deeply depending on the accuracy of the description of normal condition. Generally, there is no prior information on normal condition during data acquisition, so that the normal condition is determined by subjective perspective with human-intervention. The proposed method is a novel adaptive multivariate statistical analysis for monitoring of structural damage detection under environmental change. One advantage of this method is the ability of a generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of noise in measurement under environmental variation. A comparative study with conventional methods (i.e., fixed reference scheme) demonstrates the superior performance of the proposed method for structural damage detection.
On some variational acceleration techniques and related methods for local refinement
NASA Astrophysics Data System (ADS)
Teigland, Rune
1998-10-01
This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966) and later generalized to multilevels (known as the additive correction multigrid method (B.R Huthchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986))) is similar to the FAC method of McCormick and Thomas (S.F McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement and suggestions for improving the performance of the method are given.
Thickness and resistivity variations over the upper surface of the human skull.
Law, S K
1993-01-01
A study of skull thickness and resistivity variations over the upper surface was made for an adult human skull. Physical measurements of thickness and qualitative analysis of photographs and CT scans of the skull were performed to determine internal and external features of the skull. Resistivity measurements were made using the four-electrode method and ranged from 1360 to 21400 Ohm-cm with an overall mean of 7560 +/- 4130 Ohm-cm. The presence of sutures was found to decrease resistivity substantially. The absence of cancellous bone was found to increase resistivity, particularly for samples from the temporal bone. An inverse relationship between skull thickness and resistivity was determined for trilayer bone (n = 12, p < 0.001). The results suggest that the skull cannot be considered a uniform layer and that local resistivity variations should be incorporated into realistic geometric and resistive head models to improve resolution in EEG. Influences of these variations on head models, methods for determining these variations, and incorporation into realistic head models, are discussed.
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.
1991-01-01
The second step in development of MODEL III is summarized. It combines the four radiative transfer equations of the first step with the equations for a geostrophic and hydrostatic atmosphere. This step is intended to bring radiance into a three dimensional balance with wind, height, and temperature. The use of the geostrophic approximation in place of the full set of primitive equations allows for an easier evaluation of how the inclusion of the radiative transfer equation increases the complexity of the variational equations. Seven different variational formulations were developed for geostrophic, hydrostatic, and radiative transfer equations. The first derivation was too complex to yield solutions that were physically meaningful. For the remaining six derivations, the variational method gave the same physical interpretation (the observed brightness temperatures could provide no meaningful input to a geostrophic, hydrostatic balance) at least through the problem solving methodology used in these studies. The variational method is presented and the Euler-Lagrange equations rederived for the geostrophic, hydrostatic, and radiative transfer equations.
Smith, Amanda L.; Benazzi, Stefano; Ledogar, Justin A.; Tamvada, Kelli; Smith, Leslie C. Pryor; Weber, Gerhard W.; Spencer, Mark A.; Dechow, Paul C.; Grosse, Ian R.; Ross, Callum F.; Richmond, Brian G.; Wright, Barth W.; Wang, Qian; Byron, Craig; Slice, Dennis E.; Strait, David S.
2014-01-01
In a broad range of evolutionary studies, an understanding of intraspecific variation is needed in order to contextualize and interpret the meaning of variation between species. However, mechanical analyses of primate crania using experimental or modeling methods typically encounter logistical constraints that force them to rely on data gathered from only one or a few individuals. This results in a lack of knowledge concerning the mechanical significance of intraspecific shape variation that limits our ability to infer the significance of interspecific differences. This study uses geometric morphometric methods (GM) and finite element analysis (FEA) to examine the biomechanical implications of shape variation in chimpanzee crania, thereby providing a comparative context in which to interpret shape-related mechanical variation between hominin species. Six finite element models (FEMs) of chimpanzee crania were constructed from CT scans following shape-space Principal Component Analysis (PCA) of a matrix of 709 Procrustes coordinates (digitized onto 21 specimens) to identify the individuals at the extremes of the first three principal components. The FEMs were assigned the material properties of bone and were loaded and constrained to simulate maximal bites on the P3 and M2. Resulting strains indicate that intraspecific cranial variation in morphology is associated with quantitatively high levels of variation in strain magnitudes, but qualitatively little variation in the distribution of strain concentrations. Thus, interspecific comparisons should include considerations of the spatial patterning of strains rather than focus only their magnitude. PMID:25529239
Vertical Bridgman growth of Hg 1-xMn xTe with variational withdrawal rate
NASA Astrophysics Data System (ADS)
Zhi, Gu; Wan-Qi, Jie; Guo-Qiang, Li; Long, Zhang
2004-09-01
Based on the solute redistribution models, Vertical Bridgman growth of Hg1-xMnxTe with variational withdrawal rate is studied. Both theoretical analysis and experimental results show that the axial composition uniformity is improved and the crystal growth rate is also increased at the optimized variational method of withdrawal rate.
Isogeometric Divergence-conforming B-splines for the Steady Navier-Stokes Equations
2012-04-01
discretizations produce pointwise divergence-free velocity elds and hence exactly satisfy mass conservation. Consequently, discrete variational formulations...cretizations produce pointwise divergence-free velocity fields and hence exactly satisfy mass conservation. Consequently, discrete variational ... variational formulation. Using a combination of an advective for- mulation, SUPG, PSPG, and grad-div stabilization, provably convergent numerical methods
Measurement and Socio-Demographic Variation of Social Capital in a Large Population-Based Survey
ERIC Educational Resources Information Center
Nieminen, Tarja; Martelin, Tuija; Koskinen, Seppo; Simpura, Jussi; Alanen, Erkki; Harkanen, Tommi; Aromaa, Arpo
2008-01-01
Objectives: The main objective of this study was to describe the variation of individual social capital according to socio-demographic factors, and to develop a suitable way to measure social capital for this purpose. The similarity of socio-demographic variation between the genders was also assessed. Data and methods: The study applied…
BayesPI-BAR: a new biophysical model for characterization of regulatory sequence variations
Wang, Junbai; Batmanov, Kirill
2015-01-01
Sequence variations in regulatory DNA regions are known to cause functionally important consequences for gene expression. DNA sequence variations may have an essential role in determining phenotypes and may be linked to disease; however, their identification through analysis of massive genome-wide sequencing data is a great challenge. In this work, a new computational pipeline, a Bayesian method for protein–DNA interaction with binding affinity ranking (BayesPI-BAR), is proposed for quantifying the effect of sequence variations on protein binding. BayesPI-BAR uses biophysical modeling of protein–DNA interactions to predict single nucleotide polymorphisms (SNPs) that cause significant changes in the binding affinity of a regulatory region for transcription factors (TFs). The method includes two new parameters (TF chemical potentials or protein concentrations and direct TF binding targets) that are neglected by previous methods. The new method is verified on 67 known human regulatory SNPs, of which 47 (70%) have predicted true TFs ranked in the top 10. Importantly, the performance of BayesPI-BAR, which uses principal component analysis to integrate multiple predictions from various TF chemical potentials, is found to be better than that of existing programs, such as sTRAP and is-rSNP, when evaluated on the same SNPs. BayesPI-BAR is a publicly available tool and is able to carry out parallelized computation, which helps to investigate a large number of TFs or SNPs and to detect disease-associated regulatory sequence variations in the sea of genome-wide noncoding regions. PMID:26202972
Intra- and Inter-Fractional Variation Prediction of Lung Tumors Using Fuzzy Deep Learning
Park, Seonyeong; Lee, Suk Jin; Weiss, Elisabeth
2016-01-01
Tumor movements should be accurately predicted to improve delivery accuracy and reduce unnecessary radiation exposure to healthy tissue during radiotherapy. The tumor movements pertaining to respiration are divided into intra-fractional variation occurring in a single treatment session and inter-fractional variation arising between different sessions. Most studies of patients’ respiration movements deal with intra-fractional variation. Previous studies on inter-fractional variation are hardly mathematized and cannot predict movements well due to inconstant variation. Moreover, the computation time of the prediction should be reduced. To overcome these limitations, we propose a new predictor for intra- and inter-fractional data variation, called intra- and inter-fraction fuzzy deep learning (IIFDL), where FDL, equipped with breathing clustering, predicts the movement accurately and decreases the computation time. Through the experimental results, we validated that the IIFDL improved root-mean-square error (RMSE) by 29.98% and prediction overshoot by 70.93%, compared with existing methods. The results also showed that the IIFDL enhanced the average RMSE and overshoot by 59.73% and 83.27%, respectively. In addition, the average computation time of IIFDL was 1.54 ms for both intra- and inter-fractional variation, which was much smaller than the existing methods. Therefore, the proposed IIFDL might achieve real-time estimation as well as better tracking techniques in radiotherapy. PMID:27170914
Lisovskiĭ, A A; Pavlinov, I Ia
2008-01-01
Any morphospace is partitioned by the forms of group variation, its structure is described by a set of scalar (range, overlap) and vector (direction) characteristics. They are analyzed quantitatively for the sex and age variations in the sample of 200 skulls of the pine marten described by 14 measurable traits. Standard dispersion and variance components analyses are employed, accompanied with several resampling methods (randomization and bootstrep); effects of changes in the analysis design on results of the above methods are also considered. Maximum likelihood algorithm of variance components analysis is shown to give an adequate estimates of portions of particular forms of group variation within the overall disparity. It is quite stable in respect to changes of the analysis design and therefore could be used in the explorations of the real data with variously unbalanced designs. A new algorithm of estimation of co-directionality of particular forms of group variation within the overall disparity is elaborated, which includes angle measures between eigenvectors of covariation matrices of effects of group variations calculated by dispersion analysis. A null hypothesis of random portion of a given group variation could be tested by means of randomization of the respective grouping variable. A null hypothesis of equality of both portions and directionalities of different forms of group variation could be tested by means of the bootstrep procedure.
NASA Astrophysics Data System (ADS)
Huang, Wei; Chen, Xiu; Wang, Yueyun
2018-03-01
Landsat data are widely used in various earth observations, but the clouds interfere with the applications of the images. This paper proposes a weighted variational gradient-based fusion method (WVGBF) for high-fidelity thin cloud removal of Landsat images, which is an improvement of the variational gradient-based fusion (VGBF) method. The VGBF method integrates the gradient information from the reference band into visible bands of cloudy image to enable spatial details and remove thin clouds. The VGBF method utilizes the same gradient constraints to the entire image, which causes the color distortion in cloudless areas. In our method, a weight coefficient is introduced into the gradient approximation term to ensure the fidelity of image. The distribution of weight coefficient is related to the cloud thickness map. The map is built on Independence Component Analysis (ICA) by using multi-temporal Landsat images. Quantitatively, we use R value to evaluate the fidelity in the cloudless regions and metric Q to evaluate the clarity in the cloud areas. The experimental results indicate that the proposed method has the better ability to remove thin cloud and achieve high fidelity.
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into two following subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and a L2 total variation problem. We apply the conjugate gradient method and split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
Face landmark point tracking using LK pyramid optical flow
NASA Astrophysics Data System (ADS)
Zhang, Gang; Tang, Sikan; Li, Jiaquan
2018-04-01
LK pyramid optical flow is an effective method to implement object tracking in a video. It is used for face landmark point tracking in a video in the paper. The landmark points, i.e. outer corner of left eye, inner corner of left eye, inner corner of right eye, outer corner of right eye, tip of a nose, left corner of mouth, right corner of mouth, are considered. It is in the first frame that the landmark points are marked by hand. For subsequent frames, performance of tracking is analyzed. Two kinds of conditions are considered, i.e. single factors such as normalized case, pose variation and slowly moving, expression variation, illumination variation, occlusion, front face and rapidly moving, pose face and rapidly moving, and combination of the factors such as pose and illumination variation, pose and expression variation, pose variation and occlusion, illumination and expression variation, expression variation and occlusion. Global measures and local ones are introduced to evaluate performance of tracking under different factors or combination of the factors. The global measures contain the number of images aligned successfully, average alignment error, the number of images aligned before failure, and the local ones contain the number of images aligned successfully for components of a face, average alignment error for the components. To testify performance of tracking for face landmark points under different cases, tests are carried out for image sequences gathered by us. Results show that the LK pyramid optical flow method can implement face landmark point tracking under normalized case, expression variation, illumination variation which does not affect facial details, pose variation, and that different factors or combination of the factors have different effect on performance of alignment for different landmark points.
Lithium Enolates of Simple Ketones: Structure Determination Using the Method of Continuous Variation
Liou, Lara R.; McNeil, Anne J.; Ramirez, Antonio; Toombes, Gilman E. S.; Gruver, Jocelyn M.
2009-01-01
The method of continuous variation in conjunction with 6Li NMR spectroscopy was used to characterize lithium enolates derived from 1-indanone, cyclohexanone, and cyclopentanone in solution. The strategy relies on forming ensembles of homo- and heteroaggregated enolates. The enolates form exclusively chelated dimers in N,N,N’,N’-tetramethylethylenediamine and cubic tetramers in tetrahydrofuran and 1,2-dimethoxyethane. PMID:18336025
NASA Astrophysics Data System (ADS)
Singh, K.; Sandu, A.; Bowman, K. W.; Parrington, M.; Jones, D. B. A.; Lee, M.
2011-08-01
Chemistry transport models determine the evolving chemical state of the atmosphere by solving the fundamental equations that govern physical and chemical transformations subject to initial conditions of the atmospheric state and surface boundary conditions, e.g., surface emissions. The development of data assimilation techniques synthesize model predictions with measurements in a rigorous mathematical framework that provides observational constraints on these conditions. Two families of data assimilation methods are currently widely used: variational and Kalman filter (KF). The variational approach is based on control theory and formulates data assimilation as a minimization problem of a cost functional that measures the model-observations mismatch. The Kalman filter approach is rooted in statistical estimation theory and provides the analysis covariance together with the best state estimate. Suboptimal Kalman filters employ different approximations of the covariances in order to make the computations feasible with large models. Each family of methods has both merits and drawbacks. This paper compares several data assimilation methods used for global chemical data assimilation. Specifically, we evaluate data assimilation approaches for improving estimates of the summertime global tropospheric ozone distribution in August 2006 based on ozone observations from the NASA Tropospheric Emission Spectrometer and the GEOS-Chem chemistry transport model. The resulting analyses are compared against independent ozonesonde measurements to assess the effectiveness of each assimilation method. All assimilation methods provide notable improvements over the free model simulations, which differ from the ozonesonde measurements by about 20 % (below 200 hPa). Four dimensional variational data assimilation with window lengths between five days and two weeks is the most accurate method, with mean differences between analysis profiles and ozonesonde measurements of 1-5 %. Two sequential assimilation approaches (three dimensional variational and suboptimal KF), although derived from different theoretical considerations, provide similar ozone estimates, with relative differences of 5-10 % between the analyses and ozonesonde measurements. Adjoint sensitivity analysis techniques are used to explore the role of of uncertainties in ozone precursors and their emissions on the distribution of tropospheric ozone. A novel technique is introduced that projects 3-D-Variational increments back to an equivalent initial condition, which facilitates comparison with 4-D variational techniques.
NASA Astrophysics Data System (ADS)
Zhao, Y.; Wang, B.; Wang, Y.
2007-12-01
Recently, a new data assimilation method called “3-dimensional variational data assimilation of mapped observation (3DVM)” has been developed by the authors. We have shown that the new method is very efficient and inexpensive compared with its counterpart 4-dimensional variational data assimilation (4DVar). The new method has been implemented into the Penn State/NCAR mesoscale model MM5V1 (MM5_3DVM). In this study, we apply the new method to the bogus data assimilation (BDA) available in the original MM5 with the 4DVar. By the new approach, a specified sea-level pressure (SLP) field (bogus data) is incorporated into MM5 through the 3DVM (for convenient, we call it variational bogus mapped data assimilation - BMDA) instead of the original 4DVar data assimilation. To demonstrate the effectiveness of the new 3DVM method, initialization and simulation of a landfalling typhoon - typhoon Dan (1999) over the western North Pacific with the new method are compared with that with its counterpart 4DVar in MM5. Results show that the initial structure and the simulated intensity and track are improved more significantly using 3DVM than 4DVar. Sensitivity experiments also show that the simulated typhoon track and intensity are more sensitive to the size of the assimilation window in the 4DVar than that in the 3DVM. Meanwhile, 3DVM takes much less computing cost than its counterpart 4DVar for a given time window.
Spatial Normalization of Reverse Phase Protein Array Data
Kaushik, Poorvi; Molinelli, Evan J.; Miller, Martin L.; Wang, Weiqing; Korkut, Anil; Liu, Wenbin; Ju, Zhenlin; Lu, Yiling; Mills, Gordon; Sander, Chris
2014-01-01
Reverse phase protein arrays (RPPA) are an efficient, high-throughput, cost-effective method for the quantification of specific proteins in complex biological samples. The quality of RPPA data may be affected by various sources of error. One of these, spatial variation, is caused by uneven exposure of different parts of an RPPA slide to the reagents used in protein detection. We present a method for the determination and correction of systematic spatial variation in RPPA slides using positive control spots printed on each slide. The method uses a simple bi-linear interpolation technique to obtain a surface representing the spatial variation occurring across the dimensions of a slide. This surface is used to calculate correction factors that can normalize the relative protein concentrations of the samples on each slide. The adoption of the method results in increased agreement between technical and biological replicates of various tumor and cell-line derived samples. Further, in data from a study of the melanoma cell-line SKMEL-133, several slides that had previously been rejected because they had a coefficient of variation (CV) greater than 15%, are rescued by reduction of CV below this threshold in each case. The method is implemented in the R statistical programing language. It is compatible with MicroVigene and SuperCurve, packages commonly used in RPPA data analysis. The method is made available, along with suggestions for implementation, at http://bitbucket.org/rppa_preprocess/rppa_preprocess/src. PMID:25501559
Maintenance of genetic diversity through plant-herbivore interactions
Gloss, Andrew D.; Dittrich, Anna C. Nelson; Goldman-Huertas, Benjamin; Whiteman, Noah K.
2013-01-01
Identifying the factors governing the maintenance of genetic variation is a central challenge in evolutionary biology. New genomic data, methods and conceptual advances provide increasing evidence that balancing selection, mediated by antagonistic species interactions, maintains functionally-important genetic variation within species and natural populations. Because diverse interactions between plants and herbivorous insects dominate terrestrial communities, they provide excellent systems to address this hypothesis. Population genomic studies of Arabidopsis thaliana and its relatives suggest spatial variation in herbivory maintains adaptive genetic variation controlling defense phenotypes, both within and among populations. Conversely, inter-species variation in plant defenses promotes adaptive genetic variation in herbivores. Emerging genomic model herbivores of Arabidopsis could illuminate how genetic variation in herbivores and plants interact simultaneously. PMID:23834766
DOE Office of Scientific and Technical Information (OSTI.GOV)
More, J. J.; Sorensen, D. C.
1982-02-01
Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of themore » recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.« less
Marinho, V C; Richards, D; Niederman, R
2001-05-01
Variation in health care, and more particularly in dental care, was recently chronicled in a Readers Digest investigative report. The conclusions of this report are consistent with sound scientific studies conducted in various areas of health care, including dental care, which demonstrate substantial variation in the care provided to patients. This variation in care parallels the certainty with which clinicians and faculty members often articulate strongly held, but very different opinions. Using a case-based dental scenario, we present systematic evidence-based methods for accessing dental health care information, evaluating this information for validity and importance, and using this information to make informed curricular and clinical decisions. We also discuss barriers inhibiting these systematic approaches to evidence-based clinical decision making and methods for effectively promoting behavior change in health care professionals.
Feng, Zhao; Kui-Dong, Xu; Zhao-Cui, Meng
2012-12-01
By using denaturing gradient gel electrophoresis (DGGE) and sequencing as well as Ludox-QPS method, an investigation was made on the ciliate diversity and its spatiotemporal variation in the surface sediments at three sites of Yangtze River estuary hypoxic zone in April and August 2011. The ANOSIM analysis indicated that the ciliate diversity had significant difference among the sites (R = 0.896, P = 0.0001), but less difference among seasons (R = 0.043, P = 0.207). The sequencing of 18S rDNA DGGE bands revealed that the most predominant groups were planktonic Choreotrichia and Oligotrichia. The detection by Ludox-QPS method showed that the species number and abundance of active ciliates were maintained at a higher level, and increased by 2-5 times in summer, as compared with those in spring. Both the Ludox-QPS method and the DGGE technique detected that the ciliate diversity at the three sites had the similar variation trend, and the Ludox-QPS method detected that there was a significant variation in the ciliate species number and abundance between different seasons. The species number detected by Ludox-QPS method was higher than that detected by DGGE bands. Our study indicated that the ciliates in Yangtze River estuary hypoxic zone had higher diversity and abundance, with the potential to supply food for the polyps of jellyfish.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Carney, Dorothy V.; Baaklini, George Y.; Bodis, James R.; Rauser, Richard W.
1998-01-01
Ultrasonic velocity/time-of-flight imaging that uses back surface reflections to gauge volumetric material quality is highly suited for quantitative characterization of microstructural gradients including those due to pore fraction, density, fiber fraction, and chemical composition variations. However, a weakness of conventional pulse-echo ultrasonic velocity/time-of-flight imaging is that the image shows the effects of thickness as well as microstructural variations unless the part is uniformly thick. This limits this imaging method's usefulness in practical applications. Prior studies have described a pulse-echo time-of-flight-based ultrasonic imaging method that requires using a single transducer in combination with a reflector plate placed behind samples that eliminates the effect of thickness variation in the image. In those studies, this method was successful at isolating ultrasonic variations due to material microstructure in plate-like samples of silicon nitride, metal matrix composite, and polymer matrix composite. In this study, the method is engineered for inspection of more complex-shaped structures-those having (hollow) tubular/curved geometry. The experimental inspection technique and results are described as applied to (1) monolithic mullite ceramic and polymer matrix composite 'proof-of-concept' tubular structures that contain machined patches of various depths and (2) as-manufactured monolithic silicon nitride ceramic and silicon carbide/silicon carbide composite tubular structures that might be used in 'real world' applications.
Lopez-Martin, Manuel; Carro, Belen; Sanchez-Esguevillas, Antonio; Lloret, Jaime
2017-08-26
The purpose of a Network Intrusion Detection System is to detect intrusive, malicious activities or policy violations in a host or host's network. In current networks, such systems are becoming more important as the number and variety of attacks increase along with the volume and sensitiveness of the information exchanged. This is of particular interest to Internet of Things networks, where an intrusion detection system will be critical as its economic importance continues to grow, making it the focus of future intrusion attacks. In this work, we propose a new network intrusion detection method that is appropriate for an Internet of Things network. The proposed method is based on a conditional variational autoencoder with a specific architecture that integrates the intrusion labels inside the decoder layers. The proposed method is less complex than other unsupervised methods based on a variational autoencoder and it provides better classification results than other familiar classifiers. More important, the method can perform feature reconstruction, that is, it is able to recover missing features from incomplete training datasets. We demonstrate that the reconstruction accuracy is very high, even for categorical features with a high number of distinct values. This work is unique in the network intrusion detection field, presenting the first application of a conditional variational autoencoder and providing the first algorithm to perform feature recovery.
Carro, Belen; Sanchez-Esguevillas, Antonio
2017-01-01
The purpose of a Network Intrusion Detection System is to detect intrusive, malicious activities or policy violations in a host or host’s network. In current networks, such systems are becoming more important as the number and variety of attacks increase along with the volume and sensitiveness of the information exchanged. This is of particular interest to Internet of Things networks, where an intrusion detection system will be critical as its economic importance continues to grow, making it the focus of future intrusion attacks. In this work, we propose a new network intrusion detection method that is appropriate for an Internet of Things network. The proposed method is based on a conditional variational autoencoder with a specific architecture that integrates the intrusion labels inside the decoder layers. The proposed method is less complex than other unsupervised methods based on a variational autoencoder and it provides better classification results than other familiar classifiers. More important, the method can perform feature reconstruction, that is, it is able to recover missing features from incomplete training datasets. We demonstrate that the reconstruction accuracy is very high, even for categorical features with a high number of distinct values. This work is unique in the network intrusion detection field, presenting the first application of a conditional variational autoencoder and providing the first algorithm to perform feature recovery. PMID:28846608
Delanghe, Joris R; Cobbaert, Christa; Galteau, Marie-Madeleine; Harmoinen, Aimo; Jansen, Rob; Kruse, Rolf; Laitinen, Päivi; Thienpont, Linda M; Wuyts, Birgitte; Weykamp, Cas; Panteghini, Mauro
2008-01-01
The European In Vitro Diagnostics (IVD) directive requires traceability to reference methods and materials of analytes. It is a task of the profession to verify the trueness of results and IVD compatibility. The results of a trueness verification study by the European Communities Confederation of Clinical Chemistry (EC4) working group on creatinine standardization are described, in which 189 European laboratories analyzed serum creatinine in a commutable serum-based material, using analytical systems from seven companies. Values were targeted using isotope dilution gas chromatography/mass spectrometry. Results were tested on their compliance to a set of three criteria: trueness, i.e., no significant bias relative to the target value, between-laboratory variation and within-laboratory variation relative to the maximum allowable error. For the lower and intermediate level, values differed significantly from the target value in the Jaffe and the dry chemistry methods. At the high level, dry chemistry yielded higher results. Between-laboratory coefficients of variation ranged from 4.37% to 8.74%. Total error budget was mainly consumed by the bias. Non-compensated Jaffe methods largely exceeded the total error budget. Best results were obtained for the enzymatic method. The dry chemistry method consumed a large part of its error budget due to calibration bias. Despite the European IVD directive and the growing needs for creatinine standardization, an unacceptable inter-laboratory variation was observed, which was mainly due to calibration differences. The calibration variation has major clinical consequences, in particular in pediatrics, where reference ranges for serum and plasma creatinine are low, and in the estimation of glomerular filtration rate.
A parameter-free variational coupling approach for trimmed isogeometric thin shells
NASA Astrophysics Data System (ADS)
Guo, Yujie; Ruess, Martin; Schillinger, Dominik
2017-04-01
The non-symmetric variant of Nitsche's method was recently applied successfully for variationally enforcing boundary and interface conditions in non-boundary-fitted discretizations. In contrast to its symmetric variant, it does not require stabilization terms and therefore does not depend on the appropriate estimation of stabilization parameters. In this paper, we further consolidate the non-symmetric Nitsche approach by establishing its application in isogeometric thin shell analysis, where variational coupling techniques are of particular interest for enforcing interface conditions along trimming curves. To this end, we extend its variational formulation within Kirchhoff-Love shell theory, combine it with the finite cell method, and apply the resulting framework to a range of representative shell problems based on trimmed NURBS surfaces. We demonstrate that the non-symmetric variant applied in this context is stable and can lead to the same accuracy in terms of displacements and stresses as its symmetric counterpart. Based on our numerical evidence, the non-symmetric Nitsche method is a viable parameter-free alternative to the symmetric variant in elastostatic shell analysis.
Single nucleotide variations: Biological impact and theoretical interpretation
Katsonis, Panagiotis; Koire, Amanda; Wilson, Stephen Joseph; Hsu, Teng-Kuei; Lua, Rhonald C; Wilkins, Angela Dawn; Lichtarge, Olivier
2014-01-01
Genome-wide association studies (GWAS) and whole-exome sequencing (WES) generate massive amounts of genomic variant information, and a major challenge is to identify which variations drive disease or contribute to phenotypic traits. Because the majority of known disease-causing mutations are exonic non-synonymous single nucleotide variations (nsSNVs), most studies focus on whether these nsSNVs affect protein function. Computational studies show that the impact of nsSNVs on protein function reflects sequence homology and structural information and predict the impact through statistical methods, machine learning techniques, or models of protein evolution. Here, we review impact prediction methods and discuss their underlying principles, their advantages and limitations, and how they compare to and complement one another. Finally, we present current applications and future directions for these methods in biological research and medical genetics. PMID:25234433
NASA Astrophysics Data System (ADS)
Wong, Kin-Yiu; Gao, Jiali
2007-12-01
Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good accordance with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.
Fat scoring: Sources of variability
Krementz, D.G.; Pendleton, G.W.
1990-01-01
Fat scoring is a widely used nondestructive method of assessing total body fat in birds. This method has not been rigorously investigated. We investigated inter- and intraobserver variability in scoring as well as the predictive ability of fat scoring using five species of passerines. Between-observer variation in scoring was variable and great at times. Observers did not consistently score species higher or lower relative to other observers nor did they always score birds with more total body fat higher. We found that within-observer variation was acceptable but was dependent on the species being scored. The precision of fat scoring was species-specific and for most species, fat scores accounted for less than 50% of the variation in true total body fat. Overall, we would describe fat scoring as a fairly precise method of indexing total body fat but with limited reliability among observers.
Importance of parametrizing constraints in quantum-mechanical variational calculations
NASA Technical Reports Server (NTRS)
Chung, Kwong T.; Bhatia, A. K.
1992-01-01
In variational calculations of quantum mechanics, constraints are sometimes imposed explicitly on the wave function. These constraints, which are deduced by physical arguments, are often not uniquely defined. In this work, the advantage of parametrizing constraints and letting the variational principle determine the best possible constraint for the problem is pointed out. Examples are carried out to show the surprising effectiveness of the variational method if constraints are parameterized. It is also shown that misleading results may be obtained if a constraint is not parameterized.
Li, Zhao; Dosso, Stan E; Sun, Dajun
2016-07-01
This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brizard, Alain J.; Tronci, Cesare
The variational formulations of guiding-center Vlasov-Maxwell theory based on Lagrange, Euler, and Euler-Poincaré variational principles are presented. Each variational principle yields a different approach to deriving guiding-center polarization and magnetization effects into the guiding-center Maxwell equations. The conservation laws of energy, momentum, and angular momentum are also derived by Noether method, where the guiding-center stress tensor is now shown to be explicitly symmetric.
Variational Principles for Buckling of Microtubules Modeled as Nonlocal Orthotropic Shells
2014-01-01
A variational principle for microtubules subject to a buckling load is derived by semi-inverse method. The microtubule is modeled as an orthotropic shell with the constitutive equations based on nonlocal elastic theory and the effect of filament network taken into account as an elastic surrounding. Microtubules can carry large compressive forces by virtue of the mechanical coupling between the microtubules and the surrounding elastic filament network. The equations governing the buckling of the microtubule are given by a system of three partial differential equations. The problem studied in the present work involves the derivation of the variational formulation for microtubule buckling. The Rayleigh quotient for the buckling load as well as the natural and geometric boundary conditions of the problem is obtained from this variational formulation. It is observed that the boundary conditions are coupled as a result of nonlocal formulation. It is noted that the analytic solution of the buckling problem for microtubules is usually a difficult task. The variational formulation of the problem provides the basis for a number of approximate and numerical methods of solutions and furthermore variational principles can provide physical insight into the problem. PMID:25214886
Artificial mismatch hybridization
Guo, Zhen; Smith, Lloyd M.
1998-01-01
An improved nucleic acid hybridization process is provided which employs a modified oligonucleotide and improves the ability to discriminate a control nucleic acid target from a variant nucleic acid target containing a sequence variation. The modified probe contains at least one artificial mismatch relative to the control nucleic acid target in addition to any mismatch(es) arising from the sequence variation. The invention has direct and advantageous application to numerous existing hybridization methods, including, applications that employ, for example, the Polymerase Chain Reaction, allele-specific nucleic acid sequencing methods, and diagnostic hybridization methods.
Natural frequencies of thin rectangular plates clamped on contour using the Finite Element Method
NASA Astrophysics Data System (ADS)
(Barboni Haţiegan, L.; Haţiegan, C.; Gillich, G. R.; Hamat, C. O.; Vasile, O.; Stroia, M. D.
2018-01-01
This paper presents the determining of natural frequencies of plates without and with damages using the finite element method of SolidWorks program. The first thirty natural frequencies obtained for thin rectangular rectangular plates clamped on contour without and with central damages a for different dimensions. The relative variation of natural frequency was determined and the obtained results by the finite element method (FEM) respectively relative variation of natural frequency, were graphically represented according to their vibration natural modes. Finally, the obtained results were compared.
Measuring the surface tension of a liquid-gas interface by automatic stalagmometer
NASA Astrophysics Data System (ADS)
Molina, C.; Victoria, L.; Arenas, A.
2000-06-01
We present a variation of the stalagmometer method for automatically determining the surface tension of a liquid-gas interface using a pressure sensor to measure the pressure variation per drop. The presented method does not depend on a knowledge of the density of the problem liquid and obtains values with a measurement error in the range of 1%-2%. Its low cost and simplicity mean that the technique can be used in the teaching and instrumentation laboratory in the same way as other methods.
Introduction of Total Variation Regularization into Filtered Backprojection Algorithm
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
In this paper we extend the state-of-the-art filtered backprojection (FBP) method with application of the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularizing in the FBP image reconstruction via apodizing functions. The methods are validated in terms of cross-correlation coefficient between reconstructed and real image of radioactive tracer distribution using standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural design generated by traditional method, optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and weight is back calculated. In design optimization, the weight of a structure becomes the merit function with constraints imposed on failure modes and an optimization algorithm is used to generate the solution. Stochastic design concept accounts for uncertainties in loads, material properties, and other parameters and solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all the three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in designs calculated by the methods. The variation may be attributed to structural indeterminacy. It is prudent to develop design by all three methods prior to its fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraint is used. Such sensitivity can reduce design calculations and may have a potential to unify the traditional and optimization methods. Weight versus reliabilitytraced out an inverted-S-shaped graph. The center of the graph corresponded to mean valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
NASA Astrophysics Data System (ADS)
Yang, Haijian; Sun, Shuyu; Yang, Chao
2017-03-01
Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
On the Total Variation of High-Order Semi-Discrete Central Schemes for Conservation Laws
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron
2004-01-01
We discuss a new fifth-order, semi-discrete, central-upwind scheme for solving one-dimensional systems of conservation laws. This scheme combines a fifth-order WENO reconstruction, a semi-discrete central-upwind numerical flux, and a strong stability preserving Runge-Kutta method. We test our method with various examples, and give particular attention to the evolution of the total variation of the approximations.
ERIC Educational Resources Information Center
Ninemire, B.; Mei, W. N.
2004-01-01
In applying the variational method, six different sets of trial wave functions are used to calculate the ground state and first excited state energies of the strongly bound potentials, i.e. V(x)=x[2m], where m = 4, 5 and 6. It is shown that accurate results can be obtained from thorough analysis of the asymptotic behaviour of the solutions.…
Tracking and recognition face in videos with incremental local sparse representation model
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang
2013-10-01
This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First a robust face tracking algorithm is proposed via employing local sparse appearance and covariance pooling method. In the following face recognition stage, with the employment of a novel template update strategy, which combines incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to a robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate at 95% of all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In the case of the challenging dataset in which faces undergo occlusion and illumination variation, and tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Instanes, Geir; Pedersen, Audun; Toppe, Mads; Nagy, Peter B.
2009-03-01
This paper describes a novel ultrasonic guided wave inspection technique for the monitoring of internal corrosion and erosion in pipes, which exploits the fundamental flexural mode to measure the average wall thickness over the inspection path. The inspection frequency is chosen so that the group velocity of the fundamental flexural mode is essentially constant throughout the wall thickness range of interest, while the phase velocity is highly dispersive and changes in a systematic way with varying wall thickness in the pipe. Although this approach is somewhat less accurate than the often used transverse resonance methods, it smoothly integrates the wall thickness over the whole propagation length, therefore it is very robust and can tolerate large and uneven thickness variations from point to point. The constant group velocity (CGV) method is capable of monitoring the true average of the wall thickness over the inspection length with an accuracy of 1% even in the presence of one order of magnitude larger local variations. This method also eliminates spurious variations caused by changing temperature, which can cause fairly large velocity variations, but do not significantly influence the dispersion as measured by the true phase angle in the vicinity of the CGV point. The CGV guided wave CEM method was validated in both laboratory and field tests.
Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos
2014-01-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564
NASA Astrophysics Data System (ADS)
Witantyo; Setyawan, David
2018-03-01
In a lead acid battery industry, grid casting is a process that has high defect and thickness variation level. DMAIC (Define-Measure-Analyse-Improve-Control) method and its tools will be used to improve the casting process. In the Define stage, it is used project charter and SIPOC (Supplier Input Process Output Customer) method to map the existent problem. In the Measure stage, it is conducted a data retrieval related to the types of defect and the amount of it, also the grid thickness variation that happened. And then the retrieved data is processed and analyzed by using 5 Why’s and FMEA method. In the Analyze stage, it is conducted a grid observation that experience fragile and crack type of defect by using microscope showing the amount of oxide Pb inclusion in the grid. Analysis that is used in grid casting process shows the difference of temperature that is too high between the metal fluid and mold temperature, also the corking process that doesn’t have standard. The Improve stage is conducted a fixing process which generates the reduction of grid variation thickness level and defect/unit level from 9,184% to 0,492%. In Control stage, it is conducted a new working standard determination and already fixed control process.
MEM spectral analysis for predicting influenza epidemics in Japan.
Sumi, Ayako; Kamo, Ken-ichi
2012-03-01
The prediction of influenza epidemics has long been the focus of attention in epidemiology and mathematical biology. In this study, we tested whether time series analysis was useful for predicting the incidence of influenza in Japan. The method of time series analysis we used consists of spectral analysis based on the maximum entropy method (MEM) in the frequency domain and the nonlinear least squares method in the time domain. Using this time series analysis, we analyzed the incidence data of influenza in Japan from January 1948 to December 1998; these data are unique in that they covered the periods of pandemics in Japan in 1957, 1968, and 1977. On the basis of the MEM spectral analysis, we identified the periodic modes explaining the underlying variations of the incidence data. The optimum least squares fitting (LSF) curve calculated with the periodic modes reproduced the underlying variation of the incidence data. An extension of the LSF curve could be used to predict the incidence of influenza quantitatively. Our study suggested that MEM spectral analysis would allow us to model temporal variations of influenza epidemics with multiple periodic modes much more effectively than by using the method of conventional time series analysis, which has been used previously to investigate the behavior of temporal variations in influenza data.
Schwinger-variational-principle theory of collisions in the presence of multiple potentials
NASA Astrophysics Data System (ADS)
Robicheaux, F.; Giannakeas, P.; Greene, Chris H.
2015-08-01
A theoretical method for treating collisions in the presence of multiple potentials is developed by employing the Schwinger variational principle. The current treatment agrees with the local (regularized) frame transformation theory and extends its capabilities. Specifically, the Schwinger variational approach gives results without the divergences that need to be regularized in other methods. Furthermore, it provides a framework to identify the origin of these singularities and possibly improve the local frame transformation. We have used the method to obtain the scattering parameters for different confining potentials symmetric in x ,y . The method is also used to treat photodetachment processes in the presence of various confining potentials, thereby highlighting effects of the infinitely many closed channels. Two general features predicted are the vanishing of the total photoabsorption probability at every channel threshold and the occurrence of resonances below the channel thresholds for negative scattering lengths. In addition, the case of negative-ion photodetachment in the presence of uniform magnetic fields is also considered where unique features emerge at large scattering lengths.
Effect of Temperature on Ultrasonic Signal Propagation for Extra Virgin Olive Oil Adulteration
NASA Astrophysics Data System (ADS)
Alias, N. A.; Hamid, S. B. Abdul; Sophian, A.
2017-11-01
Fraud cases involving adulteration of extra virgin olive oil has become significant nowadays due to increasing in cost of supply and highlight given the benefit of extra virgin olive oil for human consumption. This paper presents the effects of temperature variation on spectral formed utilising pulse-echo technique of ultrasound signal. Several methods had been introduced to characterize the adulteration of extra virgin olive oil with other fluid sample such as mass chromatography, standard method by ASTM (density test, distillation test and evaporation test) and mass spectrometer. Pulse-echo method of ultrasound being a non-destructive method to be used to analyse the sound wave signal captured by oscilloscope. In this paper, a non-destructive technique utilizing ultrasound to characterize extra virgin olive oil adulteration level will be presented. It can be observed that frequency spectrum of sample with different ratio and variation temperature shows significant percentages different from 30% up to 70% according to temperature variation thus possible to be used for sample characterization.
Elastic least-squares reverse time migration with velocities and density perturbation
NASA Astrophysics Data System (ADS)
Qu, Yingming; Li, Jinli; Huang, Jianping; Li, Zhenchun
2018-02-01
Elastic least-squares reverse time migration (LSRTM) based on the non-density-perturbation assumption can generate false-migrated interfaces caused by density variations. We perform an elastic LSRTM scheme with density variations for multicomponent seismic data to produce high-quality images in Vp, Vs and ρ components. However, the migrated images may suffer from crosstalk artefacts caused by P- and S-waves coupling in elastic LSRTM no matter what model parametrizations used. We have proposed an elastic LSRTM with density variations method based on wave modes separation to reduce these crosstalk artefacts by using P- and S-wave decoupled elastic velocity-stress equations to derive demigration equations and gradient formulae with respect to Vp, Vs and ρ. Numerical experiments with synthetic data demonstrate the capability and superiority of the proposed method. The imaging results suggest that our method promises imaging results with higher quality and has a faster residual convergence rate. Sensitivity analysis of migration velocity, migration density and stochastic noise verifies the robustness of the proposed method for field data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Rosenfield, J; Dong, X
2016-06-15
Purpose: Rotational total skin electron irradiation (RTSEI) is used in the treatment of cutaneous T-cell lymphoma. Due to inter-film uniformity variations the dosimetry measurement of a large electron beam of a very low energy is challenging. This work provides a method to improve the accuracy of flatness and symmetry for a very large treatment field of low electron energy used in dual beam RTSEI. Methods: RTSEI is delivered by dual angles field a gantry of ±20 degrees of 270 to cover the upper and the lower halves of the patient body with acceptable beam uniformity. The field size is inmore » the order of 230cm in vertical height and 120 cm in horizontal width and beam energy is a degraded 6 MeV (6 mm of PMMA spoiler). We utilized parallel plate chambers, Gafchromic films and OSLDs as a measuring devices for absolute dose, B-Factor, stationary and rotational percent depth dose and beam uniformity. To reduce inter-film dosimetric variation we introduced a new specific correction method to analyze beam uniformity. This correction method uses some image processing techniques combining film value before and after radiation dose to compensate the inter-variation dose response differences among films. Results: Stationary and rotational depth of dose demonstrated that the Rp is 2 cm for rotational and the maximum dose is shifted toward the surface (3mm). The dosimetry for the phantom showed that dose uniformity reduced to 3.01% for the vertical flatness and 2.35% for horizontal flatness after correction thus achieving better flatness and uniformity. The absolute dose readings of calibrated films after our correction matched with the readings from OSLD. Conclusion: The proposed correction method for Gafchromic films will be a useful tool to correct inter-film dosimetric variation for the future clinical film dosimetry verification in very large fields, allowing the optimizations of other parameters.« less
Denoising Medical Images using Calculus of Variations
Kohan, Mahdi Nakhaie; Behnam, Hamid
2011-01-01
We propose a method for medical image denoising using calculus of variations and local variance estimation by shaped windows. This method reduces any additive noise and preserves small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method has visual improvement as well as a better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results in denoising a sample Magnetic Resonance image show that SNR, PSNR and RMSE have been improved by 19, 9 and 21 percents respectively. PMID:22606674
Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.
2013-01-01
We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user for building a statistical model which captures the texture and shape variations of the nuclear structures from a given dataset to be segmented. Segmentation of subsequent, unlabeled, images is then performed by finding the model instance that best matches (in the normalized cross correlation sense) local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the template-based method we propose presents increased robustness in the sense of better handling variations in illumination, variations in texture from different imaging modalities, providing more smooth and accurate segmentation borders, as well as handling better cluttered nuclei. PMID:23568787
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.
2018-06-01
Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angular undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and since energy channels are mutually correlated it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.
Comparing two Bayes methods based on the free energy functions in Bernoulli mixtures.
Yamazaki, Keisuke; Kaji, Daisuke
2013-08-01
Hierarchical learning models are ubiquitously employed in information science and data engineering. The structure makes the posterior distribution complicated in the Bayes method. Then, the prediction including construction of the posterior is not tractable though advantages of the method are empirically well known. The variational Bayes method is widely used as an approximation method for application; it has the tractable posterior on the basis of the variational free energy function. The asymptotic behavior has been studied in many hierarchical models and a phase transition is observed. The exact form of the asymptotic variational Bayes energy is derived in Bernoulli mixture models and the phase diagram shows that there are three types of parameter learning. However, the approximation accuracy or interpretation of the transition point has not been clarified yet. The present paper precisely analyzes the Bayes free energy function of the Bernoulli mixtures. Comparing free energy functions in these two Bayes methods, we can determine the approximation accuracy and elucidate behavior of the parameter learning. Our results claim that the Bayes free energy has the same learning types while the transition points are different. Copyright © 2013 Elsevier Ltd. All rights reserved.
A Continuous Variation Study of Heats of Neutralization.
ERIC Educational Resources Information Center
Mahoney, Dennis W.; And Others
1981-01-01
Suggests that students study heats of neutralization of a 1 M solution of an unknown acid by 1 M solution of a strong base using the method continuous variation. Reviews results using several common acids. (SK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steenbergen, K. G., E-mail: kgsteen@gmail.com; Gaston, N.
2014-02-14
Inspired by methods of remote sensing image analysis, we analyze structural variation in cluster molecular dynamics (MD) simulations through a unique application of the principal component analysis (PCA) and Pearson Correlation Coefficient (PCC). The PCA analysis characterizes the geometric shape of the cluster structure at each time step, yielding a detailed and quantitative measure of structural stability and variation at finite temperature. Our PCC analysis captures bond structure variation in MD, which can be used to both supplement the PCA analysis as well as compare bond patterns between different cluster sizes. Relying only on atomic position data, without requirement formore » a priori structural input, PCA and PCC can be used to analyze both classical and ab initio MD simulations for any cluster composition or electronic configuration. Taken together, these statistical tools represent powerful new techniques for quantitative structural characterization and isomer identification in cluster MD.« less
NASA Technical Reports Server (NTRS)
Winfree, William P.; Howell, Patricia A.; Zalameda, Joseph N.
2014-01-01
Flaw detection and characterization with thermographic techniques in graphite polymer composites are often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, fiber volume content and surface polymer thickness result in variations in the thermal response that in general cause significant variations in the initial thermal response. These result in a "noise" floor that increases the difficulty of detecting and characterizing deeper flaws. A method is presented for computationally removing a significant amount of the "noise" from near surface porosity by diffusing the early time response, then subtracting it from subsequent responses. Simulations of the thermal response of a composite are utilized in defining the limitations of the technique. This method for reducing the data is shown to give considerable improvement characterizing both the size and depth of damage. Examples are shown for data acquired on specimens with fabricated delaminations and impact damage.
Zhang, Yanyan; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen; Wang, Qian; Wang, Jun; Guo, Yunzhu; Yin, Dachuan
2012-07-30
We report a real-time measurement method of the solution concentration variation during the growth of protein-lysozyme crystals based on digital holographic interferometry. A series of holograms containing the information of the solution concentration variation in the whole crystallization process is recorded by CCD. Based on the principle of double-exposure holographic interferometry and the relationship between the phase difference of the reconstructed object wave and the solution concentration, the solution concentration variation with time for arbitrary point in the solution can be obtained, and then the two-dimensional concentration distribution of the solution during crystallization process can also be figured out under the precondition which the refractive index is constant through the light propagation direction. The experimental results turns out that it is feasible to in situ, full-field and real-time monitor the crystal growth process by using this method.
Variational Approach to Enhanced Sampling and Free Energy Calculations
NASA Astrophysics Data System (ADS)
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, beside being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
Effect of Ice-Shell Thickness Variations on the Tidal Deformation of Enceladus
NASA Astrophysics Data System (ADS)
Choblet, G.; Cadek, O.; Behounkova, M.; Tobie, G.; Kozubek, T.
2015-12-01
Recent analysis of Enceladus's gravity and topography has suggested that the thickness of the ice shell significantly varies laterally - from 30-40 km in the south polar region to 60 km elsewhere. These variations may influence the activity of the geysers and increase the tidal heat production in regions where the ice shell is thinned. Using a model including a regional or global subsurface ocean and Maxwell viscoelasticity, we investigate the impact of these variations on the tidal deformation of the moon and its heat production. For that purpose, we use different numerical approaches - finite elements, local application of 1d spectral method, and a generalized spectral method. Results obtained with these three approaches for various models of ice-shell thickness variations are presented and compared. Implications of a reduced ice shell thickness for the south polar terrain activity are discussed.
NASA Astrophysics Data System (ADS)
Winfree, William P.; Howell, Patricia A.; Zalameda, Joseph N.
2014-05-01
Flaw detection and characterization with thermographic techniques in graphite polymer composites are often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, fiber volume content and surface polymer thickness result in variations in the thermal response that in general cause significant variations in the initial thermal response. These result in a "noise" floor that increases the difficulty of detecting and characterizing deeper flaws. A method is presented for computationally removing a significant amount of the "noise" from near surface porosity by diffusing the early time response, then subtracting it from subsequent responses. Simulations of the thermal response of a composite are utilized in defining the limitations of the technique. This method for reducing the data is shown to give considerable improvement characterizing both the size and depth of damage. Examples are shown for data acquired on specimens with fabricated delaminations and impact damage.
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2007-01-01
A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate mode
Longitudinal variability in Jupiter's zonal winds derived from multi-wavelength HST observations
NASA Astrophysics Data System (ADS)
Johnson, Perianne E.; Morales-Juberías, Raúl; Simon, Amy; Gaulme, Patrick; Wong, Michael H.; Cosentino, Richard G.
2018-06-01
Multi-wavelength Hubble Space Telescope (HST) images of Jupiter from the Outer Planets Atmospheres Legacy (OPAL) and Wide Field Coverage for Juno (WFCJ) programs in 2015, 2016, and 2017 are used to derive wind profiles as a function of latitude and longitude. Wind profiles are typically zonally averaged to reduce measurement uncertainties. However, doing this destroys any variations of the zonal-component of winds in the longitudinal direction. Here, we present the results derived from using a "sliding-window" correlation method. This method adds longitudinal specificity, and allows for the detection of spatial variations in the zonal winds. Spatial variations are identified in two jets: 1 at 17 ° N, the location of a prominent westward jet, and the other at 7 ° S, the location of the chevrons. Temporal and spatial variations at the 24°N jet and the 5-μm hot spots are also examined.
Steenbergen, K G; Gaston, N
2014-02-14
Inspired by methods of remote sensing image analysis, we analyze structural variation in cluster molecular dynamics (MD) simulations through a unique application of the principal component analysis (PCA) and Pearson Correlation Coefficient (PCC). The PCA analysis characterizes the geometric shape of the cluster structure at each time step, yielding a detailed and quantitative measure of structural stability and variation at finite temperature. Our PCC analysis captures bond structure variation in MD, which can be used to both supplement the PCA analysis as well as compare bond patterns between different cluster sizes. Relying only on atomic position data, without requirement for a priori structural input, PCA and PCC can be used to analyze both classical and ab initio MD simulations for any cluster composition or electronic configuration. Taken together, these statistical tools represent powerful new techniques for quantitative structural characterization and isomer identification in cluster MD.
Identification of hydrological model parameter variation using ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao
2016-12-01
Hydrological model parameters play an important role in the ability of model prediction. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The technique of ensemble Kalman filter (EnKF) is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating the runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variations, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) has temporal variations while no obvious change patterns exist. The proposed method provides an effective tool for quantifying the temporal variations of the model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
Evaluating abundance and trends in a Hawaiian avian community using state-space analysis
Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.
2016-01-01
Estimating population abundances and patterns of change over time are important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate population trajectory. However, changes in abundance estimates from year-to-year across time are due to both true variation in population size (process variation) and variation due to imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates provide a better representation of the real world biological processes of interest because they are partitioning process variation (environmental and demographic variation) and observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.
Short-term landfill methane emissions dependency on wind.
Delkash, Madjid; Zhou, Bowen; Han, Byunghyun; Chow, Fotini K; Rella, Chris W; Imhoff, Paul T
2016-09-01
Short-term (2-10h) variations of whole-landfill methane emissions have been observed in recent field studies using the tracer dilution method for emissions measurement. To investigate the cause of these variations, the tracer dilution method is applied using 1-min emissions measurements at Sandtown Landfill (Delaware, USA) for a 2-h measurement period. An atmospheric dispersion model is developed for this field test site, which is the first application of such modeling to evaluate atmospheric effects on gas plume transport from landfills. The model is used to examine three possible causes of observed temporal emissions variability: temporal variability of surface wind speed affecting whole landfill emissions, spatial variability of emissions due to local wind speed variations, and misaligned tracer gas release and methane emissions locations. At this site, atmospheric modeling indicates that variation in tracer dilution method emissions measurements may be caused by whole-landfill emissions variation with wind speed. Field data collected over the time period of the atmospheric model simulations corroborate this result: methane emissions are correlated with wind speed on the landfill surface with R(2)=0.51 for data 2.5m above ground, or R(2)=0.55 using data 85m above ground, with emissions increasing by up to a factor of 2 for an approximately 30% increase in wind speed. Although the atmospheric modeling and field test are conducted at a single landfill, the results suggest that wind-induced emissions may affect tracer dilution method emissions measurements at other landfills. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik
2009-11-14
Efficient optimization of the basis set is key to achieving a very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in the variational calculations of H(3) where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm(-1)) obtained in the calculation with 1000 Gaussians are the most accurate results to date.
Analysing the magnetopause internal structure: new possibilities offered by MMS
NASA Astrophysics Data System (ADS)
Belmont, G.; Rezeau, L.; Manuzzo, R.; Aunai, N.; Dargent, J.
2017-12-01
We explore the structure of the magnetopause using a crossing observed by the MMS spacecraft on October 16th, 2015. Several methods (MVA, BV, CVA) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical and we show that the whole boundary is not stationary and not planar, so that basic assumptions of these methods are not well satisfied. We then analyse more finely the internal structure for investigating the departures from planarity. Using the basic mathematical definition of what is a one-dimensional physical problem, we introduce a new method, called LNA (Local Normal Analysis) for determining the varying normal, and we compare the results so obtained with those coming from the MDD tool developed by [Shi et al., 2005]. This method gives the dimensionality of the magnetic variations from multi-point measurements and allows estimating the direction of the local normal using the magnetic field. On the other hand, LNA is a single-spacecraft method which gives the local normal from the magnetic field and particle data. This study shows that the magnetopause does include approximate one-dimensional sub-structures but also two and three dimensional intervals. It also shows that the dimensionality of the magnetic variations can differ from the variations of the other fields so that, at some places, the magnetic field can have a 1D structure although all the plasma variations do not verify the properties of a global one-dimensional problem. Finally a generalisation and a systematic application of the MDD method to the physical quantities of interest is shown.
Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J
2017-05-01
Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
A variational theorem for creep with applications to plates and columns
NASA Technical Reports Server (NTRS)
Sanders, J Lyell, Jr; Mccomb, Harvey G , Jr; Schlechte, Floyd R
1958-01-01
A variational theorem is presented for a body undergoing creep. Solutions to problems of the creep behavior of plates, columns, beams, and shells can be obtained by means of the direct methods of the calculus of variations in conjunction with the stated theorem. The application of the theorem is illustrated for plates and columns by the solution of two sample problems.
Li, Danhui; Martini, Nataly; Wu, Zimei; Wen, Jingyuan
2012-10-01
The aim of this study was to develop a simple, rapid and accurate isocratic HPLC analytical method to qualify and quantify five catechin derivatives, namely (+)-catechin (C), (-)-epigallocatechin (EGC), (-)-epicatechin gallate (ECG), (-)-epicatechin (EC) and (-)-epigallocatechin gallate (EGCG). To validate the analytical method, linearity, repeatability, intermediate precision, sensitivity, selectivity and recovery were investigated. The five catechin derivatives were completely separated by HPLC using a mobile phase containing 0.1% TFA in Milli-Q water (pH 2.0) mixed with methanol at the volume ratio of 75:25 at a flow rate of 0.8 ml/min. The method was shown to be linear (r²>0.99), repeatable with instrumental precision<2.0 and intra-assay precision<2.5 (%CV, percent coefficient of variation), precise with intra-day variation<1 and inter-day variation<2.5 (%CV, percent coefficient of variation) and sensitive (LOD<1 μg/mL and LOQ<3 μg/mL) over the calibration range for all five derivatives. Derivatives could be fully recovered in the presence of niosomal formulation (recovery rates>91%). Selectivity of the method was proven by the forced degradation studies, which showed that under acidic, basic, oxidation temperature and photolysis stresses, the parent drug can be separated from the degradation products by means of this analytical method. The described method was successfully applied in the in vitro release studies of catechin-loaded niosomes to manifest its utility in formulation characterization. Obtained results indicated that the drug release from niosomal formulations was a biphasic process and a diffusion mechanism regulated the permeation of catechin niosomes. Copyright © 2012 Elsevier B.V. All rights reserved.
Gray, Allan; Wright, Alex; Jackson, Pete; Hale, Mike; Treanor, Darren
2015-03-01
Histochemical staining of tissue is a fundamental technique in tissue diagnosis and research, but it suffers from significant variability. Efforts to address this include laboratory quality controls and quality assurance schemes, but these rely on subjective interpretation of stain quality, are laborious and have low reproducibility. We aimed (1) to develop a method for histochemical stain quantification using whole slide imaging and image analysis and (2) to demonstrate its usefulness in measuring staining variation. A method to quantify the individual stain components of histochemical stains on virtual slides was developed. It was evaluated for repeatability and reproducibility, then applied to control sections of an appendix to quantify H&E staining (H/E intensities and H:E ratio) between automated staining machines and to measure differences between six regional diagnostic laboratories. The method was validated with <0.5% variation in H:E ratio measurement when using the same scanner for a batch of slides (ie, it was repeatable) but was not highly reproducible between scanners or over time, where variation of 7% was found. Application of the method showed H:E ratios between three staining machines varied from 0.69 to 0.93, H:E ratio variation over time was observed. Interlaboratory comparison demonstrated differences in H:E ratio between regional laboratories from 0.57 to 0.89. A simple method using whole slide imaging can be used to quantify and compare histochemical staining. This method could be deployed in routine quality assurance and quality control. Work is needed on whole slide imaging devices to improve reproducibility. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou
2018-06-20
Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability associated with contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies, aimed to timely/effectively detect (un)intentional intrusion events. However, these available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While being simple in implementation, this assumption may do not conform to the fact that the nodal contamination probability may be significantly regionally varied owing to variations in population density and user properties. Furthermore, the low computational efficiency is another important factor that has seriously hampered the practical applications of the currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method to explicitly account for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than the traditional method. This paper provides an alternative method to identify optimal WQSP solutions for the WDS, and also builds knowledge regarding the impacts of different CPFs on sensor deployments. Copyright © 2018 Elsevier Ltd. All rights reserved.
Development of Multistep and Degenerate Variational Integrators for Applications in Plasma Physics
NASA Astrophysics Data System (ADS)
Ellison, Charles Leland
Geometric integrators yield high-fidelity numerical results by retaining conservation laws in the time advance. A particularly powerful class of geometric integrators is symplectic integrators, which are widely used in orbital mechanics and accelerator physics. An important application presently lacking symplectic integrators is the guiding center motion of magnetized particles represented by non-canonical coordinates. Because guiding center trajectories are foundational to many simulations of magnetically confined plasmas, geometric guiding center algorithms have high potential for impact. The motivation is compounded by the need to simulate long-pulse fusion devices, including ITER, and opportunities in high performance computing, including the use of petascale resources and beyond. This dissertation uses a systematic procedure for constructing geometric integrators --- known as variational integration --- to deliver new algorithms for guiding center trajectories and other plasma-relevant dynamical systems. These variational integrators are non-trivial because the Lagrangians of interest are degenerate - the Euler-Lagrange equations are first-order differential equations and the Legendre transform is not invertible. The first contribution of this dissertation is that variational integrators for degenerate Lagrangian systems are typically multistep methods. Multistep methods admit parasitic mode instabilities that can ruin the numerical results. These instabilities motivate the second major contribution: degenerate variational integrators. By replicating the degeneracy of the continuous system, degenerate variational integrators avoid parasitic mode instabilities. The new methods are therefore robust geometric integrators for degenerate Lagrangian systems. These developments in variational integration theory culminate in one-step degenerate variational integrators for non-canonical magnetic field line flow and guiding center dynamics. The guiding center integrator assumes coordinates such that one component of the magnetic field is zero; it is shown how to construct such coordinates for nested magnetic surface configurations. Additionally, collisional drag effects are incorporated in the variational guiding center algorithm for the first time, allowing simulation of energetic particle thermalization. Advantages relative to existing canonical-symplectic and non-geometric algorithms are numerically demonstrated. All algorithms have been implemented as part of a modern, parallel, ODE-solving library, suitable for use in high-performance simulations.
Jahan, K Luhluh; Boda, A; Shankar, I V; Raju, Ch Narasimha; Chatterjee, Ashok
2018-03-22
The problem of an exciton trapped in a Gaussian quantum dot (QD) of GaAs is studied in both two and three dimensions in the presence of an external magnetic field using the Ritz variational method, the 1/N expansion method and the shifted 1/N expansion method. The ground state energy and the binding energy of the exciton are obtained as a function of the quantum dot size, confinement strength and the magnetic field and compared with those available in the literature. While the variational method gives the upper bound to the ground state energy, the 1/N expansion method gives the lower bound. The results obtained from the shifted 1/N expansion method are shown to match very well with those obtained from the exact diagonalization technique. The variation of the exciton size and the oscillator strength of the exciton are also studied as a function of the size of the quantum dot. The excited states of the exciton are computed using the shifted 1/N expansion method and it is suggested that a given number of stable excitonic bound states can be realized in a quantum dot by tuning the quantum dot parameters. This can open up the possibility of having quantum dot lasers using excitonic states.
An intercomparison for NIRS and NYU passive thoron gas detectors at NYU.
Sorimachi, Atsuyuki; Ishikawa, Tetsuo; Tokonami, Shinji; Chittaporn, Passaporn; Harley, Naomi H
2012-04-01
An intercomparison on thoron ((220)Rn) measurement was carried out between National Institute of Radiological Sciences, Japan (NIRS), and New York University School of Medicine, USA (NYU). The measurements of (220)Rn concentration at NIRS and NYU were performed by using the scintillation cell method and the two-filter method, respectively, as the standard measurement method. Three types of alpha track detectors based on passive radon ((222)Rn)-(220)Rn discriminative measurement technique were used: Raduet and Radopot detectors were used at NIRS, and four-leaf detectors were used at NYU. In this study, the authors evaluated (220)Rn concentration variation in terms of run for exposure, measurement method, and exposure chamber. The detectors were exposed to (220)Rn gas with approximately 15 kBq m(-3) during the period from 0.75 to 3 d. As a result, the variation of each measurement method among these exposure runs was comparable to or less than that for the two-filter method. Agreement between the standard measurement methods of NIRS and NYU was observed to be about 10%, as is the case with the passive detectors. The Raduet detector showed a large variation in the detection response between the NIRS and NYU chambers, which could be related to different traceability.
The Uncertainty of Long-term Linear Trend in Global SST Due to Internal Variation
NASA Astrophysics Data System (ADS)
Lian, Tao
2016-04-01
In most parts of the global ocean, the magnitude of the long-term linear trend in sea surface temperature (SST) is much smaller than the amplitude of local multi-scale internal variation. One can thus use the record of a specified period to arbitrarily determine the value and the sign of the long-term linear trend in regional SST, and further leading to controversial conclusions on how global SST responds to global warming in the recent history. Analyzing the linear trend coefficient estimated by the ordinary least-square method indicates that the linear trend consists of two parts: One related to the long-term change, and the other related to the multi-scale internal variation. The sign of the long-term change can be correctly reproduced only when the magnitude of the linear trend coefficient is greater than a theoretical threshold which scales the influence from the multi-scale internal variation. Otherwise, the sign of the linear trend coefficient will depend on the phase of the internal variation, or in the other words, the period being used. An improved least-square method is then proposed to reduce the theoretical threshold. When apply the new method to a global SST reconstruction from 1881 to 2013, we find that in a large part of Pacific, the southern Indian Ocean and North Atlantic, the influence from the multi-scale internal variation on the sign of the linear trend coefficient can-not be excluded. Therefore, the resulting warming or/and cooling linear trends in these regions can-not be fully assigned to global warming.
Structural Organization and Strain Variation in the Genome of Varicella Zoster Virus
1984-10-23
Zoster 6 Growth of VZV in tissue culture 9 Structure and proteins of VZV 15 Structure of HSV DNA 20 Classification of herpesviruses based on DNA...structure 28 Strain variation in herpesvirus DNA 31 VZV DNA 33 Specific aims 36 II. MATERIALS AND METHODS 38 Cells and viruses 38 Isolation of virus...endonuclease fragments by colony hybridization 106 21. Selected methods of restriction endonuclease mapping .... 109 22. Identification of
İbiş, Birol
2014-01-01
This paper aims to obtain the approximate solution of time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of Riemann-Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given and the results indicate that the FVIM is of high accuracy, more efficient, and more convenient for solving time FADEs. PMID:24578662
Unconventional Hamilton-type variational principle in phase space and symplectic algorithm
NASA Astrophysics Data System (ADS)
Luo, En; Huang, Weijiang; Zhang, Hexin
2003-06-01
By a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for elastodynamics of multidegree-of-freedom system is established in this paper. It not only can fully characterize the initial-value problem of this dynamic, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm which is called a symplectic time-subdomain method is proposed. A non-difference scheme is constructed by applying Lagrange interpolation polynomial to the time subdomain. Furthermore, it is also proved that the presented symplectic algorithm is an unconditionally stable one. From the results of the two numerical examples of different types, it can be seen that the accuracy and the computational efficiency of the new method excel obviously those of widely used Wilson-θ and Newmark-β methods. Therefore, this new algorithm is a highly efficient one with better computational performance.
A Variational Method in Out-of-Equilibrium Physical Systems
Pinheiro, Mario J.
2013-01-01
We propose a new variational principle for out-of-equilibrium dynamic systems that are fundamentally based on the method of Lagrange multipliers applied to the total entropy of an ensemble of particles. However, we use the fundamental equation of thermodynamics on differential forms, considering U and S as 0-forms. We obtain a set of two first order differential equations that reveal the same formal symplectic structure shared by classical mechanics, fluid mechanics and thermodynamics. From this approach, a topological torsion current emerges of the form , where Aj and ωk denote the components of the vector potential (gravitational and/or electromagnetic) and where ω denotes the angular velocity of the accelerated frame. We derive a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems. The variational method is then applied to clarify the working mechanism of particular devices. PMID:24316718
Herbei, Radu; Kubatko, Laura
2013-03-26
Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
NASA Astrophysics Data System (ADS)
Scovazzi, Guglielmo; Wheeler, Mary F.; Mikelić, Andro; Lee, Sanghyun
2017-04-01
The miscible displacement of one fluid by another in a porous medium has received considerable attention in subsurface, environmental and petroleum engineering applications. When a fluid of higher mobility displaces another of lower mobility, unstable patterns - referred to as viscous fingering - may arise. Their physical and mathematical study has been the object of numerous investigations over the past century. The objective of this paper is to present a review of these contributions with particular emphasis on variational methods. These algorithms are tailored to real field applications thanks to their advanced features: handling of general complex geometries, robustness in the presence of rough tensor coefficients, low sensitivity to mesh orientation in advection dominated scenarios, and provable convergence with fully unstructured grids. This paper is dedicated to the memory of Dr. Jim Douglas Jr., for his seminal contributions to miscible displacement and variational numerical methods.
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
Finite-temperature time-dependent variation with multiple Davydov states
NASA Astrophysics Data System (ADS)
Wang, Lu; Fujihashi, Yuta; Chen, Lipeng; Zhao, Yang
2017-03-01
The Dirac-Frenkel time-dependent variational approach with Davydov Ansätze is a sophisticated, yet efficient technique to obtain an accurate solution to many-body Schrödinger equations for energy and charge transfer dynamics in molecular aggregates and light-harvesting complexes. We extend this variational approach to finite temperature dynamics of the spin-boson model by adopting a Monte Carlo importance sampling method. In order to demonstrate the applicability of this approach, we compare calculated real-time quantum dynamics of the spin-boson model with that from numerically exact iterative quasiadiabatic propagator path integral (QUAPI) technique. The comparison shows that our variational approach with the single Davydov Ansätze is in excellent agreement with the QUAPI method at high temperatures, while the two differ at low temperatures. Accuracy in dynamics calculations employing a multitude of Davydov trial states is found to improve substantially over the single Davydov Ansatz, especially at low temperatures. At a moderate computational cost, our variational approach with the multiple Davydov Ansatz is shown to provide accurate spin-boson dynamics over a wide range of temperatures and bath spectral densities.
Direct estimation of tidally induced Earth rotation variations observed by VLBI
NASA Astrophysics Data System (ADS)
Englich, S.; Heinkelmann, R.; BOHM, J.; Schuh, H.
2009-09-01
The subject of our study is the investigation of periodical variations induced by solid Earth tides and ocean tides in Earth rotation parameters (ERP: polar motion, UT1)observed by VLBI. There are two strategies to determine the amplitudes and phases of Earth rotation variations from observations of space geodetic techniques. The common way is to derive time series of Earth rotation parameters first and to estimate amplitudes and phases in a second step. Results obtained by this means were shown in previous studies for zonal tidal variations (Englich et al.; 2008a) and variations caused by ocean tides (Englich et al.; 2008b). The alternative method is to estimate the tidal parameters directly within the VLBI data analysis procedure together with other parameters such as station coordinates, tropospheric delays, clocks etc. The purpose of this work was the application of this direct method to a combined VLBI data analysis using the software packages OCCAM (Version 6.1, Gauss-Markov-Model) and DOGSCS (Gerstl et al.; 2001). The theoretical basis and the preparatory steps for the implementation of this approach are presented here.
Quick, J.C.; Brill, T.
2002-01-01
We observe a 1.3 kg C/net GJ variation of carbon emissions due to inertinite abundance in some commercially available bituminous coal. An additional 0.9 kg C/net GJ variation of carbon emissions is expected due to the extent of coalification through the bituminous rank stages. Each percentage of sulfur in bituminous coal reduces carbon emissions by about 0.08 kg C/net GJ. Other factors, such as mineral content, liptinite abundance and individual macerals, also influence carbon emissions, but their quantitative effect is less certain. The large range of carbon emissions within the bituminous rank class suggests that rank- specific carbon emission factors are provincial rather than global. Although carbon emission factors that better account for this provincial variation might be calculated, we show that the data used for this calculation may vary according to the methods used to sample and analyze coal. Provincial variation of carbon emissions and the use of different coal sampling and analytical methods complicate the verification of national greenhouse gas inventories. Published by Elsevier Science B.V.
Nordey, Thibault; Léchaudel, Mathieu; Génard, Michel; Joas, Jacques
2014-11-01
Managing fruit quality is complex because many different attributes have to be taken into account, which are themselves subjected to spatial and temporal variations. Heterogeneous fruit quality has been assumed to be partly related to temperature and maturity gradients within the fruit. To test this assumption, we measured the spatial variability of certain mango fruit quality traits: colour of the peel and of the flesh, and sourness and sweetness, at different stages of fruit maturity using destructive methods as well as vis-NIR reflectance. The spatial variability of mango quality traits was compared to internal variations in thermal time, simulated by a physical model, and to internal variations in maturity, using ethylene content as an indicator. All the fruit quality indicators analysed showed significant spatial and temporal variations, regardless of the measurement method used. The heterogeneity of internal fruit quality traits was not correlated with the marked internal temperature gradient we modelled. However, variations in ethylene content revealed a strong internal maturity gradient which was correlated with the spatial variations in measured mango quality traits. Nonetheless, alone, the internal maturity gradient did not explain the variability of fruit quality traits, suggesting that other factors, such as gas, abscisic acid and water gradients, are also involved. Copyright © 2014 Elsevier GmbH. All rights reserved.
Simulated linear test applied to quantitative proteomics.
Pham, T V; Jimenez, C R
2016-09-01
Omics studies aim to find significant changes due to biological or functional perturbation. However, gene and protein expression profiling experiments contain inherent technical variation. In discovery proteomics studies where the number of samples is typically small, technical variation plays an important role because it contributes considerably to the observed variation. Previous methods place both technical and biological variations in tightly integrated mathematical models that are difficult to adapt for different technological platforms. Our aim is to derive a statistical framework that allows the inclusion of a wide range of technical variability. We introduce a new method called the simulated linear test, or the s-test, that is easy to implement and easy to adapt for different models of technical variation. It generates virtual data points from the observed values according to a pre-defined technical distribution and subsequently employs linear modeling for significance analysis. We demonstrate the flexibility of the proposed approach by deriving a new significance test for quantitative discovery proteomics for which missing values have been a major issue for traditional methods such as the t-test. We evaluate the result on two label-free (phospho) proteomics datasets based on ion-intensity quantitation. Available at http://www.oncoproteomics.nl/software/stest.html : t.pham@vumc.nl. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Optimal filtering and Bayesian detection for friction-based diagnostics in machines.
Ray, L R; Townsend, J R; Ramasubramanian, A
2001-01-01
Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.
Path-space variational inference for non-equilibrium coarse-grained systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics; Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr
In this paper we discuss information-theoretic tools for obtaining optimized coarse-grained molecular models for both equilibrium and non-equilibrium molecular simulations. The latter are ubiquitous in physicochemical and biological applications, where they are typically associated with coupling mechanisms, multi-physics and/or boundary conditions. In general the non-equilibrium steady states are not known explicitly as they do not necessarily have a Gibbs structure. The presented approach can compare microscopic behavior of molecular systems to parametric and non-parametric coarse-grained models using the relative entropy between distributions on the path space and setting up a corresponding path-space variational inference problem. The methods can become entirelymore » data-driven when the microscopic dynamics are replaced with corresponding correlated data in the form of time series. Furthermore, we present connections and generalizations of force matching methods in coarse-graining with path-space information methods. We demonstrate the enhanced transferability of information-based parameterizations to different observables, at a specific thermodynamic point, due to information inequalities. We discuss methodological connections between information-based coarse-graining of molecular systems and variational inference methods primarily developed in the machine learning community. However, we note that the work presented here addresses variational inference for correlated time series due to the focus on dynamics. The applicability of the proposed methods is demonstrated on high-dimensional stochastic processes given by overdamped and driven Langevin dynamics of interacting particles.« less
Moving object detection via low-rank total variation regularization
NASA Astrophysics Data System (ADS)
Wang, Pengcheng; Chen, Qian; Shao, Na
2016-09-01
Moving object detection is a challenging task in video surveillance. Recently proposed Robust Principal Component Analysis (RPCA) can recover the outlier patterns from the low-rank data under some mild conditions. However, the l-penalty in RPCA doesn't work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly related, which can be described using a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g. periodic and random perturbation. The foreground objects in the sequence are usually sparsely distributed and drifting continuously, and can be treated as group outliers from the highly-related background scenes. Instead of l-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to be the exact moving objects. The TV-penalty is superior to the l-penalty especially when the outlier is in the majority for some pixels, and our method can estimate the outlier explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and can be effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Werner, F.; Cho, H.-M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, K.
2016-06-01
The bispectral method retrieves cloud optical thickness (τ) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near-infrared (VIS/NIR) band and the other in a shortwave infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring subpixel variations of cloud reflectances can lead to a significant bias in the retrieved τ and re. In the literature, the retrievals of τ and re are often assumed to be independent and considered separately when investigating the impact of subpixel cloud reflectance variations on the bispectral method. As a result, the impact on τ is contributed only by the subpixel variation of VIS/NIR band reflectance and the impact on re only by the subpixel variation of SWIR band reflectance. In our new framework, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of subpixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the τ and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how subpixel cloud reflectance variations impact the τ and re retrievals based on the bispectral method. In particular, our framework provides a mathematical explanation of how the subpixel variation in VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval. We test our framework using synthetic cloud fields from a large-eddy simulation and real observations from Moderate Resolution Imaging Spectroradiometer. The predicted results based on our framework agree very well with the numerical simulations. Our framework can be used to estimate the retrieval uncertainty from subpixel reflectance variations in operational satellite cloud products and to help understand the differences in τ and re retrievals between two instruments.
NASA Technical Reports Server (NTRS)
Zhang, Z.; Werner, F.; Cho, H. -M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry
2016-01-01
The bi-spectral method retrieves cloud optical thickness and cloud droplet effective radius simultaneously from a pair of cloud reflectance observations, one in a visible or near-infrared (VISNIR) band and the other in a shortwave infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved and re. In the literature, the retrievals of and re are often assumed to be independent and considered separately when investigating the impact of sub-pixel cloud reflectance variations on the bi-spectral method. As a result, the impact on is contributed only by the sub-pixel variation of VISNIR band reflectance and the impact on re only by the sub-pixel variation of SWIR band reflectance. In our new framework, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VISNIR and SWIR cloud reflectances and their covariance on the and re retrievals. This framework takes into account the fact that the retrievals are determined by both VISNIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in VISNIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval. We test our framework using synthetic cloud fields from a large-eddy simulation and real observations from Moderate Resolution Imaging Spectroradiometer. The predicted results based on our framework agree very well with the numerical simulations. Our framework can be used to estimate the retrieval uncertainty from sub-pixel reflectance variations in operational satellite cloud products and to help understand the differences in and re retrievals between two instruments.
Santamaria-Fernandez, Rebeca; Giner Martínez-Sierra, Justo; Marchante-Gayón, J M; García-Alonso, J Ignacio; Hearn, Ruth
2009-05-01
A new method for the measurement of longitudinal variations of sulfur isotope amount ratios in single hair strands using a laser ablation system coupled to a multicollector inductively coupled plasma mass spectrometer (LA-MC-ICP-MS) is reported here for the first time. Ablation parameters have been optimized for the measurement of sulfur isotope ratios in scalp human hair strands of 80-120-microm thickness and different washing procedures have been evaluated. The repeatability of the method has been tested and the ability to measure sulfur isotopic variations in 1,000-microm-long hair segments has been evaluated. A horse hair sample previously characterized for carbon and nitrogen isotope ratios in an interlaboratory study has been characterized by LA-MC-ICP-MS to be used as an in-house standard for the bracketing of human hair strands. (34)S/(32)S isotope amount ratios have been measured and corrected for instrumental mass bias adopting the external standardization approach using National Institute of Standards and Technology (NIST) RM8553 and full uncertainty budgets have been calculated using the Kragten approach. Results are reported as both (34)S/(32)S isotope amount ratios and deltaS(V-CDT) values (sulfur isotopic differences relative to a reference sample expressed in the Vienna Canyon Diablo Troilite (V-CDT) scale) calculated using NIST RM8553, NIST RM8554, and NIST RM8556 to anchor results to the V-CDT scale. The main advantage of the new method versus conventional gas source isotope ratio mass spectrometry measurements is that longitudinal variations in sulfur isotope amount ratios can be resolved. Proof of concept is shown with human scalp hair strands from three individuals, two UK residents and one traveler (long periods of time abroad). The method enables monitoring of longitudinal isotope ratio variations in single hair strands. Absolute ratios are reported and delta(34)S(V-CDT) values are plotted for comparison. Slight variations of <1.2 per thousand were detected in the hair strands from UK residents whereas the traveler presented a variation of >5 per thousand. Thus, the measurement of sulfur isotopic variations in hair samples has potential to be an indicator of geographical origin and recent movements and could be used in combination with isotope ratio measurements in water/foodstuffs from different geographical locations to provide important information in nutritional and geographical studies.
NASA Technical Reports Server (NTRS)
Zhang, Z.; Werner, F.; Cho, H.-M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, K.
2016-01-01
The bispectral method retrieves cloud optical thickness (t) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near-infrared (VIS/NIR) band and the other in a shortwave infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring subpixel variations of cloud reflectances can lead to a significant bias in the retrieved t and re. In the literature, the retrievals of t and re are often assumed to be independent and considered separately when investigating the impact of subpixel cloud reflectance variations on the bispectral method. As a result, the impact on t is contributed only by the subpixel variation of VIS/NIR band reflectance and the impact on re only by the subpixel variation of SWIR band reflectance. In our new framework, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of subpixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the t and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how subpixel cloud reflectance variations impact the t and re retrievals based on the bispectral method. In particular, our framework provides a mathematical explanation of how the subpixel variation in VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval. We test our framework using synthetic cloud fields from a large-eddy simulation and real observations from Moderate Resolution Imaging Spectroradiometer. The predicted results based on our framework agree very well with the numerical simulations. Our framework can be used to estimate the retrieval uncertainty from subpixel reflectance variations in operational satellite cloud products and to help understand the differences in t and re retrievals between two instruments.
NASA Technical Reports Server (NTRS)
Shiau, Jyh-Jen; Wahba, Grace; Johnson, Donald R.
1986-01-01
A new method, based on partial spline models, is developed for including specified discontinuities in otherwise smooth two- and three-dimensional objective analyses. The method is appropriate for including tropopause height information in two- and three-dimensinal temperature analyses, using the O'Sullivan-Wahba physical variational method for analysis of satellite radiance data, and may in principle be used in a combined variational analysis of observed, forecast, and climate information. A numerical method for its implementation is described and a prototype two-dimensional analysis based on simulated radiosonde and tropopause height data is shown. The method may also be appropriate for other geophysical problems, such as modeling the ocean thermocline, fronts, discontinuities, etc.
A dynamic unilateral contact problem with adhesion and friction in viscoelasticity
NASA Astrophysics Data System (ADS)
Cocou, Marius; Schryve, Mathieu; Raous, Michel
2010-08-01
The aim of this paper is to study an interaction law coupling recoverable adhesion, friction and unilateral contact between two viscoelastic bodies of Kelvin-Voigt type. A dynamic contact problem with adhesion and nonlocal friction is considered and its variational formulation is written as the coupling between an implicit variational inequality and a parabolic variational inequality describing the evolution of the intensity of adhesion. The existence and approximation of variational solutions are analysed, based on a penalty method, some abstract results and compactness properties. Finally, some numerical examples are presented.
NASA Astrophysics Data System (ADS)
Pumpanen, Jukka; Shurpali, Narasinha; Kulmala, Liisa; Kolari, Pasi; Heinonsalo, Jussi
2017-04-01
Soil CO2 efflux forms a substantial part of the ecosystem carbon balance, and it can contribute more than half of the annual ecosystem respiration. Recently assimilated carbon which has been fixed in photosynthesis during the previous days plays an important role in soil CO2 efflux, and its contribution is seasonally variable. Moreover, the recently assimilated C has been shown to stimulate the decomposition of recalcitrant C in soil and increase the mineralization of nitrogen, the most important macronutrient limiting gross primary productivity (GPP) in boreal ecosystems. Podzolic soils, typical in boreal zone, have distinctive layers with different biological and chemical properties. The biological activity in different soil layers has large seasonal variation due to vertical gradient in temperature, soil organic matter and root biomass. Thus, the source of CO2 and its components have a vertical gradient which is seasonally variable. The contribution of recently assimilated C and its seasonal as well as spatial variation in soil are difficult to assess without disturbing the system. The most common method of partitioning soil respiration into its components is trenching which entails the roots being cut or girdling where the flow of carbohydrates from the canopy to roots has been isolated by cutting of the phloem. Other methods for determining the contribution of autotrophic (Ra) and heterotrophic (Rh) respiration components in soil CO2 efflux are pulse labelling with 13CO2 or 14CO2 or the natural abundance of 13C and/or 14C isotopes. Also differences in seasonal and short-term temperature response of soil respiration have been used to separate Ra and Rh. We compared the seasonal variation in Ra and Rh using the trenching method and differences between seasonal and short-term temperature responses of soil respiration. I addition, we estimated the vertical variation in soil biological activity using soil CO2 concentration and the natural abundance of 13C and 12C in CO2 in different soil layers in a boreal forest in Southern Finland and compared them to seasonal variation in GPP. Our results show that Ra followed a seasonal variation in GPP with a time lag of about 2 weeks. The contribution of Ra on soil CO2 efflux was largest in July and August. There was also a distinct seasonal pattern in the vertical distribution of soil CO2 concentration and the abundances of natural isotopes 13C/12C in soil CO2 which reflected the changes in biological activity in the soil profile. Our results indicate that all methods were able to distinguish seasonal variability in Ra and Rh. The soil CO2 gradient method was able to reproduce the temporal variation in soil CO2 effluxes relatively well when compared to those measured with chambers. However, variation in soil moisture also causes significant variation in soil air CO2 concentrations which interferes with the variation resulted from soil temperatures and belowground allocation of carbon from recent photosynthate. Also, the assumptions used in gradient method calculations, such as soil porosity and transport distances have to be taken into account when interpreting the results.
Most genetic risk for autism resides with common variation.
Gaugler, Trent; Klei, Lambertus; Sanders, Stephan J; Bodea, Corneliu A; Goldberg, Arthur P; Lee, Ann B; Mahajan, Milind; Manaa, Dina; Pawitan, Yudi; Reichert, Jennifer; Ripke, Stephan; Sandin, Sven; Sklar, Pamela; Svantesson, Oscar; Reichenberg, Abraham; Hultman, Christina M; Devlin, Bernie; Roeder, Kathryn; Buxbaum, Joseph D
2014-08-01
A key component of genetic architecture is the allelic spectrum influencing trait variability. For autism spectrum disorder (herein termed autism), the nature of the allelic spectrum is uncertain. Individual risk-associated genes have been identified from rare variation, especially de novo mutations. From this evidence, one might conclude that rare variation dominates the allelic spectrum in autism, yet recent studies show that common variation, individually of small effect, has substantial impact en masse. At issue is how much of an impact relative to rare variation this common variation has. Using a unique epidemiological sample from Sweden, new methods that distinguish total narrow-sense heritability from that due to common variation and synthesis of results from other studies, we reach several conclusions about autism's genetic architecture: its narrow-sense heritability is ∼52.4%, with most due to common variation, and rare de novo mutations contribute substantially to individual liability, yet their contribution to variance in liability, 2.6%, is modest compared to that for heritable variation.
A total variation diminishing finite difference algorithm for sonic boom propagation models
NASA Technical Reports Server (NTRS)
Sparrow, Victor W.
1993-01-01
It is difficult to accurately model the rise phases of sonic boom waveforms with traditional finite difference algorithms because of finite difference phase dispersion. This paper introduces the concept of a total variation diminishing (TVD) finite difference method as a tool for accurately modeling the rise phases of sonic booms. A standard second order finite difference algorithm and its TVD modified counterpart are both applied to the one-way propagation of a square pulse. The TVD method clearly outperforms the non-TVD method, showing great potential as a new computational tool in the analysis of sonic boom propagation.
Elimination of RF inhomogeneity effects in segmentation.
Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay
2007-01-01
There are various methods proposed for the segmentation and analysis of MR images. However the efficiency of these techniques is effected by various artifacts that occur in the imaging system. One of the most encountered problems is the intensity variation across an image. To overcome this problem different methods are used. In this paper we propose a method for the elimination of intensity artifacts in segmentation of MRI images. Inter imager variations are also minimized to produce the same tissue segmentation for the same patient. A well-known multivariate classification algorithm, maximum likelihood is employed to illustrate the enhancement in segmentation.
Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng
2015-08-15
This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. The fluctuation of cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature could be suppressed by setting the optical depth close to one. In our experiment, we adjustmore » the probe frequency to obtain various optical depths and then measure the variation of scale factor with respect to the corresponding cell temperature changes. Our experimental results show a good agreement with our theoretical analysis. Under our experimental condition, the error has been reduced significantly compared with those when the probe wavelength is adjusted to maximize the probe signal. The cost of this method is the reduction of the scale factor of the magnetometer. However, according to our analysis, it only has minor effect on the sensitivity under proper operating parameters.« less
NASA Astrophysics Data System (ADS)
Montcel, Bruno; Chabrier, Renée; Poulet, Patrick
2006-12-01
Time-resolved diffuse optical methods have been applied to detect hemodynamic changes induced by cerebral activity. We describe a near infrared spectroscopic (NIRS) reconstruction free method which allows retrieving depth-related information on absorption variations. Variations in the absorption coefficient of tissues have been computed over the duration of the whole experiment, but also over each temporal step of the time-resolved optical signal, using the microscopic Beer-Lambert law.Finite element simulations show that time-resolved computation of the absorption difference as a function of the propagation time of detected photons is sensitive to the depth profile of optical absorption variations. Differences in deoxyhemoglobin and oxyhemoglobin concentrations can also be calculated from multi-wavelength measurements. Experimental validations of the simulated results have been obtained for resin phantoms. They confirm that time-resolved computation of the absorption differences exhibited completely different behaviours, depending on whether these variations occurred deeply or superficially. The hemodynamic response to a short finger tapping stimulus was measured over the motor cortex and compared to experiments involving Valsalva manoeuvres. Functional maps were also calculated for the hemodynamic response induced by finger tapping movements.
Montcel, Bruno; Chabrier, Renée; Poulet, Patrick
2006-12-11
Time-resolved diffuse optical methods have been applied to detect hemodynamic changes induced by cerebral activity. We describe a near infrared spectroscopic (NIRS) reconstruction free method which allows retrieving depth-related information on absorption variations. Variations in the absorption coefficient of tissues have been computed over the duration of the whole experiment, but also over each temporal step of the time-resolved optical signal, using the microscopic Beer-Lambert law.Finite element simulations show that time-resolved computation of the absorption difference as a function of the propagation time of detected photons is sensitive to the depth profile of optical absorption variations. Differences in deoxyhemoglobin and oxyhemoglobin concentrations can also be calculated from multi-wavelength measurements. Experimental validations of the simulated results have been obtained for resin phantoms. They confirm that time-resolved computation of the absorption differences exhibited completely different behaviours, depending on whether these variations occurred deeply or superficially. The hemodynamic response to a short finger tapping stimulus was measured over the motor cortex and compared to experiments involving Valsalva manoeuvres. Functional maps were also calculated for the hemodynamic response induced by finger tapping movements.
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Wiscombe, W. J.
1993-01-01
A method for detecting cirrus clouds in terms of brightness temperature differences between narrow bands at 8, 11, and 12 mu m has been proposed by Ackerman et al. (1990). In this method, the variation of emissivity with wavelength for different surface targets was not taken into consideration. Based on state-of-the-art laboratory measurements of reflectance spectra of terrestrial materials by Salisbury and D'Aria (1992), we have found that the brightness temperature differences between the 8 and 11 mu m bands for soils, rocks and minerals, and dry vegetation can vary between approximately -8 K and +8 K due solely to surface emissivity variations. We conclude that although the method of Ackerman et al. is useful for detecting cirrus clouds over areas covered by green vegetation, water, and ice, it is less effective for detecting cirrus clouds over areas covered by bare soils, rocks and minerals, and dry vegetation. In addition, we recommend that in future the variation of surface emissivity with wavelength should be taken into account in algorithms for retrieving surface temperatures and low-level atmospheric temperature and water vapor profiles.
Quantification of intensity variations in functional MR images using rotated principal components
NASA Astrophysics Data System (ADS)
Backfrieder, W.; Baumgartner, R.; Sámal, M.; Moser, E.; Bergmann, H.
1996-08-01
In functional MRI (fMRI), the changes in cerebral haemodynamics related to stimulated neural brain activity are measured using standard clinical MR equipment. Small intensity variations in fMRI data have to be detected and distinguished from non-neural effects by careful image analysis. Based on multivariate statistics we describe an algorithm involving oblique rotation of the most significant principal components for an estimation of the temporal and spatial distribution of the stimulated neural activity over the whole image matrix. This algorithm takes advantage of strong local signal variations. A mathematical phantom was designed to generate simulated data for the evaluation of the method. In simulation experiments, the potential of the method to quantify small intensity changes, especially when processing data sets containing multiple sources of signal variations, was demonstrated. In vivo fMRI data collected in both visual and motor stimulation experiments were analysed, showing a proper location of the activated cortical regions within well known neural centres and an accurate extraction of the activation time profile. The suggested method yields accurate absolute quantification of in vivo brain activity without the need of extensive prior knowledge and user interaction.
Variational Methods For Sloshing Problems With Surface Tension
NASA Astrophysics Data System (ADS)
Tan, Chee Han; Carlson, Max; Hohenegger, Christel; Osting, Braxton
2016-11-01
We consider the sloshing problem for an incompressible, inviscid, irrotational fluid in a container, including effects due to surface tension on the free surface. We restrict ourselves to a constant contact angle and we seek time-harmonic solutions of the linearized problem, which describes the time-evolution of the fluid due to a small initial disturbance of the surface at rest. As opposed to the zero surface tension case, where the problem reduces to a partial differential equation for the velocity potential, we obtain a coupled system for the velocity potential and the free surface displacement. We derive a new variational formulation of the coupled problem and establish the existence of solutions using the direct method from the Calculus of Variations. In the limit of zero surface tension, we recover the variational formulation of the classical Steklov eigenvalue problem, as derived by B. A. Troesch. For the particular case of an axially symmetric container, we propose a finite element numerical method for computing the sloshing modes of the coupled system. The scheme is implemented in FEniCS and we obtain a qualitative description of the effect of surface tension on the sloshing modes.
Analysis of variability in additive manufactured open cell porous structures.
Evans, Sam; Jones, Eric; Fox, Pete; Sutcliffe, Chris
2017-06-01
In this article, a novel method of analysing build consistency of additively manufactured open cell porous structures is presented. Conventionally, methods such as micro computed tomography or scanning electron microscopy imaging have been applied to the measurement of geometric properties of porous material; however, high costs and low speeds make them unsuitable for analysing high volumes of components. Recent advances in the image-based analysis of open cell structures have opened up the possibility of qualifying variation in manufacturing of porous material. Here, a photogrammetric method of measurement, employing image analysis to extract values for geometric properties, is used to investigate the variation between identically designed porous samples measuring changes in material thickness and pore size, both intra- and inter-build. Following the measurement of 125 samples, intra-build material thickness showed variation of ±12%, and pore size ±4% of the mean measured values across five builds. Inter-build material thickness and pore size showed mean ranges higher than those of intra-build, ±16% and ±6% of the mean material thickness and pore size, respectively. Acquired measurements created baseline variation values and demonstrated techniques suitable for tracking build deviation and inspecting additively manufactured porous structures to indicate unwanted process fluctuations.
NASA Astrophysics Data System (ADS)
Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming
2017-07-01
Over the years, the X-ray computed tomography (CT) has been successfully used in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the image reconstructed would be polluted by severe metal artifacts, which affect the doctor's diagnosis of disease. In this work, we proposed a dynamic re-weighted total variation (DRWTV) technique combined with the statistic iterative reconstruction (SIR) method to reduce the artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects the tissue details better than RWTV. Besides, the DRWTV can suppress the artifacts and noise, and the SIR convergence speed is also accelerated. The performance of the algorithm is tested on both simulated phantom dataset and clinical dataset, which are the teeth phantom with two metal implants and the skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, which are SIR and SIR constrained by RWTV regulation (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.
Salimon, Jumat; Omar, Talal A.; Salih, Nadia
2014-01-01
Two different procedures for the methylation of fatty acids (FAs) and trans fatty acids (TFAs) in food fats were compared using gas chromatography (GC-FID). The base-catalyzed followed by an acid-catalyzed method (KOCH3/HCl) and the base-catalyzed followed by (trimethylsilyl)diazomethane (TMS–DM) method were used to prepare FA methyl esters (FAMEs) from lipids extracted from food products. In general, both methods were suitable for the determination of cis/trans FAs. The correlation coefficients (r) between the methods were relatively small (ranging from 0.86 to 0.99) and had a high level of agreement for the most abundant FAs. The significant differences (P = 0.05) can be observed for unsaturated FAs (UFAs), specifically for TFAs. The results from the KOCH3/HCl method showed the lowest recovery values (%R) and higher variation (from 84% to 112%), especially for UFAs. The TMS-DM method had higher R values, less variation (from 90% to 106%), and more balance between variation and %RSD values in intraday and interday measurements (less than 4% and 6%, resp.) than the KOCH3/HCl method, except for C12:0, C14:0, and C18:0. Nevertheless, the KOCH3/HCl method required shorter time and was less expensive than the TMS-DM method which is more convenient for an accurate and thorough analysis of rich cis/trans UFA samples. PMID:24719581
Salimon, Jumat; Omar, Talal A; Salih, Nadia
2014-01-01
Two different procedures for the methylation of fatty acids (FAs) and trans fatty acids (TFAs) in food fats were compared using gas chromatography (GC-FID). The base-catalyzed followed by an acid-catalyzed method (KOCH3/HCl) and the base-catalyzed followed by (trimethylsilyl)diazomethane (TMS-DM) method were used to prepare FA methyl esters (FAMEs) from lipids extracted from food products. In general, both methods were suitable for the determination of cis/trans FAs. The correlation coefficients (r) between the methods were relatively small (ranging from 0.86 to 0.99) and had a high level of agreement for the most abundant FAs. The significant differences (P = 0.05) can be observed for unsaturated FAs (UFAs), specifically for TFAs. The results from the KOCH3/HCl method showed the lowest recovery values (%R) and higher variation (from 84% to 112%), especially for UFAs. The TMS-DM method had higher R values, less variation (from 90% to 106%), and more balance between variation and %RSD values in intraday and interday measurements (less than 4% and 6%, resp.) than the KOCH3/HCl method, except for C12:0, C14:0, and C18:0. Nevertheless, the KOCH3/HCl method required shorter time and was less expensive than the TMS-DM method which is more convenient for an accurate and thorough analysis of rich cis/trans UFA samples.
Computational methods to predict railcar response to track cross-level variations
DOT National Transportation Integrated Search
1976-09-01
The rocking response of railroad freight cars to track cross-level variations is studied using: (1) a reduced complexity digital simulation model, and (2) a quasi-linear describing function analysis. The reduced complexity digital simulation model em...
Analysis of Local Variations in Free Field Seismic Ground Motion.
1981-01-01
analysis) can conveniently account for material damping through the introduction of complex moduli into the equations of motion. This method can...determined, and the total response is obtained by superposition. This technique, however, can not properly account for the spatial variation of damping...2.9. Most available data only consider the variation of shear modulus and damping ratio with shear strain amplitude. In principle , two moduli and two
A Variational Assimilation Method for Satellite and Conventional Data: Model 2 (version 1)
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.
1991-01-01
The Model II variational data assimilation model is the second of the four variational models designed to blend diverse meteorological data into a dynamically constrained data set. Model II differs from Model I in that it includes the thermodynamic equation as the fifth dynamical constraint. Thus, Model II includes all five of the primative equations that govern atmospheric flow for a dry atmosphere.
Vogl, Claus; Das, Aparup; Beaumont, Mark; Mohanty, Sujata; Stephan, Wolfgang
2003-11-01
Population subdivision complicates analysis of molecular variation. Even if neutrality is assumed, three evolutionary forces need to be considered: migration, mutation, and drift. Simplification can be achieved by assuming that the process of migration among and drift within subpopulations is occurring fast compared to mutation and drift in the entire population. This allows a two-step approach in the analysis: (i) analysis of population subdivision and (ii) analysis of molecular variation in the migrant pool. We model population subdivision using an infinite island model, where we allow the migration/drift parameter Theta to vary among populations. Thus, central and peripheral populations can be differentiated. For inference of Theta, we use a coalescence approach, implemented via a Markov chain Monte Carlo (MCMC) integration method that allows estimation of allele frequencies in the migrant pool. The second step of this approach (analysis of molecular variation in the migrant pool) uses the estimated allele frequencies in the migrant pool for the study of molecular variation. We apply this method to a Drosophila ananassae sequence data set. We find little indication of isolation by distance, but large differences in the migration parameter among populations. The population as a whole seems to be expanding. A population from Bogor (Java, Indonesia) shows the highest variation and seems closest to the species center.
NASA Astrophysics Data System (ADS)
Sinha, Amit Kumar; Kim, Duck Young; Ceglarek, Darek
2013-10-01
Many advantages of laser welding technology such as high speed and non-contact welding make the use of the technology more attractive in the automotive industry. Many studies have been conducted to search the optimal welding condition experimentally that ensure the joining quality of laser welding that relies both on welding system configuration and welding parameter specification. Both non-destructive and destructive techniques, for example, ultrasonic inspection and tensile test are widely used in practice for estimating the joining quality. Non-destructive techniques are attractive as a rapid quality testing method despite relatively low accuracy. In this paper, we examine the relationship between the variation of weld seam and tensile shear strength in the laser welding of galvanized steel in a lap joint configuration in order to investigate the potential of the variation of weld seam as a joining quality estimator. From the experimental analysis, we identify a trend in between maximum tensile shear strength and the variation of weld seam that clearly supports the fact that laser welded parts having larger variation in the weld seam usually have lower tensile strength. The discovered relationship leads us to conclude that the variation of weld seam can be used as an indirect non-destructive testing method for estimating the tensile strength of the welded parts.
NASA Astrophysics Data System (ADS)
Boning, Duane S.; Chung, James E.
1998-11-01
Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments including design of test structures and test masks to gather electrical or optical data, techniques for statistical decomposition and analysis of the data, and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules are guidelines for the use of "dummy fill" or "metal fill" to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the improvements to uniformity possible versus the effect of increased capacitance due to additional metal.
A variational dynamic programming approach to robot-path planning with a distance-safety criterion
NASA Technical Reports Server (NTRS)
Suh, Suk-Hwan; Shin, Kang G.
1988-01-01
An approach to robot-path planning is developed by considering both the traveling distance and the safety of the robot. A computationally-efficient algorithm is developed to find a near-optimal path with a weighted distance-safety criterion by using a variational calculus and dynamic programming (VCDP) method. The algorithm is readily applicable to any factory environment by representing the free workspace as channels. A method for deriving these channels is also proposed. Although it is developed mainly for two-dimensional problems, this method can be easily extended to a class of three-dimensional problems. Numerical examples are presented to demonstrate the utility and power of this method.
Predictive Array Design. A method for sampling combinatorial chemistry library space.
Lipkin, M J; Rose, V S; Wood, J
2002-01-01
A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a subarray for synthesis based on the experimental design method of Latin Squares. The method is appropriate for libraries with three sites of variation. Libraries with four sites of variation can be designed using the Graeco-Latin Square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to make predictions of the activity of compounds in the all combinations array if we assume each monomer has a relatively constant contribution to activity and that the activity of a compound is composed of the sum of the activities of its constitutive monomers.
NASA Astrophysics Data System (ADS)
Kaltenbacher, Barbara; Klassen, Andrej
2018-05-01
In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
Applications of Sharp Interface Method for Flow Dynamics, Scattering and Control Problems
2012-07-30
Reynolds number, Advances in Applied Mathematics and Mechanics, to appear. 17. K. Ito and K. Kunisch, Optimal Control of Parabolic Variational ...provides more precise and detailed sensitivity of the solution and describes the dynamical change due to the variation in the Reynolds number. The immersed... Inequalities , Journal de Math. Pures et Appl, 93 (2010), no. 4, 329-360. 18. K. Ito and K. Kunisch, Semi-smooth Newton Methods for Time-Optimal Control for a
Efficient genotype compression and analysis of large genetic variation datasets
Layer, Ryan M.; Kindlon, Neil; Karczewski, Konrad J.; Quinlan, Aaron R.
2015-01-01
Genotype Query Tools (GQT) is a new indexing strategy that expedites analyses of genome variation datasets in VCF format based on sample genotypes, phenotypes and relationships. GQT’s compressed genotype index minimizes decompression for analysis, and performance relative to existing methods improves with cohort size. We show substantial (up to 443 fold) performance gains over existing methods and demonstrate GQT’s utility for exploring massive datasets involving thousands to millions of genomes. PMID:26550772
NASA Astrophysics Data System (ADS)
Black, Joshua A.; Knowles, Peter J.
2018-06-01
The performance of quasi-variational coupled-cluster (QV) theory applied to the calculation of activation and reaction energies has been investigated. A statistical analysis of results obtained for six different sets of reactions has been carried out, and the results have been compared to those from standard single-reference methods. In general, the QV methods lead to increased activation energies and larger absolute reaction energies compared to those obtained with traditional coupled-cluster theory.
Optimal Collision Avoidance Trajectories for Unmanned/Remotely Piloted Aircraft
2014-12-26
projected operational tempos (OPTEMPOs)” [15]. The Oce of the Secretary of Defense (OSD) Unmanned Systems Roadmap [15] goes on to say that the airspace...methods [63]. In an indirect method, the researcher derives the first- order necessary conditions for optimality “via the calculus of variations and...region around the ownship using a variation of a superquadric. From [116], the standard equation for a superellipsoid appears as: ✓ x a1 ◆ 2 ✏ 2
Monitoring total mixed rations and feed delivery systems.
Oelberg, Thomas J; Stone, William
2014-11-01
This article is intended to give practitioners a method to evaluate total mixed ration (TMR) consistency and to give them practical solutions to improve TMR consistency that will improve cattle performance and health. Practitioners will learn how to manage the variation in moisture and nutrients that exists in haylage and corn silage piles and in bales of hay, and methods to reduce variation in the TMR mixing and delivery process. Copyright © 2014 Elsevier Inc. All rights reserved.
On characterizing population commonalities and subject variations in brain networks.
Ghanbari, Yasser; Bloy, Luke; Tunc, Birkan; Shankar, Varsha; Roberts, Timothy P L; Edgar, J Christopher; Schultz, Robert T; Verma, Ragini
2017-05-01
Brain networks based on resting state connectivity as well as inter-regional anatomical pathways obtained using diffusion imaging have provided insight into pathology and development. Such work has underscored the need for methods that can extract sub-networks that can accurately capture the connectivity patterns of the underlying population while simultaneously describing the variation of sub-networks at the subject level. We have designed a multi-layer graph clustering method that extracts clusters of nodes, called 'network hubs', which display higher levels of connectivity within the cluster than to the rest of the brain. The method determines an atlas of network hubs that describes the population, as well as weights that characterize subject-wise variation in terms of within- and between-hub connectivity. This lowers the dimensionality of brain networks, thereby providing a representation amenable to statistical analyses. The applicability of the proposed technique is demonstrated by extracting an atlas of network hubs for a population of typically developing controls (TDCs) as well as children with autism spectrum disorder (ASD), and using the structural and functional networks of a population to determine the subject-level variation of these hubs and their inter-connectivity. These hubs are then used to compare ASD and TDCs. Our method is generalizable to any population whose connectivity (structural or functional) can be captured via non-negative network graphs. Copyright © 2015 Elsevier B.V. All rights reserved.
Alarcón-Ríos, Lucía; Velo-Antón, Guillermo; Kaliontzopoulou, Antigoni
2017-04-01
The study of morphological variation among and within taxa can shed light on the evolution of phenotypic diversification. In the case of urodeles, the dorso-ventral view of the head captures most of the ontogenetic and evolutionary variation of the entire head, which is a structure with a high potential for being a target of selection due to its relevance in ecological and social functions. Here, we describe a non-invasive procedure of geometric morphometrics for exploring morphological variation in the external dorso-ventral view of urodeles' head. To explore the accuracy of the method and its potential for describing morphological patterns we applied it to two populations of Salamandra salamandra gallaica from NW Iberia. Using landmark-based geometric morphometrics, we detected differences in head shape between populations and sexes, and an allometric relationship between shape and size. We also determined that not all differences in head shape are due to size variation, suggesting intrinsic shape differences across sexes and populations. These morphological patterns had not been previously explored in S. salamandra, despite the high levels of intraspecific diversity within this species. The methodological procedure presented here allows to detect shape variation at a very fine scale, and solves the drawbacks of using cranial samples, thus increasing the possibilities of using collection specimens and alive animals for exploring dorsal head shape variation and its evolutionary and ecological implications in urodeles. J. Morphol. 278:475-485, 2017. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Effects of mass variation on structures of differentially rotating polytropic stars
NASA Astrophysics Data System (ADS)
Kumar, Sunil; Saini, Seema; Singh, Kamal Krishan
2018-07-01
A method is proposed for determining equilibrium structures and various physical parameters of differentially rotating polytropic models of stars, taking into account the effect of mass variation inside the star and on its equipotential surfaces. The law of differential rotation has been assumed to be the form of ω2(s) =b1 +b2s2 +b3s4 . The proposed method utilizes the averaging approach of Kippenhahn and Thomas and concepts of Roche-equipotential to incorporate the effects of differential rotation on the equilibrium structures of polytropic stellar models. Mathematical expressions of determining the equipotential surfaces, volume, surface area and other physical parameters are also obtained under the effects of mass variation inside the stars. Some significant conclusions are also drawn.
Quantization of Non-Lagrangian Systems
NASA Astrophysics Data System (ADS)
Kochan, Denis
A novel method for quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from the stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged to a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to the standard Feynman's approach. The inverse problem of the calculus of variation, the problem of quantization ambiguity and the quantum mechanics in the presence of friction are analyzed in detail.
Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain
2017-12-01
To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based and gadolinium concentration based on fixed and variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1% respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different and more heterogeneous than normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for comparison of inter-patient perfusion parameters. • Technical variation is higher in permeability than semiquantitative perfusion parameters.
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Chern, Jiun-Dar
2005-01-01
An atmospheric general circulation model simulation for 1948-1997 of the water budgets for the MacKenzie, Mississippi and Amazon River basins is presented. In addition to the water budget, we include passive tracers to identify the geographic sources of water for the basins, and the analysis focuses on the mechanisms contributing to precipitation recycling in each basin. While each basin s precipitation recycling has a strong dependency on evaporation during the mean annual cycle, the interannual variability of the recycling shows important relationships with the atmospheric circulation. The MacKenzie River basin has only a weak interannual dependency on evaporation, where the variations in zonal moisture transport from the Pacific Ocean can affect the basin water cycle. On the other hand, the Mississippi River basin has strong interannual dependencies on evaporation. While the precipitation recycling weakens with increased low level jet intensity, the evaporation variations exert stronger influence in providing water vapor for convective precipitation at the convective cloud base. High precipitation recycling is also found to be partly connected to warm SSTs in the tropical Pacific Ocean. The Amazon River basin evaporation exhibits small interannual variations, so that the interannual variations of precipitation recycling are related to atmospheric moisture transport from the tropical south Atlantic Ocean. Increasing SSTs over the 50-year period are causing increased easterly transport across the basin. As moisture transport increases, the Amazon precipitation recycling decreases (without real time varying vegetation changes). In addition, precipitation recycling from a bulk diagnostic method is compared to the passive tracer method used in the analysis. While the mean values are different, the interannual variations are comparable between each method. The methods also exhibit similar relationships to the terms of the basin scale water budgets.
TECHNIQUES TO DETERMINE SPATIAL VARIATIONS IN HYDRAULIC CONDUCTIVITY OF SAND AND GRAVEL
Methods for determining small-scale variations in aquifer properties were investigated for a sand and gravel aquifer on Cape Cod, Massachusetts. easurements of aquifer properties, in particular hydraulic conductivity, are needed for further investigations into the effects of aqui...
Modeling of resistive sheets in finite element solutions
NASA Technical Reports Server (NTRS)
Jin, J. M.; Volakis, John L.; Yu, C. L.; Woo, Alex C.
1992-01-01
A formulation is presented for modeling a resistive card in the context of the finite element method. The appropriate variational function is derived and for variational purposes results are presented for the scattering by a metal-backed cavity loaded with a resistive card.
NASA Astrophysics Data System (ADS)
Diffey, Jenny; Berks, Michael; Hufton, Alan; Chung, Camilla; Verow, Rosanne; Morrison, Joanna; Wilson, Mary; Boggis, Caroline; Morris, Julie; Maxwell, Anthony; Astley, Susan
2010-04-01
Breast density is positively linked to the risk of developing breast cancer. We have developed a semi-automated, stepwedge-based method that has been applied to the mammograms of 1,289 women in the UK breast screening programme to measure breast density by volume and area. 116 images were analysed by three independent operators to assess inter-observer variability; 24 of these were analysed on 10 separate occasions by the same operator to determine intra-observer variability. 168 separate images were analysed using the stepwedge method and by two radiologists who independently estimated percentage breast density by area. There was little intra-observer variability in the stepwedge method (average coefficients of variation 3.49% - 5.73%). There were significant differences in the volumes of glandular tissue obtained by the three operators. This was attributed to variations in the operators' definition of the breast edge. For fatty and dense breasts, there was good correlation between breast density assessed by the stepwedge method and the radiologists. This was also observed between radiologists, despite significant inter-observer variation. Based on analysis of thresholds used in the stepwedge method, radiologists' definition of a dense pixel is one in which the percentage of glandular tissue is between 10 and 20% of the total thickness of tissue.
Halder, Indrani; Yang, Bao-Zhu; Kranzler, Henry R.; Stein, Murray B.; Shriver, Mark D.; Gelernter, Joel
2010-01-01
Variation in individual admixture proportions leads to heterogeneity within populations. Though novel methods and marker panels have been developed to quantify individual admixture, empirical data describing individual admixture distributions are limited. We investigated variation in individual admixture in four US populations [European American (EA), African American (AA) and Hispanics from Connecticut (EC) and California (WC)] assuming three-way intermixture among Europeans, Africans and Indigenous Americans. Admixture estimates were inferred using a panel of 36 microsatellites and 1 SNP, which have significant allele frequency differences between ancestral populations, and by using both a maximum likelihood (ML) based method and a Bayesian method implemented in the program STRUCTURE. Simulation studies showed that estimates obtained with this marker panel are within 96% of expected values. EAs had the lowest non-European admixture with both methods, but showed greater homogeneity with STRUCTURE than with ML. All other samples showed a high degree of variation in admixture estimates with both methods, were highly concordant and showed evidence of admixture stratification. With both methods, AA subjects had 16% European and <10% Indigenous American admixture on average. EC Hispanics had higher mean African admixture and the WC Hispanics higher mean Indigenous American admixture, possibly reflecting their different continental origins. PMID:19572378
NASA Astrophysics Data System (ADS)
Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing
2018-02-01
In general, the sound waves can cause the vibration of the objects that are encountered in the traveling path. If we make a laser beam illuminate the rough surface of an object, it will be scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. This method allows us to select the proper pixels that have large variances of the gray-value variations over time, from a small region of the speckle patterns. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, our method will significantly simplify the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified by applying a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and costs more than one-order less time to perform the same number of the speckle patterns. In our experiment, a sound signal of time duration 1.876 s is recovered from various objects with time consumption of 5.38 s only.
Valentine, Andrew J S; Talapin, Dmitri V; Mazziotti, David A
2017-04-27
Recent work found that soldering CdTe quantum dots together with a molecular CdTe polymer yielded field-effect transistors with much greater electron mobility than quantum dots alone. We present a computational study of the CdTe polymer using the active-space variational two-electron reduced density matrix (2-RDM) method. While analogous complete active-space self-consistent field (CASSCF) methods scale exponentially with the number of active orbitals, the active-space variational 2-RDM method exhibits polynomial scaling. A CASSCF calculation using the (48o,64e) active space studied in this paper requires 10 24 determinants and is therefore intractable, while the variational 2-RDM method in the same active space requires only 2.1 × 10 7 variables. Natural orbitals, natural-orbital occupations, charge gaps, and Mulliken charges are reported as a function of polymer length. The polymer, we find, is strongly correlated, despite possessing a simple sp 3 -hybridized bonding scheme. Calculations reveal the formation of a nearly saturated valence band as the polymer grows and a charge gap that decreases sharply with polymer length.
Traditional and modern plant breeding methods with examples in rice (Oryza sativa L.).
Breseghello, Flavio; Coelho, Alexandre Siqueira Guedes
2013-09-04
Plant breeding can be broadly defined as alterations caused in plants as a result of their use by humans, ranging from unintentional changes resulting from the advent of agriculture to the application of molecular tools for precision breeding. The vast diversity of breeding methods can be simplified into three categories: (i) plant breeding based on observed variation by selection of plants based on natural variants appearing in nature or within traditional varieties; (ii) plant breeding based on controlled mating by selection of plants presenting recombination of desirable genes from different parents; and (iii) plant breeding based on monitored recombination by selection of specific genes or marker profiles, using molecular tools for tracking within-genome variation. The continuous application of traditional breeding methods in a given species could lead to the narrowing of the gene pool from which cultivars are drawn, rendering crops vulnerable to biotic and abiotic stresses and hampering future progress. Several methods have been devised for introducing exotic variation into elite germplasm without undesirable effects. Cases in rice are given to illustrate the potential and limitations of different breeding approaches.
NASA Astrophysics Data System (ADS)
Leirião, Sílvia; He, Xin; Christiansen, Lars; Andersen, Ole B.; Bauer-Gottwein, Peter
2009-02-01
SummaryTotal water storage change in the subsurface is a key component of the global, regional and local water balances. It is partly responsible for temporal variations of the earth's gravity field in the micro-Gal (1 μGal = 10 -8 m s -2) range. Measurements of temporal gravity variations can thus be used to determine the water storage change in the hydrological system. A numerical method for the calculation of temporal gravity changes from the output of hydrological models is developed. Gravity changes due to incremental prismatic mass storage in the hydrological model cells are determined to give an accurate 3D gravity effect. The method is implemented in MATLAB and can be used jointly with any hydrological simulation tool. The method is composed of three components: the prism formula, the MacMillan formula and the point-mass approximation. With increasing normalized distance between the storage prism and the measurement location the algorithm switches first from the prism equation to the MacMillan formula and finally to the simple point-mass approximation. The method was used to calculate the gravity signal produced by an aquifer pump test. Results are in excellent agreement with the direct numerical integration of the Theis well solution and the semi-analytical results presented in [Damiata, B.N., and Lee, T.-C., 2006. Simulated gravitational response to hydraulic testing of unconfined aquifers. Journal of Hydrology 318, 348-359]. However, the presented method can be used to forward calculate hydrology-induced temporal variations in gravity from any hydrological model, provided earth curvature effects can be neglected. The method allows for the routine assimilation of ground-based gravity data into hydrological models.
Hoyle, J; Yentis, S M
2015-04-01
There are multiple methods of assessing the height of block before caesarean section under regional anaesthesia, and surveys of practice suggest considerable variation in practice. So far, little emphasis has been placed on the guidance to be gained from published research literature or textbooks. We therefore set out to investigate the methods of block assessment documented in published articles and textbooks over the past 30 years. We performed two searches of PubMed for randomised clinical trials with caesarean section and either spinal anaesthesia or epidural anaesthesia as major Medical Subject Headings. A total of 284 papers, from 1984 to 2013, were analysed for methods of assessment of sensory and motor block, and the height of block deemed adequate for surgery. We also examined 45 editions of seven anaesthetic textbooks spanning 1950-2014 for recommended methods of assessment and height of block required for caesarean section. Analysis of published papers demonstrated a wide variation in techniques, though there has been a trend towards the increased use of touch, and an increased use of a block height of T5 over the study period. Only 115/284 (40.5%) papers described the method of assessing motor block, with most of those that did (102/115; 88.7%) describing it as the 'Bromage scale', although only five of these (4.9%) matched the original description by Bromage. The required height of block recommended by textbooks has risen over the last 30 years to T4, although only four textbooks made any recommendation about the preferred sensory modality. The variation in methods suggested by surveys of practice is reflected in variation in published trials, and there is little consensus or guidance in anaesthetic textbooks. © 2014 The Association of Anaesthetists of Great Britain and Ireland.
The energetic cost of walking: a comparison of predictive methods.
Kramer, Patricia Ann; Sylvester, Adam D
2011-01-01
The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species.
A comparison of two closely-related approaches to aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Shubin, G. R.; Frank, P. D.
1991-01-01
Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
Stefano Filho, Carlos A; Attux, Romis; Castellano, Gabriela
2017-01-01
Hands motor imagery (MI) has been reported to alter synchronization patterns amongst neurons, yielding variations in the mu and beta bands' power spectral density (PSD) of the electroencephalography (EEG) signal. These alterations have been used in the field of brain-computer interfaces (BCI), in an attempt to assign distinct MI tasks to commands of such a system. Recent studies have highlighted that information may be missing if knowledge about brain functional connectivity is not considered. In this work, we modeled the brain as a graph in which each EEG electrode represents a node. Our goal was to understand if there exists any linear correlation between variations in the synchronization patterns-that is, variations in the PSD of mu and beta bands-induced by MI and alterations in the corresponding functional networks. Moreover, we (1) explored the feasibility of using functional connectivity parameters as features for a classifier in the context of an MI-BCI; (2) investigated three different types of feature selection (FS) techniques; and (3) compared our approach to a more traditional method using the signal PSD as classifier inputs. Ten healthy subjects participated in this study. We observed significant correlations ( p < 0.05) with values ranging from 0.4 to 0.9 between PSD variations and functional network alterations for some electrodes, prominently in the beta band. The PSD method performed better for data classification, with mean accuracies of (90 ± 8)% and (87 ± 7)% for the mu and beta band, respectively, versus (83 ± 8)% and (83 ± 7)% for the same bands for the graph method. Moreover, the number of features for the graph method was considerably larger. However, results for both methods were relatively close, and even overlapped when the uncertainties of the accuracy rates were considered. Further investigation regarding a careful exploration of other graph metrics may provide better alternatives.
A systematic evaluation of normalization methods in quantitative label-free proteomics.
Välikangas, Tommi; Suomi, Tomi; Elo, Laura L
2018-01-01
To date, mass spectrometry (MS) data remain inherently biased as a result of reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for the bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from the DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different strategies in normalization are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization performed also systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Vaillant de Guélis, Thibault; Chepfer, Hélène; Noel, Vincent; Guzman, Rodrigo; Winker, David M.; Plougonven, Riwal
2017-12-01
Measurements of the longwave cloud radiative effect (LWCRE) at the top of the atmosphere assess the contribution of clouds to the Earth warming but do not quantify the cloud property variations that are responsible for the LWCRE variations. The CALIPSO space lidar observes directly the detailed profile of cloud, cloud opacity, and cloud cover. Here we use these observations to quantify the influence of cloud properties on the variations of the LWCRE observed between 2008 and 2015 in the tropics and at global scale. At global scale, the method proposed here gives good results except over the Southern Ocean. We find that the global LWCRE variations observed over ocean are mostly due to variations in the opaque cloud properties (82%); transparent cloud columns contributed 18%. Variation of opaque cloud cover is the first contributor to the LWCRE evolution (58%); opaque cloud temperature is the second contributor (28%).
NASA Astrophysics Data System (ADS)
Stemkens, Bjorn; Glitzner, Markus; Kontaxis, Charis; de Senneville, Baudouin Denis; Prins, Fieke M.; Crijns, Sjoerd P. M.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.; Tijssen, Rob H. N.
2017-09-01
Stereotactic body radiation therapy (SBRT) has shown great promise in increasing local control rates for renal-cell carcinoma (RCC). Characterized by steep dose gradients and high fraction doses, these hypo-fractionated treatments are, however, prone to dosimetric errors as a result of variations in intra-fraction respiratory-induced motion, such as drifts and amplitude alterations. This may lead to significant variations in the deposited dose. This study aims to develop a method for calculating the accumulated dose for MRI-guided SBRT of RCC in the presence of intra-fraction respiratory variations and determine the effect of such variations on the deposited dose. For this, RCC SBRT treatments were simulated while the underlying anatomy was moving, based on motion information from three motion models with increasing complexity: (1) STATIC, in which static anatomy was assumed, (2) AVG-RESP, in which 4D-MRI phase-volumes were time-weighted, and (3) PCA, a method that generates 3D volumes with sufficient spatio-temporal resolution to capture respiration and intra-fraction variations. Five RCC patients and two volunteers were included and treatments delivery was simulated, using motion derived from subject-specific MR imaging. Motion was most accurately estimated using the PCA method with root-mean-squared errors of 2.7, 2.4, 1.0 mm for STATIC, AVG-RESP and PCA, respectively. The heterogeneous patient group demonstrated relatively large dosimetric differences between the STATIC and AVG-RESP, and the PCA reconstructed dose maps, with hotspots up to 40% of the D99 and an underdosed GTV in three out of the five patients. This shows the potential importance of including intra-fraction motion variations in dose calculations.
Sensitive detection of KIT D816V in patients with mastocytosis.
Tan, Angela; Westerman, David; McArthur, Grant A; Lynch, Kevin; Waring, Paul; Dobrovic, Alexander
2006-12-01
The 2447 A > T pathogenic variation at codon 816 of exon 17 (D816V) in the KIT gene, occurring in systemic mastocytosis (SM), leads to constitutive activation of tyrosine kinase activity and confers resistance to the tyrosine kinase inhibitor imatinib mesylate. Thus detection of this variation in SM patients is important for determining treatment strategy, but because the population of malignant cells carrying this variation is often small relative to the normal cell population, standard molecular detection methods can be unsuccessful. We developed 2 methods for detection of KIT D816V in SM patients. The first uses enriched sequencing of mutant alleles (ESMA) after BsmAI restriction enzyme digestion, and the second uses an allele-specific competitive blocker PCR (ACB-PCR) assay. We used these methods to assess 26 patients undergoing evaluation for SM, 13 of whom had SM meeting WHO classification criteria (before variation testing), and we compared the results with those obtained by direct sequencing. The sensitivities of the ESMA and the ACB-PCR assays were 1% and 0.1%, respectively. According to the ACB-PCR assay results, 65% (17/26) of patients were positive for D816V. Of the 17 positive cases, only 23.5% (4/17) were detected by direct sequencing. ESMA detected 2 additional exon 17 pathogenic variations, D816Y and D816N, but detected only 12 (70.5%) of the 17 D816V-positive cases. Overall, 100% (15/15) of the WHO-classified SM cases were codon 816 pathogenic variation positive. These findings demonstrate that the ACB-PCR assay combined with ESMA is a rapid and highly sensitive approach for detection of KIT D816V in SM patients.
Determination of wave-function functionals: The constrained-search variational method
NASA Astrophysics Data System (ADS)
Pan, Xiao-Yin; Sahni, Viraht; Massa, Lou
2005-09-01
In a recent paper [Phys. Rev. Lett. 93, 130401 (2004)], we proposed the idea of expanding the space of variations in variational calculations of the energy by considering the approximate wave function ψ to be a functional of functions χ , ψ=ψ[χ] , rather than a function. A constrained search is first performed over all functions χ such that the wave-function functional ψ[χ] satisfies a physical constraint or leads to the known value of an observable. A rigorous upper bound to the energy is then obtained via the variational principle. In this paper we generalize the constrained-search variational method, applicable to both ground and excited states, to the determination of arbitrary Hermitian single-particle operators as applied to two-electron atomic and ionic systems. We construct analytical three-parameter ground-state functionals for the H- ion and the He atom through the constraint of normalization. We present the results for the total energy E , the expectations of the single-particle operators W=∑irin , n=-2,-1,1,2 , W=∑iδ(ri) , and W=∑iδ(ri-r) , the structure of the nonlocal Coulomb hole charge ρc(rr') , and the expectations of the two particle operators u2,u,1/u,1/u2 , where u=∣ri-rj∣ . The results for all the expectation values are remarkably accurate when compared with the 1078-parameter wave function of Pekeris, and other wave functions that are not functionals. We conclude by describing our current work on how the constrained-search variational method in conjunction with quantal density-functional theory is being applied to the many-electron case.
Mining geographic variations of Plasmodium vivax for active surveillance: a case study in China.
Shi, Benyun; Tan, Qi; Zhou, Xiao-Nong; Liu, Jiming
2015-05-27
Geographic variations of an infectious disease characterize the spatial differentiation of disease incidences caused by various impact factors, such as environmental, demographic, and socioeconomic factors. Some factors may directly determine the force of infection of the disease (namely, explicit factors), while many other factors may indirectly affect the number of disease incidences via certain unmeasurable processes (namely, implicit factors). In this study, the impact of heterogeneous factors on geographic variations of Plasmodium vivax incidences is systematically investigate in Tengchong, Yunnan province, China. A space-time model that resembles a P. vivax transmission model and a hidden time-dependent process, is presented by taking into consideration both explicit and implicit factors. Specifically, the transmission model is built upon relevant demographic, environmental, and biophysical factors to describe the local infections of P. vivax. While the hidden time-dependent process is assessed by several socioeconomic factors to account for the imported cases of P. vivax. To quantitatively assess the impact of heterogeneous factors on geographic variations of P. vivax infections, a Markov chain Monte Carlo (MCMC) simulation method is developed to estimate the model parameters by fitting the space-time model to the reported spatial-temporal disease incidences. Since there is no ground-truth information available, the performance of the MCMC method is first evaluated against a synthetic dataset. The results show that the model parameters can be well estimated using the proposed MCMC method. Then, the proposed model is applied to investigate the geographic variations of P. vivax incidences among all 18 towns in Tengchong, Yunnan province, China. Based on the geographic variations, the 18 towns can be further classify into five groups with similar socioeconomic causality for P. vivax incidences. Although this study focuses mainly on the transmission of P. vivax, the proposed space-time model is general and can readily be extended to investigate geographic variations of other diseases. Practically, such a computational model will offer new insights into active surveillance and strategic planning for disease surveillance and control.
Coherent Anomaly Method Calculation on the Cluster Variation Method. II.
NASA Astrophysics Data System (ADS)
Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya
The critical exponents of the bond percolation model are calculated in the D(= 2,3,…)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of critical exponents α, β, γ and ν in comparison with ones estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of ghost field without recourse to the s→1 limit of the s-state Potts model.
NASA Astrophysics Data System (ADS)
Akbar, M. S.; Setiawan; Suhartono; Ruchjana, B. N.; Riyadi, M. A. A.
2018-03-01
Ordinary Least Squares (OLS) is general method to estimates Generalized Space Time Autoregressive (GSTAR) parameters. But in some cases, the residuals of GSTAR are correlated between location. If OLS is applied to this case, then the estimators are inefficient. Generalized Least Squares (GLS) is a method used in Seemingly Unrelated Regression (SUR) model. This method estimated parameters of some models with residuals between equations are correlated. Simulation study shows that GSTAR with GLS method for estimating parameters (GSTAR-SUR) is more efficient than GSTAR-OLS method. The purpose of this research is to apply GSTAR-SUR with calendar variation and intervention as exogenous variable (GSTARX-SUR) for forecast outflow of currency in Java, Indonesia. As a result, GSTARX-SUR provides better performance than GSTARX-OLS.
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
Predictive modeling and reducing cyclic variability in autoignition engines
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
Accelerated Simulation of Kinetic Transport Using Variational Principles and Sparsity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caflisch, Russel
This project is centered on the development and application of techniques of sparsity and compressed sensing for variational principles, PDEs and physics problems, in particular for kinetic transport. This included derivation of sparse modes for elliptic and parabolic problems coming from variational principles. The research results of this project are on methods for sparsity in differential equations and their applications and on application of sparsity ideas to kinetic transport of plasmas.
Characterisation of longitudinal variation in photonic crystal fibre
NASA Astrophysics Data System (ADS)
Francis-Jones, Robert J. A.; Mosley, Peter J.
2016-10-01
We present a method by which the degree of longitudinal variation in photonic crystal fibre (PCF) may be characterised through seeded four-wave mixing (FWM). Using an iterative numerical reconstruction, we created a model PCF that displays similar FWM phasematching properties across all measured length scales. Our results demonstrate that the structure of our PCF varies by less than 1% and that the characteristic length of the variations is approximately 15 cm.
Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C
2011-12-01
Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjustments for covariates. Based on a Monte Carlo simulation study three estimators, one based on the geometrical model, and two based on log-linear Poisson regression models, were evaluated in regards to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough[13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models had lower bias and SD for data simulated to deviate from the corresponding model assumptions than the geometrical model. This simulation study encourages the use of Poisson regression models in estimating the peak-to-trough ratio of seasonal variation as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Meiotic gene-conversion rate and tract length variation in the human genome.
Padhukasahasram, Badri; Rannala, Bruce
2013-02-27
Meiotic recombination occurs in the form of two different mechanisms called crossing-over and gene-conversion and both processes have an important role in shaping genetic variation in populations. Although variation in crossing-over rates has been studied extensively using sperm-typing experiments, pedigree studies and population genetic approaches, our knowledge of variation in gene-conversion parameters (ie, rates and mean tract lengths) remains far from complete. To explore variability in population gene-conversion rates and its relationship to crossing-over rate variation patterns, we have developed and validated using coalescent simulations a comprehensive Bayesian full-likelihood method that can jointly infer crossing-over and gene-conversion rates as well as tract lengths from population genomic data under general variable rate models with recombination hotspots. Here, we apply this new method to SNP data from multiple human populations and attempt to characterize for the first time the fine-scale variation in gene-conversion parameters along the human genome. We find that the estimated ratio of gene-conversion to crossing-over rates varies considerably across genomic regions as well as between populations. However, there is a great degree of uncertainty associated with such estimates. We also find substantial evidence for variation in the mean conversion tract length. The estimated tract lengths did not show any negative relationship with the local heterozygosity levels in our analysis.European Journal of Human Genetics advance online publication, 27 February 2013; doi:10.1038/ejhg.2013.30.
NASA Astrophysics Data System (ADS)
Batterman, Stuart; Cook, Richard; Justin, Thomas
2015-04-01
Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.
Batterman, Stuart; Cook, Richard; Justin, Thomas
2015-01-01
Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates. PMID:25844042
A screening tool for delineating subregions of steady recharge within groundwater models
Dickinson, Jesse; Ferré, T.P.A.; Bakker, Mark; Crompton, Becky
2014-01-01
We have developed a screening method for simplifying groundwater models by delineating areas within the domain that can be represented using steady-state groundwater recharge. The screening method is based on an analytical solution for the damping of sinusoidal infiltration variations in homogeneous soils in the vadose zone. The damping depth is defined as the depth at which the flux variation damps to 5% of the variation at the land surface. Groundwater recharge may be considered steady where the damping depth is above the depth of the water table. The analytical solution approximates the vadose zone diffusivity as constant, and we evaluated when this approximation is reasonable. We evaluated the analytical solution through comparison of the damping depth computed by the analytic solution with the damping depth simulated by a numerical model that allows variable diffusivity. This comparison showed that the screening method conservatively identifies areas of steady recharge and is more accurate when water content and diffusivity are nearly constant. Nomograms of the damping factor (the ratio of the flux amplitude at any depth to the amplitude at the land surface) and the damping depth were constructed for clay and sand for periodic variations between 1 and 365 d and flux means and amplitudes from nearly 0 to 1 × 10−3 m d−1. We applied the screening tool to Central Valley, California, to identify areas of steady recharge. A MATLAB script was developed to compute the damping factor for any soil and any sinusoidal flux variation.
Tarafder, Abhijit; Iraneta, Pamela; Guiochon, Georges; Kaczmarski, Krzysztof; Poe, Donald P
2014-10-31
We propose to use constant enthalpy or isenthalpic diagrams as a tool to estimate the extent of the temperature variations caused by the mobile phase pressure drop along a chromatographic column, e.g. of its cooling in supercritical fluid and its heating in ultra-performance liquid chromatography. Temperature strongly affects chromatographic phenomena. Any of its variations inside the column, whether intended or not, can lead to significant changes in separation performance. Although instruments use column ovens in order to keep constant the column temperature, operating conditions leading to a high pressure drop may cause significant variations of the column temperature, both in the axial and the radial directions, from the set value. Different ways of measuring these temperature variations are available but they are too inconvenient to be employed in many practical situations. In contrast, the thermodynamic plot-based method that we describe here can easily be used with only a ruler and a pencil. They should be helpful in developing methods or in analyzing results in analytical laboratories. Although the most effective application area for this approach should be SFC (supercritical fluid chromatography), it can be applied to any chromatographic conditions in which temperature variations take place along the column due to the pressure drop, e.g. in ultra-high pressure liquid chromatography (UHPLC). The method proposed here is applicable to isocractic conditions only. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Fillion, Anthony; Bocquet, Marc; Gratton, Serge
2018-04-01
The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
Terahertz Mapping of Microstructure and Thickness Variations
NASA Technical Reports Server (NTRS)
Roth, Donald J.; Seebo, Jeffrey P.; Winfree, William P.
2010-01-01
A noncontact method has been devised for mapping or imaging spatial variations in the thickness and microstructure of a layer of a dielectric material. The method involves (1) placement of the dielectric material on a metal substrate, (2) through-the-thickness pulse-echo measurements by use of electromagnetic waves in the terahertz frequency range with a raster scan in a plane parallel to the substrate surface that do not require coupling of any kind, and (3) appropriate processing of the digitized measurement data.
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1991-01-01
New methods were developed for efficient aeroservoelastic analysis and optimization. The main target was to develop a method for investigating large structural variations using a single set of modal coordinates. This task was accomplished by basing the structural modal coordinates on normal modes calculated with a set of fictitious masses loading the locations of anticipated structural changes. The following subject areas are covered: (1) modal coordinates for aeroelastic analysis with large local structural variations; and (2) time simulation of flutter with large stiffness changes.
Most genetic risk for autism resides with common variation
Gaugler, Trent; Klei, Lambertus; Sanders, Stephan J.; Bodea, Corneliu A.; Goldberg, Arthur P.; Lee, Ann B.; Mahajan, Milind; Manaa, Dina; Pawitan, Yudi; Reichert, Jennifer; Ripke, Stephan; Sandin, Sven; Sklar, Pamela; Svantesson, Oscar; Reichenberg, Abraham; Hultman, Christina M.; Devlin, Bernie
2014-01-01
A key component of genetic architecture is the allelic spectrum influencing trait variability. For autism spectrum disorder (henceforth autism) the nature of its allelic spectrum is uncertain. Individual risk genes have been identified from rare variation, especially de novo mutations1–8. From this evidence one might conclude that rare variation dominates its allelic spectrum, yet recent studies show that common variation, individually of small effect, has substantial impact en masse9,10. At issue is how much of an impact relative to rare variation. Using a unique epidemiological sample from Sweden, novel methods that distinguish total narrow-sense heritability from that due to common variation, and by synthesizing results from other studies, we reach several conclusions about autism’s genetic architecture: its narrow-sense heritability is ≈54% and most traces to common variation; rare de novo mutations contribute substantially to individuals’ liability; still their contribution to variance in liability, 2.6%, is modest compared to heritable variation. PMID:25038753
Blood lipid measurements. Variations and practical utility.
Cooper, G R; Myers, G L; Smith, S J; Schlant, R C
1992-03-25
To describe the magnitude and impact of the major biological and analytical sources of variation in serum lipid and lipoprotein levels on risk of coronary heart disease; to present a way to qualitatively estimate the total intraindividual variation; and to demonstrate how to determine the number of specimens required to estimate, with 95% confidence, the "true" underlying total cholesterol value in the serum of a patient. Representative references on each source of variation were selected from more than 300 reviewed publications, most published within the past 5 years, to document current findings and concepts. Most articles reviewed were in English. Studies on biological sources of variation were selected using the following criteria: representative of published findings, clear statement of either significant or insignificant results, and acquisition of clinical and laboratory data under standardized conditions. Representative results for special populations such as women and children are reported when results differ from those of adult men. References were selected based on acceptable experimental design and use of standardized laboratory lipid measurements. The lipid levels considered representative for a selected source of variation arose from quantitative measurements by a suitably standardized laboratory. Statistical analysis of data was examined to assure reliability. The proposed method of estimating the biological coefficient of variation must be considered to give qualitative results, because only two or three serial specimens are collected in most cases for the estimation. Concern has arisen about the magnitude, impact, and interpretation of preanalytical as well as analytical sources of variation on reported results of lipid measurements of an individual. Preanalytical sources of variation from behavioral, clinical, and sampling sources constitute about 60% of the total variation in a reported lipid measurement of an individual. A technique is presented to allow physicians to qualitatively estimate the intraindividual biological variation of a patient from the results of two or more specimens reported from a standardized laboratory and to determine whether additional specimens are needed to meet the National Cholesterol Education Program recommendation that the intraindividual serum total cholesterol coefficient of variation not exceed 5.0. A National Reference Method Network has been established to help solve analytical problems.
Learn Locally, Act Globally: Learning Language from Variation Set Cues
Onnis, Luca; Waterfall, Heidi R.; Edelman, Shimon
2011-01-01
Variation set structure — partial overlap of successive utterances in child-directed speech — has been shown to correlate with progress in children’s acquisition of syntax. We demonstrate the benefits of variation set structure directly: in miniature artificial languages, arranging a certain proportion of utterances in a training corpus in variation sets facilitated word and phrase constituent learning in adults. Our findings have implications for understanding the mechanisms of L1 acquisition by children, and for the development of more efficient algorithms for automatic language acquisition, as well as better methods for L2 instruction. PMID:19019350
Spectra of variations and anisotropy of cosmic rays during GLE of May 17, 2012
NASA Astrophysics Data System (ADS)
Kravtsova, Marina; Sdobnov, Valery
Using ground-based observations of cosmic rays (CRs) from the World Network of Neutron Monitor Stations and a method of spectrographic global survey, we have examined variations in the rigidity spectrum and anisotropy of CRs during the ground level enhancement (GLE) of May 17, 2012. We showed the rigidity spectrum of amplitudes of CR variations, the behavior of pitch-angle anisotropy amplitudes, and the relative variations in intensity of CRs with rigidities of 2, 4, and 10 GV in the solar-ecliptic geocentric coordinate system in some periods of the event under study.
Parity-expanded variational analysis for nonzero momentum
NASA Astrophysics Data System (ADS)
Stokes, Finn M.; Kamleh, Waseem; Leinweber, Derek B.; Mahbub, M. Selim; Menadue, Benjamin J.; Owen, Benjamin J.
2015-12-01
In recent years, the use of variational analysis techniques in lattice QCD has been demonstrated to be successful in the investigation of the rest-mass spectrum of many hadrons. However, due to parity mixing, more care must be taken for investigations of boosted states to ensure that the projected correlation functions provided by the variational analysis correspond to the same states at zero momentum. In this paper we present the parity-expanded variational analysis (PEVA) technique, a novel method for ensuring the successful and consistent isolation of boosted baryons through a parity expansion of the operator basis used to construct the correlation matrix.
2012-01-01
Background From the viewpoint of human physiological adaptability, we previously investigated seasonal variation in the amount of unabsorbed dietary carbohydrates from the intestine after breakfast in Japanese, Polish and Thai participants. In this investigation we found that there were significant seasonal variations in the amount of unabsorbed dietary carbohydrates in Japanese and Polish participants, while we could not find significant seasonal variation in Thai participants. These facts prompted us to examine seasonal variations in the respiratory quotient after an overnight fast (an indicator of the ratio of carbohydrate and fat oxidized after the last meal) with female university students living in Osaka (Japan), Poznan (Poland) and Chiang Mai (Thailand). Methods We enrolled 30, 33 and 32 paid participants in Japan, Poland and Thailand, respectively, and measurements were taken over the course of one full year. Fasting respiratory quotient was measured with the participants in their postabsorptive state (after 12 hours or more fasting before respiratory quotient measurement). Respiratory quotient measurements were carried out by means of indirect calorimetry using the mixing chamber method. The percent body fat was measured using an electric bioelectrical impedance analysis scale. Food intake of the participants in Osaka and Poznan were carried out by the Food Frequency Questionnaire method. Results There were different seasonal variations in the fasting respiratory quotient values in the three different populations; with a significant seasonal variation in the fasting respiratory quotient values in Japanese participants, while those in Polish and Thai participants were non-significant. We found that there were significant seasonal changes in the percent body fat in the three populations but we could not find any significant correlation between the fasting respiratory quotient values and the percent body fat. Conclusions There were different seasonal variations in the fasting respiratory quotient values in the three different populations. There were significant seasonal changes in the percent body fat in the three populations but no significant correlation between the fasting respiratory quotient values and the percent body fat. PMID:22738323
A GAS PRESSURE DISCHARGE TUBE FOR SEVERAL DIFFERENT LIQUID GAS CONTAINERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thunborg, S.
1963-04-01
A discharge tube was developed to fit several different liquefied-gas containers. A rubber stopper was used to adjust to variations between neck openings. A method of compensating for variations in the depths of the containers was also incorporated. (M.C.G.)
1999-08-01
information may be crucial for the early identification of any range of potential health Variation in Quarters Rates 28 threats from food contamination to... GMO Physician 309.89 .0000 Over Family Practice 682.06 .0000 Over Aeromed Physician
Genomic Copy Number Variation in Disorders of Cognitive Development
ERIC Educational Resources Information Center
Morrow, Eric M.
2010-01-01
Objective: To highlight recent discoveries in the area of genomic copy number variation in neuropsychiatric disorders including intellectual disability, autism, and schizophrenia. To emphasize new principles emerging from this area, involving the genetic architecture of disease, pathophysiology, and diagnosis. Method: Review of studies published…
Independent origins of diploidy in the entomopathogen Metarhizium
USDA-ARS?s Scientific Manuscript database
Understanding of ploidal variation in fungi lags behind that for plants and animals because cytogenetic tools are often unable to accurately resolve and size the typically small genomes of fungi by traditional optical methods. Variation in ploidal status is frequently associated with changes in phen...
Ground-state calculations of confined hydrogen molecule H2 using variational Monte Carlo method
NASA Astrophysics Data System (ADS)
Doma, S. B.; El-Gammal, F. N.; Amer, A. A.
2018-07-01
The variational Monte Carlo method is used to evaluate the ground-state energy of a confined hydrogen molecule H2. Accordingly, we considered the.me case of hydrogen molecule confined by a hard prolate spheroidal cavity when the nuclear positions are clamped at the foci (on-focus case). Also, the case of off-focus nuclei in which the two nuclei are not clamped to the foci is studied. This case provides flexibility for the treatment of the molecular properties by selecting an arbitrary size and shape for the confining spheroidal box. A simple chemical analysis concerning the catalytic role of enzyme is investigated. An accurate trial wave function depending on many variational parameters is used for this purpose. The obtained results for the case of clamped foci exhibit good accuracy compared with the high precision variational data presented previously. In the case of off-focus nuclei, an improvement is obtained with respect to the most recent uncorrelated results existing in the literature.
A Blocked Linear Method for Optimizing Large Parameter Sets in Variational Monte Carlo
Zhao, Luning; Neuscamman, Eric
2017-05-17
We present a modification to variational Monte Carlo’s linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently-introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involvingmore » both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott-insulators’ optical band gaps.« less
Technical variations in low-input RNA-seq methodologies.
Bhargava, Vipul; Head, Steven R; Ordoukhanian, Phillip; Mercola, Mark; Subramaniam, Shankar
2014-01-14
Recent advances in RNA-seq methodologies from limiting amounts of mRNA have facilitated the characterization of rare cell-types in various biological systems. So far, however, technical variations in these methods have not been adequately characterized, vis-à-vis sensitivity, starting with reduced levels of mRNA. Here, we generated sequencing libraries from limiting amounts of mRNA using three amplification-based methods, viz. Smart-seq, DP-seq and CEL-seq, and demonstrated significant technical variations in these libraries. Reduction in mRNA levels led to inefficient amplification of the majority of low to moderately expressed transcripts. Furthermore, noise in primer hybridization and/or enzyme incorporation was magnified during the amplification step resulting in significant distortions in fold changes of the transcripts. Consequently, the majority of the differentially expressed transcripts identified were either high-expressed and/or exhibited high fold changes. High technical variations ultimately masked subtle biological differences mandating the development of improved amplification-based strategies for quantitative transcriptomics from limiting amounts of mRNA.
Reschovsky, James D; Hadley, Jack; Romano, Patrick S
2013-10-01
Control for area differences in population health (casemix adjustment) is necessary to measure geographic variations in medical spending. Studies use various casemix adjustment methods, resulting in very different geographic variation estimates. We study casemix adjustment methodological issues and evaluate alternative approaches using claims from 1.6 million Medicare beneficiaries in 60 representative communities. Two key casemix adjustment methods-controlling for patient conditions obtained from diagnoses on claims and expenditures of those at the end of life-were evaluated. We failed to find evidence of bias in the former approach attributable to area differences in physician diagnostic patterns, as others have found, and found that the assumption underpinning the latter approach-that persons close to death are equally sick across areas-cannot be supported. Diagnosis-based approaches are more appropriate when current rather than prior year diagnoses are used. Population health likely explains more than 75% to 85% of cost variations across fixed sets of areas.
Variationally Optimized Free-Energy Flooding for Rate Calculation.
McCarty, James; Valsson, Omar; Tiwary, Pratyush; Parrinello, Michele
2015-08-14
We propose a new method to obtain kinetic properties of infrequent events from molecular dynamics simulation. The procedure employs a recently introduced variational approach [Valsson and Parrinello, Phys. Rev. Lett. 113, 090601 (2014)] to construct a bias potential as a function of several collective variables that is designed to flood the associated free energy surface up to a predefined level. The resulting bias potential effectively accelerates transitions between metastable free energy minima while ensuring bias-free transition states, thus allowing accurate kinetic rates to be obtained. We test the method on a few illustrative systems for which we obtain an order of magnitude improvement in efficiency relative to previous approaches and several orders of magnitude relative to unbiased molecular dynamics. We expect an even larger improvement in more complex systems. This and the ability of the variational approach to deal efficiently with a large number of collective variables will greatly enhance the scope of these calculations. This work is a vindication of the potential that the variational principle has if applied in innovative ways.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zilberman, E.; Wachs, D.
Geomorphological and geophysical methods combined with borehole information were employed to search for possible subrecent small-scale vertical movement along the anticlinal fold belt of the central Negev, Israel. Such tectonic deformation might indicate displacement on the buried reverse faults underneath the anticlines. Variations in the thickness of the alluvial fill in the study area, which are in accordance with the fold structures, could be an indication of recent folding activity along the anticlinal system. In order to detect these thickness variations in the alluvial fill, seismic refraction and electrical resistivity measurements were carries out along the valley of Nahal Besor,more » which crosses the anticlinal belt. The thickness variations of the alluvial fill along the valley were not found to indicate any significant tectonic movement along the anticlines during the Pleistocene. The thickest alluvium was found overlying a karst bedrock, hence karst relief is suggested to be responsible for these variations.« less
A frequency control method for regulating wireless power to implantable devices.
Ping Si; Hu, A P; Malpas, S; Budgett, D
2008-03-01
This paper presents a method to regulate the power transferred over a wireless link by adjusting the resonant operating frequency of the primary converter. A significant advantage of this method is that effective power regulation is maintained under variations in load, coupling and circuit parameters. This is particularly important when the wireless supply is used to power implanted medical devices where substantial coupling variations between internal and external systems is expected. The operating frequency is changed dynamically by altering the effective tuning capacitance through soft switched phase control. A thorough analysis of the proposed system has been undertaken, and experimental results verify its functionality.
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
Characterization of dielectric materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Danny J.; Babinec, Susan; Hagans, Patrick L.
2017-06-27
A system and a method for characterizing a dielectric material are provided. The system and method generally include applying an excitation signal to electrodes on opposing sides of the dielectric material to evaluate a property of the dielectric material. The method can further include measuring the capacitive impedance across the dielectric material, and determining a variation in the capacitive impedance with respect to either or both of a time domain and a frequency domain. The measured property can include pore size and surface imperfections. The method can still further include modifying a processing parameter as the dielectric material is formedmore » in response to the detected variations in the capacitive impedance, which can correspond to a non-uniformity in the dielectric material.« less
Born iterative reconstruction using perturbed-phase field estimates.
Astheimer, Jeffrey P; Waag, Robert C
2008-10-01
A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements.
NASA Astrophysics Data System (ADS)
Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.
2015-07-01
Existing spatiotemporal database supports spatiotemporal aggregation query over massive moving objects datasets. Due to the large amounts of data and single-thread processing method, the query speed cannot meet the application requirements. On the other hand, the query efficiency is more sensitive to spatial variation then temporal variation. In this paper, we proposed a spatiotemporal aggregation query method using multi-thread parallel technique based on regional divison and implemented it on the server. Concretely, we divided the spatiotemporal domain into several spatiotemporal cubes, computed spatiotemporal aggregation on all cubes using the technique of multi-thread parallel processing, and then integrated the query results. By testing and analyzing on the real datasets, this method has improved the query speed significantly.
Does geography or ecology best explain 'cultural' variation among chimpanzee communities?
Kamilar, Jason M; Marshack, Joshua L
2012-02-01
Much attention has been paid to geographic variation in chimpanzee behavior, but few studies have applied quantitative techniques to explain this variation. Here, we apply methods typically utilized in macroecology to explain variation in the putative cultural traits of chimpanzees. We analyzed published data containing 39 behavioral traits from nine chimpanzee communities. We used a canonical correspondence analysis to examine the relative importance of environmental characteristics and geography, which may be a proxy for inter-community gene flow and/or social transmission, for explaining geographic variation in chimpanzee behavior. We found that geography, and longitude in particular, was the best predictor of behavioral variation. Chimpanzee communities in close longitudinal proximity to each other exhibit similar behavioral repertoires, independent of local ecological factors. No ecological variables were significantly related to behavioral variation. These results support the idea that inter-community dispersal patterns have played a major role in structuring behavioral variation. We cannot be certain whether behavioral variation has a genetic basis, is the result of innovation and diffusion, or a combination of the two. Copyright © 2011 Elsevier Ltd. All rights reserved.
Lateral temperature variations at the core-mantle boundary deduced from the magnetic field
NASA Technical Reports Server (NTRS)
Bloxham, Jeremy; Jackson, Andrew
1990-01-01
Recent studies of the secular variation of the earth's magnetic field over periods of a few centuries have suggested that the pattern of fluid motion near the surface of earth's outer core may be strongly influenced by lateral temperature variations in the lowermost mantle. This paper introduces a self-consistent method for finding the temperature variations near the core surface by assuming that the dynamical balance there is geostrophic and that lateral density variations there are thermal in origin. As expected, the lateral temperature variations are very small. Some agreement is found between this pattern and the pattern of topography of the core-mantle boundary, but this does not conclusively answer to what extent core surface motions are controlled by the mantle, rather than being determined by processes in the core.
Improved flaw detection and characterization with difference thermography
NASA Astrophysics Data System (ADS)
Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.
2011-05-01
Flaw detection and characterization with thermographic techniques in graphite polymer composites is often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, variations in fiber volume content and surface polymer thickness result in variations in the thermal response that in general cause significant variations in the initial thermal response. These variations result in a noise floor that increases the difficulty of detecting and characterizing deeper flaws. The paper investigates comparing thermographic responses taken before and after a change in state in a composite to improve the detection of subsurface flaws. A method is presented for registration of the responses before finding the difference. A significant improvement in the detectability is achieved by comparing the differences in response. Examples of changes in state due to application of a load and impact are presented.
Improved Flaw Detection and Characterization with Difference Thermography
NASA Technical Reports Server (NTRS)
Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.
2011-01-01
Flaw detection and characterization with thermographic techniques in graphite polymer composites is often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, variations in fiber volume content and surface polymer thickness result in variations in the thermal response that in general cause significant variations in the initial thermal response. These variations result in a noise floor that increases the difficulty of detecting and characterizing deeper flaws. The paper investigates comparing thermographic responses taken before and after a change in state in a composite to improve the detection of subsurface flaws. A method is presented for registration of the responses before finding the difference. A significant improvement in the detectability is achieved by comparing the differences in response. Examples of changes in state due to application of a load and impact are presented.
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing
Yang, Changju; Kim, Hyongsuk
2016-01-01
A linearized programming method of memristor-based neural weights is proposed. Memristor is known as an ideal element to implement a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearization of memristance variation about time is very important for the easiness of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to complimentary actions of two memristors. For programming a memristor, additional memristor with opposite polarity is employed. The linearization effect of weight programming of an anti-serial architecture is investigated and memristor bridge synapse which is built with two sets of anti-serial memristor architecture is taken as an application example of the proposed method. Simulations are performed with memristors of both linear drift model and nonlinear model. PMID:27548186
Analysis of the effect of waste's particle size variations on biodrying method
NASA Astrophysics Data System (ADS)
Kristanto, Gabriel Andari; Zikrina, Masayu Nadiya
2017-11-01
The use of municipal solid waste as energy source can be a solution for Indonesia's increasing energy demand. However, its high moisture content limits the use of solid waste as energy. Biodrying is a method of lowering wastes' moisture content using biological process. This study investigated the effect of wastes' particle size variations on biodrying method. The experiment was performed on 3 lab-scale reactors with the same specifications. Organic wastes with the composition of 50% vegetable wastes and 50% garden wastes were used as substrates. The feedstock was manually shredded into 3 size variations, which were 10 - 40 mm, 50 - 80 mm, and 100 - 300 mm. The experiment lasted for 21 days. After 21 days, it was shown that the waste with the size of 100 - 300 mm has the lowest moisture content, which is 50.99%, and the volatile solids content is still 74.3% TS. This may be caused by the higher free air space of the reactor with the bigger sized substrate.
Carcass Functions in Variational Calculations for Few-Body Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donchev, A.G.; Kalachev, S.A.; Kolesnikov, N.N.
For variational calculations of molecular and nuclear systems involving a few particles, it is proposed to use carcass basis functions that generalize exponential and Gaussian trial functions. It is shown that the matrix elements of the Hamiltonian are expressed in a closed form for a Coulomb potential, as well as for other popular particle-interaction potentials. The use of such carcass functions in two-center Coulomb problems reduces, in relation to other methods, the number of terms in a variational expansion by a few orders of magnitude at a commensurate or even higher accuracy. The efficiency of the method is illustrated bymore » calculations of the three-particle Coulomb systems {mu}{mu}e, ppe, dde, and tte and the four-particle molecular systems H{sub 2} and HeH{sup +} of various isotopic composition. By considering the example of the {sub {lambda}}{sup 9}Be hypernucleus, it is shown that the proposed method can be used in calculating nuclear systems as well.« less
Schmieder, Daniela A.; Benítez, Hugo A.; Borissov, Ivailo M.; Fruciano, Carmelo
2015-01-01
External morphology is commonly used to identify bats as well as to investigate flight and foraging behavior, typically relying on simple length and area measures or ratios. However, geometric morphometrics is increasingly used in the biological sciences to analyse variation in shape and discriminate among species and populations. Here we compare the ability of traditional versus geometric morphometric methods in discriminating between closely related bat species – in this case European horseshoe bats (Rhinolophidae, Chiroptera) – based on morphology of the wing, body and tail. In addition to comparing morphometric methods, we used geometric morphometrics to detect interspecies differences as shape changes. Geometric morphometrics yielded improved species discrimination relative to traditional methods. The predicted shape for the variation along the between group principal components revealed that the largest differences between species lay in the extent to which the wing reaches in the direction of the head. This strong trend in interspecific shape variation is associated with size, which we interpret as an evolutionary allometry pattern. PMID:25965335
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.
Yang, Changju; Kim, Hyongsuk
2016-08-19
A linearized programming method of memristor-based neural weights is proposed. Memristor is known as an ideal element to implement a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearization of memristance variation about time is very important for the easiness of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to complimentary actions of two memristors. For programming a memristor, additional memristor with opposite polarity is employed. The linearization effect of weight programming of an anti-serial architecture is investigated and memristor bridge synapse which is built with two sets of anti-serial memristor architecture is taken as an application example of the proposed method. Simulations are performed with memristors of both linear drift model and nonlinear model.
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.
2018-06-01
We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.
NASA Astrophysics Data System (ADS)
Jung, I. I.; Lee, J. H.; Lee, C. S.; Choi, Y.-W.
2011-02-01
We propose a novel circuit to be applied to the front-end integrated circuits of gamma-ray spectroscopy systems. Our circuit is designed as a type of current conveyor (ICON) employing a constant- gm (transconductance) method which can significantly improve the linearity in the amplified signals by using a large time constant and the time-invariant characteristics of an amplifier. The constant- gm method is obtained by a feedback control which keeps the transconductance of the input transistor constant. To verify the performance of the propose circuit, the time constant variations for the channel resistances are simulated with the TSMC 0.18 μm transistor parameters using HSPICE, and then compared with those of a conventional ICON. As a result, the proposed ICON shows only 0.02% output linearity variation and 0.19% time constant variation for the input amplitude up to 100 mV. These are significantly small values compared to a conventional ICON's 1.39% and 19.43%, respectively, for the same conditions.
Cornette, Raphaël; Baylac, Michel; Souter, Thibaud; Herrel, Anthony
2013-01-01
Morpho-functional patterns are important drivers of phenotypic diversity given their importance in a fitness-related context. Although modularity of the mandible and skull has been studied extensively in mammals, few studies have explored shape co-variation between these two structures. Despite being developmentally independent, the skull and mandible form a functionally integrated unit. In the present paper we use 3D surface geometric morphometric methods allowing us to explore the form of both skull and mandible in its 3D complexity using the greater white-toothed shrew as a model. This approach allows an accurate 3D description of zones devoid of anatomical landmarks that are functionally important. Two-block partial least-squares approaches were used to describe the co-variation of form between skull and mandible. Moreover, a 3D biomechanical model was used to explore the functional consequences of the observed patterns of co-variation. Our results show the efficiency of the method in investigations of complex morpho-functional patterns. Indeed, the description of shape co-variation between the skull and the mandible highlighted the location and the intensity of their functional relationships through the jaw adductor muscles linking these two structures. Our results also demonstrated that shape co-variation in form between the skull and mandible has direct functional consequences on the recruitment of muscles during biting. PMID:23964811
The affect of tissue depth variation on craniofacial reconstructions.
Starbuck, John M; Ward, Richard E
2007-10-25
We examined the affect of tissue depth variation on the reconstruction of facial form, through the application of the American method, utilizing published tissue depth measurements for emaciated, normal, and obese faces. In this preliminary study, three reconstructions were created on reproductions of the same skull for each set of tissue depth measurements. The resulting morphological variation was measured quantitatively using the anthropometric craniofacial variability index (CVI). This method employs 16 standard craniofacial anthropometric measurements and the results reflect "pattern variation" or facial harmony. We report no appreciable variation in the quantitative measure of the pattern facial form obtained from the three different sets of tissue depths. Facial similarity was assessed qualitatively utilizing surveys of photographs of the three reconstructions. Surveys indicated that subjects frequently perceived the reconstructions as representing different individuals. This disagreement indicates that size of the face may blind observers to similarities in facial form. This research is significant because it illustrates the confounding effect that normal human variation contributes in the successful recognition of individuals from a representational three-dimensional facial reconstruction. Research results suggest that successful identification could be increased if multiple reconstructions were created which reflect a wide range of possible outcomes for facial form. The creation of multiple facial images, from a single skull, will be facilitated as computerized versions of facial reconstruction are further developed and refined.
Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations
NASA Technical Reports Server (NTRS)
Perigaud, Claire; Zlotnicki, Victor
1992-01-01
To improve the estimate accuracy of large-scale meridional sea-level variations, Geosat ERM data on the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over about 5000 km and the second technique removes an along-track once-per-revolution sine wave about 40,000 km. Results obtained show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
Cornillon, P A; Pontier, D; Rochet, M J
2000-02-21
Comparative methods are used to investigate the attributes of present species or higher taxa. Difficulties arise from the phylogenetic heritage: taxa are not independent and neglecting phylogenetic inertia can lead to inaccurate results. Within-species variations in life-history traits are also not negligible, but most comparative methods are not designed to take them into account. Taxa are generally described by a single value for each trait. We have developed a new model which permits the incorporation of both the phylogenetic relationships among populations and within-species variations. This is an extension of classical autoregressive models. This family of models was used to study the effect of fishing on six demographic traits measured on 77 populations of teleost fishes. Copyright 2000 Academic Press.
NASA Astrophysics Data System (ADS)
Qian, Tingting; Wang, Lianlian; Lu, Guanghua
2017-07-01
Radar correlated imaging (RCI) introduces the optical correlated imaging technology to traditional microwave imaging, which has raised widespread concern recently. Conventional RCI methods neglect the structural information of complex extended target, which makes the quality of recovery result not really perfect, thus a novel combination of negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. The sparsity is measured by a sequential order one negative exponential function, then the 2D total variation technique is introduced to design a novel optimization problem for extended target imaging. And the proven alternating direction method of multipliers is applied to solve the new problem. Experimental results show that the proposed algorithm could realize high resolution imaging efficiently for extended target.
Sung, Kyunghyun; Nayak, Krishna S
2008-03-01
To measure and characterize variations in the transmitted radio frequency (RF) (B1+) field in cardiac magnetic resonance imaging (MRI) at 3 Tesla. Knowledge of the B1+ field is necessary for the calibration of pulse sequences, image-based quantitation, and signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) optimization. A variation of the saturated double-angle method for cardiac B1+ mapping is described. A total of eight healthy volunteers and two cardiac patients were scanned using six parallel short-axis slices spanning the left ventricle (LV). B1+ profiles were analyzed to determine the amount of variation and dominant patterns of variation across the LV. A total of five to 10 measurements were obtained in each volunteer to determine an upper bound of measurement repeatability. The amount of flip angle variation was found to be 23% to 48% over the LV in mid-short-axis slices and 32% to 63% over the entire LV volume. The standard deviation (SD) of multiple flip angle measurements was <1.4 degrees over the LV in all subjects, indicating excellent repeatability of the proposed measurement method. The pattern of in-plane flip angle variation was found to be primarily unidirectional across the LV, with a residual variation of < or =3% in all subjects. The in-plane B1+ variation over the LV at 3T with body-coil transmission is on the order of 32% to 63% and is predominantly unidirectional in short-axis slices. Reproducible B1+ measurements over the whole heart can be obtained in a single breathhold of 16 heartbeats.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1989-01-01
In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since have a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
On the effect of standard PFEM remeshing on volume conservation in free-surface fluid flow problems
NASA Astrophysics Data System (ADS)
Franci, Alessandro; Cremonesi, Massimiliano
2017-07-01
The aim of this work is to analyze the remeshing procedure used in the particle finite element method (PFEM) and to investigate how this operation may affect the numerical results. The PFEM remeshing algorithm combines the Delaunay triangulation and the Alpha Shape method to guarantee a good quality of the Lagrangian mesh also in large deformation processes. However, this strategy may lead to local variations of the topology that may cause an artificial change of the global volume. The issue of volume conservation is here studied in detail. An accurate description of all the situations that may induce a volume variation during the PFEM regeneration of the mesh is provided. Moreover, the crucial role of the parameter α used in the Alpha Shape method is highlighted and a range of values of α for which the differences between the numerical results are negligible, is found. Furthermore, it is shown that the variation of volume induced by the remeshing reduces by refining the mesh. This check of convergence is of paramount importance for the reliability of the PFEM. The study is carried out for 2D free-surface fluid dynamics problems, however the conclusions can be extended to 3D and to all those problems characterized by significant variations of internal and external boundaries.
Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations
NASA Astrophysics Data System (ADS)
Mirloo, Mahsa; Ebrahimnezhad, Hosein
2018-03-01
In this paper, a novel method is proposed to detect 3D object salient points robust to isometric variations and stable against scaling and noise. Salient points can be used as the representative points from object protrusion parts in order to improve the object matching and retrieval algorithms. The proposed algorithm is started by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previous salient point, a new point is added to this set of points in each iteration. By adding every salient point, decision function is updated. Hence, a condition is created for selecting the next point in which the iterative point is not extracted from the same protrusion part so that drawing out of a representative point from every protrusion part is guaranteed. This method is stable against model variations with isometric transformations, scaling, and noise with different levels of strength due to using a feature robust to isometric variations and considering the relation between the salient points. In addition, the number of points used in averaging process is decreased in this method, which leads to lower computational complexity in comparison with the other salient point detection algorithms.
NASA Technical Reports Server (NTRS)
Brown, James L.; Naughton, Jonathan W.
1999-01-01
A thin film of oil on a surface responds primarily to the wall shear stress generated on that surface by a three-dimensional flow. The oil film is also subject to wall pressure gradients, surface tension effects and gravity. The partial differential equation governing the oil film flow is shown to be related to Burgers' equation. Analytical and numerical methods for solving the thin oil film equation are presented. A direct numerical solver is developed where the wall shear stress variation on the surface is known and which solves for the oil film thickness spatial and time variation on the surface. An inverse numerical solver is also developed where the oil film thickness spatial variation over the surface at two discrete times is known and which solves for the wall shear stress variation over the test surface. A One-Time-Level inverse solver is also demonstrated. The inverse numerical solver provides a mathematically rigorous basis for an improved form of a wall shear stress instrument suitable for application to complex three-dimensional flows. To demonstrate the complexity of flows for which these oil film methods are now suitable, extensive examination is accomplished for these analytical and numerical methods as applied to a thin oil film in the vicinity of a three-dimensional saddle of separation.
Methods for Gas Sensing with Single-Walled Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Kaul, Anupama B. (Inventor)
2013-01-01
Methods for gas sensing with single-walled carbon nanotubes are described. The methods comprise biasing at least one carbon nanotube and exposing to a gas environment to detect variation in temperature as an electrical response.
Eigenvalue sensitivity analysis of planar frames with variable joint and support locations
NASA Technical Reports Server (NTRS)
Chuang, Ching H.; Hou, Gene J. W.
1991-01-01
Two sensitivity equations are derived in this study based upon the continuum approach for eigenvalue sensitivity analysis of planar frame structures with variable joint and support locations. A variational form of an eigenvalue equation is first derived in which all of the quantities are expressed in the local coordinate system attached to each member. Material derivative of this variational equation is then sought to account for changes in member's length and orientation resulting form the perturbation of joint and support locations. Finally, eigenvalue sensitivity equations are formulated in either domain quantities (by the domain method) or boundary quantities (by the boundary method). It is concluded that the sensitivity equation derived by the boundary method is more efficient in computation but less accurate than that of the domain method. Nevertheless, both of them in terms of computational efficiency are superior to the conventional direct differentiation method and the finite difference method.
Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance
NASA Astrophysics Data System (ADS)
Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao
2016-01-01
Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure. Fiber bundle sensors are widely used for tip-timing measurement. However, the variation of clearance between the sensor and the blade will bring a tip-timing error to fiber bundle sensors due to the change in signal amplitude. This article presents methods based on software and hardware to reduce the error caused by the tip clearance change. The software method utilizes both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments are conducted and the results prove that both methods can effectively reduce the impact of tip clearance variation on the blade tip-timing and improve the accuracy of measurements.
Application of the variational-asymptotical method to composite plates
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Lee, Bok W.; Atilgan, Ali R.
1992-01-01
A method is developed for the 3D analysis of laminated plate deformation which is an extension of a variational-asymptotical method by Atilgan and Hodges (1991). Both methods are based on the treatment of plate deformation by splitting the 3D analysis into linear through-the-thickness analysis and 2D plate analysis. Whereas the first technique tackles transverse shear deformation in the second asymptotical approximation, the present method simplifies its treatment and restricts it to the first approximation. Both analytical techniques are applied to the linear cylindrical bending problem, and the strain and stress distributions are derived and compared with those of the exact solution. The present theory provides more accurate results than those of the classical laminated-plate theory for the transverse displacement of 2-, 3-, and 4-layer cross-ply laminated plates. The method can give reliable estimates of the in-plane strain and displacement distributions.
Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise
Liu, Gang; Huang, Ting-Zhu; Liu, Jun; Lv, Xiao-Guang
2015-01-01
The total variation (TV) regularization method is an effective method for image deblurring in preserving edges. However, the TV based solutions usually have some staircase effects. In order to alleviate the staircase effects, we propose a new model for restoring blurred images under impulse noise. The model consists of an ℓ1-fidelity term and a TV with overlapping group sparsity (OGS) regularization term. Moreover, we impose a box constraint to the proposed model for getting more accurate solutions. The solving algorithm for our model is under the framework of the alternating direction method of multipliers (ADMM). We use an inner loop which is nested inside the majorization minimization (MM) iteration for the subproblem of the proposed method. Compared with other TV-based methods, numerical results illustrate that the proposed method can significantly improve the restoration quality, both in terms of peak signal-to-noise ratio (PSNR) and relative error (ReE). PMID:25874860
Time-frequency domain SNR estimation and its application in seismic data processing
NASA Astrophysics Data System (ADS)
Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen
2014-08-01
Based on an approach estimating frequency domain signal-to-noise ratio (FSNR), we propose a method to evaluate time-frequency domain signal-to-noise ratio (TFSNR). This method adopts short-time Fourier transform (STFT) to estimate instantaneous power spectrum of signal and noise, and thus uses their ratio to compute TFSNR. Unlike FSNR describing the variation of SNR with frequency only, TFSNR depicts the variation of SNR with time and frequency, and thus better handles non-stationary seismic data. By considering TFSNR, we develop methods to improve the effects of inverse Q filtering and high frequency noise attenuation in seismic data processing. Inverse Q filtering considering TFSNR can better solve the problem of amplitude amplification of noise. The high frequency noise attenuation method considering TFSNR, different from other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples of synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
Fuglsang, Karsten; Pedersen, Niels Hald; Larsen, Anna Warberg; Astrup, Thomas Fruergaard
2014-02-01
A dedicated sampling and measurement method was developed for long-term measurements of biogenic and fossil-derived CO(2) from thermal waste-to-energy processes. Based on long-term sampling of CO(2) and (14)C determination, plant-specific emission factors can be determined more accurately, and the annual emission of fossil CO(2) from waste-to-energy plants can be monitored according to carbon trading schemes and renewable energy certificates. Weekly and monthly measurements were performed at five Danish waste incinerators. Significant variations between fractions of biogenic CO(2) emitted were observed, not only over time, but also between plants. From the results of monthly samples at one plant, the annual mean fraction of biogenic CO(2) was found to be 69% of the total annual CO(2) emissions. From weekly samples, taken every 3 months at the five plants, significant seasonal variations in biogenic CO(2) emissions were observed (between 56% and 71% biogenic CO(2)). These variations confirmed that biomass fractions in the waste can vary considerably, not only from day to day but also from month to month. An uncertainty budget for the measurement method itself showed that the expanded uncertainty of the method was ± 4.0 pmC (95 % confidence interval) at 62 pmC. The long-term sampling method was found to be useful for waste incinerators for determination of annual fossil and biogenic CO(2) emissions with relatively low uncertainty.
NASA Astrophysics Data System (ADS)
Djoko, Martin; Kofane, T. C.
2018-06-01
We investigate the propagation characteristics and stabilization of generalized-Gaussian pulse in highly nonlinear homogeneous media with higher-order dispersion terms. The optical pulse propagation has been modeled by the higher-order (3+1)-dimensional cubic-quintic-septic complex Ginzburg-Landau [(3+1)D CQS-CGL] equation. We have used the variational method to find a set of differential equations characterizing the variation of the pulse parameters in fiber optic-links. The variational equations we obtained have been integrated numerically by the means of the fourth-order Runge-Kutta (RK4) method, which also allows us to investigate the evolution of the generalized-Gaussian beam and the pulse evolution along an optical doped fiber. Then, we have solved the original nonlinear (3+1)D CQS-CGL equation with the split-step Fourier method (SSFM), and compare the results with those obtained, using the variational approach. A good agreement between analytical and numerical methods is observed. The evolution of the generalized-Gaussian beam has shown oscillatory propagation, and bell-shaped dissipative optical bullets have been obtained under certain parameter values in both anomalous and normal chromatic dispersion regimes. Using the natural control parameter of the solution as it evolves, named the total energy Q, our numerical simulations reveal the existence of 3D stable vortex dissipative light bullets, 3D stable spatiotemporal optical soliton, stationary and pulsating optical bullets, depending on the used initial input condition (symmetric or elliptic).
NASA Astrophysics Data System (ADS)
Ioannidis, P.; Schmitt, J. H. M. M.
2016-10-01
The deviations of the mid-transit times of an exoplanet from a linear ephemeris are usually the result of gravitational interactions with other bodies in the system. However, these types of transit timing variations (TTV) can also be introduced by the influences of star spots on the shape of the transit profile. Here we use the method of unsharp masking to investigate the photometric light curves of planets with ambiguous TTV to compare the features in their O-C diagram with the occurrence and in-transit positions of spot-crossing events. This method seems to be particularly useful for the examination of transit light curves with only small numbers of in-transit data points, I.e., the long cadence light curves from Kepler satellite. As a proof of concept we apply this method to the light curve and the estimated eclipse timing variations of the eclipsing binary KOI-1452, for which we prove their non-gravitational nature. Furthermore, we use the method to study the rotation properties of the primary star of the system KOI-1452 and show that the spots responsible for the timing variations rotate with different periods than the most prominent periods of the system's light curve. We argue that the main contribution in the measured photometric variability of KOI-1452 originates in g-mode oscillations, which makes the primary star of the system a γ-Dor type variable candidate.
Variational-based segmentation of bio-pores in tomographic images
NASA Astrophysics Data System (ADS)
Bauer, Benjamin; Cai, Xiaohao; Peth, Stephan; Schladitz, Katja; Steidl, Gabriele
2017-01-01
X-ray computed tomography (CT) combined with a quantitative analysis of the resulting volume images is a fruitful technique in soil science. However, the variations in X-ray attenuation due to different soil components keep the segmentation of single components within these highly heterogeneous samples a challenging problem. Particularly demanding are bio-pores due to their elongated shape and the low gray value difference to the surrounding soil structure. Recently, variational models in connection with algorithms from convex optimization were successfully applied for image segmentation. In this paper we apply these methods for the first time for the segmentation of bio-pores in CT images of soil samples. We introduce a novel convex model which enforces smooth boundaries of bio-pores and takes the varying attenuation values in the depth into account. Segmentation results are reported for different real-world 3D data sets as well as for simulated data. These results are compared with two gray value thresholding methods, namely indicator kriging and a global thresholding procedure, and with a morphological approach. Pros and cons of the methods are assessed by considering geometric features of the segmented bio-pore systems. The variational approach features well-connected smooth pores while not detecting smaller or shallower pores. This is an advantage in cases where the main bio-pores network is of interest and where infillings, e.g., excrements of earthworms, would result in losing pore connections as observed for the other thresholding methods.
Disaggregating tree and grass phenology in tropical savannas
NASA Astrophysics Data System (ADS)
Zhou, Qiang
Savannas are mixed tree-grass systems and as one of the world's largest biomes represent an important component of the Earth system affecting water and energy balances, carbon sequestration and biodiversity as well as supporting large human populations. Savanna vegetation structure and its distribution, however, may change because of major anthropogenic disturbances from climate change, wildfire, agriculture, and livestock production. The overstory and understory may have different water use strategies, different nutrient requirements and have different responses to fire and climate variation. The accurate measurement of the spatial distribution and structure of the overstory and understory are essential for understanding the savanna ecosystem. This project developed a workflow for separating the dynamics of the overstory and understory fractional cover in savannas at the continental scale (Australia, South America, and Africa). Previous studies have successfully separated the phenology of Australian savanna vegetation into persistent and seasonal greenness using time series decomposition, and into fractions of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV) and bare soil (BS) using linear unmixing. This study combined these methods to separate the understory and overstory signal in both the green and senescent phenological stages using remotely sensed imagery from the MODIS (MODerate resolution Imaging Spectroradiometer) sensor. The methods and parameters were adjusted based on the vegetation variation. The workflow was first tested at the Australian site. Here the PV estimates for overstory and understory showed best performance, however NPV estimates exhibited spatial variation in validation relationships. At the South American site (Cerrado), an additional method based on frequency unmixing was developed to separate green vegetation components with similar phenology. When the decomposition and frequency methods were compared, the frequency method was better for extracting the green tree phenology, but the original decomposition method was better for retrieval of understory grass phenology. Both methods, however, were less accurate than in the Cerrado than in Australia due to intermingling and intergrading of grass and small woody components. Since African savanna trees are predominantly deciduous, the frequency method was combined with the linear unmixing of fractional cover to attempt to separate the relatively similar phenology of deciduous trees and seasonal grasses. The results for Africa revealed limitations associated with both methods. There was spatial and seasonal variation in the spectral indices used to unmix fractional cover resulting in poor validation for NPV in particular. The frequency analysis revealed significant phase variation indicative of different phenology, but these could not be clearly ascribed to separate grass and tree components. Overall findings indicate that site-specific variation and vegetation structure and composition, along with MODIS pixel resolution, and the simple vegetation index approach used was not robust across the different savanna biomes. The approach showed generally better performance for estimating PV fraction, and separating green phenology, but there were major inconsistencies, errors and biases in estimation of NPV and BS outside of the Australian savanna environment.
Interior region-of-interest reconstruction using a small, nearly piecewise constant subregion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taguchi, Katsuyuki; Xu Jingyan; Srivastava, Somesh
2011-03-15
Purpose: To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy that uses differentiated backprojection (DBP) projection onto convex sets (POCS) [H. Kudo et al., ''Tiny a priori knowledge solves the interior problem in computed tomography'', Phys. Med. Biol. 53, 2207-2231 (2008)] and a tiny knowledge that there exists a nearly piecewise constant subregion. Methods: The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in the pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, ''Compressed sensing basedmore » interior tomography'', Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., ''A general total variation minimization theorem for compressed sensing based interior tomography'', Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of {approx}500 mm. The projection data were truncated either moderately to limit the detector coverage to diameter 350 mm of the object or severely to cover diameter 199 mm. Images were reconstructed using the proposed method. Results: The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel values, was less than 2.0% or 4.5% with the moderate or severe truncation cases, respectively, except near the boundary of the ROI. Conclusions: The proposed method allows for reconstructing interior ROI images with sufficient accuracy with a tiny knowledge that there exists a nearly piecewise constant subregion.« less
Surface energy and surface stress on vicinals by revisiting the Shuttleworth relation
NASA Astrophysics Data System (ADS)
Hecquet, Pascal
2018-04-01
In 1998 [Surf. Sci. 412/413, 639 (1998)], we showed that the step stress on vicinals varies as 1/L, L being the distance between steps, while the inter-step interaction energy primarily follows the law as 1/L2 from the well-known Marchenko-Parshin model. In this paper, we give a better understanding of the interaction term of the step stress. The step stress is calculated with respect to the nominal surface stress. Consequently, we calculate the diagonal surface stresses in both the vicinal system (x, y, z) where z is normal to the vicinal and the projected system (x, b, c) where b is normal to the nominal terrace. Moreover, we calculate the surface stresses by using two methods: the first called the 'Zero' method, from the surface pressure forces and the second called the 'One' method, by homogeneously deforming the vicinal in the parallel direction, x or y, and by calculating the surface energy excess proportional to the deformation. By using the 'One' method on the vicinal Cu(0 1 M), we find that the step deformations, due to the applied deformation, vary as 1/L by the same factor for the tensor directions bb and cb, and by twice the same factor for the parallel direction yy. Due to the vanishing of the surface stress normal to the vicinal, the variation of the step stress in the direction yy is better described by using only the step deformation in the same direction. We revisit the Shuttleworth formula, for while the variation of the step stress in the direction xx is the same between the two methods, the variation in the direction yy is higher by 76% for the 'Zero' method with respect to the 'One' method. In addition to the step energy, we confirm that the variation of the step stress must be taken into account for the understanding of the equilibrium of vicinals when they are not deformed.
Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods
NASA Astrophysics Data System (ADS)
Liu, Qinya; Tromp, Jeroen
2008-07-01
We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ then may be expressed as , where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and Kd determine the sensitivity to topographic variations δlnd. We demonstrate also how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.
Wong, Kin-Yiu; Gao, Jiali
2008-09-09
In this paper, we describe an automated integration-free path-integral (AIF-PI) method, based on Kleinert's variational perturbation (KP) theory, to treat internuclear quantum-statistical effects in molecular systems. We have developed an analytical method to obtain the centroid potential as a function of the variational parameter in the KP theory, which avoids numerical difficulties in path-integral Monte Carlo or molecular dynamics simulations, especially at the limit of zero-temperature. Consequently, the variational calculations using the KP theory can be efficiently carried out beyond the first order, i.e., the Giachetti-Tognetti-Feynman-Kleinert variational approach, for realistic chemical applications. By making use of the approximation of independent instantaneous normal modes (INM), the AIF-PI method can readily be applied to many-body systems. Previously, we have shown that in the INM approximation, the AIF-PI method is accurate for computing the quantum partition function of a water molecule (3 degrees of freedom) and the quantum correction factor for the collinear H(3) reaction rate (2 degrees of freedom). In this work, the accuracy and properties of the KP theory are further investigated by using the first three order perturbations on an asymmetric double-well potential, the bond vibrations of H(2), HF, and HCl represented by the Morse potential, and a proton-transfer barrier modeled by the Eckart potential. The zero-point energy, quantum partition function, and tunneling factor for these systems have been determined and are found to be in excellent agreement with the exact quantum results. Using our new analytical results at the zero-temperature limit, we show that the minimum value of the computed centroid potential in the KP theory is in excellent agreement with the ground state energy (zero-point energy) and the position of the centroid potential minimum is the expectation value of particle position in wave mechanics. The fast convergent property of the KP theory is further examined in comparison with results from the traditional Rayleigh-Ritz variational approach and Rayleigh-Schrödinger perturbation theory in wave mechanics. The present method can be used for thermodynamic and quantum dynamic calculations, including to systematically determine the exact value of zero-point energy and to study kinetic isotope effects for chemical reactions in solution and in enzymes.
Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications
NASA Astrophysics Data System (ADS)
Thanaborvornwiwat, N.; Patanukhom, K.
2018-04-01
Marker registration is a fundamental process to estimate camera poses in marker-based Augmented Reality (AR) systems. We developed AR system that creates correspondence virtual objects on handwritten text markers. This paper presents a new method for registration that is robust for low-content text markers, variation of camera poses, and variation of handwritten styles. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for a feature point extraction. The experiment shows that we need to extract only five feature points per image which can provide the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared performance of the proposed method to some existing registration methods and found that the proposed method can provide better accuracy and time efficiency.
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction
Lu, Hongyang; Wei, Jingbo; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from their noisy and incomplete measurements is always a challenge especially for medical MR image with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to avoid oil painting artifacts largely. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by variable splitting technique and the alternating direction method of multiplier. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.
Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from their noisy and incomplete measurements is always a challenge especially for medical MR image with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to avoid oil painting artifacts largely. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by variable splitting technique and the alternating direction method of multiplier. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
NASA Astrophysics Data System (ADS)
Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya
1991-10-01
The critical exponents of the bond percolation model are calculated in the D(=2, 3, \\cdots)-dimensional simple cubic lattice on the basis of Suzuki’s coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of critical exponents α, β, γ and ν in comparison with ones estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of ghost field without recourse to the s→1 limit of the s-state Potts model.
NASA Astrophysics Data System (ADS)
Leijala, Ulpu; Björkqvist, Jan-Victor; Johansson, Milla M.; Pellikka, Havu
2017-04-01
Future coastal management continuously strives for more location-exact and precise methods to investigate possible extreme sea level events and to face flooding hazards in the most appropriate way. Evaluating future flooding risks by understanding the behaviour of the joint effect of sea level variations and wind waves is one of the means to make more comprehensive flooding hazard analysis, and may at first seem like a straightforward task to solve. Nevertheless, challenges and limitations such as availability of time series of the sea level and wave height components, the quality of data, significant locational variability of coastal wave height, as well as assumptions to be made depending on the study location, make the task more complicated. In this study, we present a statistical method for combining location-specific probability distributions of water level variations (including local sea level observations and global mean sea level rise) and wave run-up (based on wave buoy measurements). The goal of our method is to obtain a more accurate way to account for the waves when making flooding hazard analysis on the coast compared to the approach of adding a separate fixed wave action height on top of sea level -based flood risk estimates. As a result of our new method, we gain maximum elevation heights with different return periods of the continuous water mass caused by a combination of both phenomena, "the green water". We also introduce a sensitivity analysis to evaluate the properties and functioning of our method. The sensitivity test is based on using theoretical wave distributions representing different alternatives of wave behaviour in relation to sea level variations. As these wave distributions are merged with the sea level distribution, we get information on how the different wave height conditions and shape of the wave height distribution influence the joint results. Our method presented here can be used as an advanced tool to minimize over- and underestimation of the combined effect of sea level variations and wind waves, and to help coastal infrastructure planning and support smooth and safe operation of coastal cities in a changing climate.
Agreement between ambulatory, home, and office blood pressure variability.
Juhanoja, Eeva P; Niiranen, Teemu J; Johansson, Jouni K; Puukka, Pauli J; Jula, Antti M
2016-01-01
Ambulatory, home, and office blood pressure (BP) variability are often treated as a single entity. Our aim was to assess the agreement between these three methods for measuring BP variability. Twenty-four-hour ambulatory BP monitoring, 28 home BP measurements, and eight office BP measurements were performed on 461 population-based or hypertensive participants. Five variability indices were calculated for all measurement methods: SD, coefficient of variation, maximum-minimum difference, variability independent of the mean, and average real variability. Pearson's correlation coefficients were calculated for indices measured with different methods. The agreement between different measurement methods on the diagnoses of extreme BP variability (participants in the highest decile of variability) was assessed with kappa (κ) coefficients. SBP/DBP variability was greater in daytime (coefficient of variation: 9.8 ± 2.9/11.9 ± 3.6) and night-time ambulatory measurements (coefficient of variation: 8.6 ± 3.4/12.1 ± 4.5) than in home (coefficient of variation: 4.4 ± 1.8/4.7 ± 1.9) and office (coefficient of variation: 4.6 ± 2.4/5.2 ± 2.6) measurements (P < 0.001/0.001 for all). Pearson's correlation coefficients for systolic/diastolic daytime or night-time ambulatory-home, ambulatory-office, and home-office variability indices ranged between 0.07-0.25/0.12-0.23, 0.13-0.26/0.03-0.22 and 0.13-0.24/0.10-0.19, respectively, indicating, at most, a weak positive (r < 0.3) relationship. The agreement between measurement methods on diagnoses of extreme SBP/DBP variability was only slight (κ < 0.2), with the κ coefficients for daytime and night-time ambulatory-home, ambulatory-office, and home-office agreement varying between-0.014-0.20/0.061-0.15, 0.037-0.18/0.082-0.15, and 0.082-0.13/0.045-0.15, respectively. Shorter-term and longer-term BP variability assessed by different methods of BP measurement seem to correlate only weakly with each other. Our study suggests that BP variability measured by different methods and timeframes may reflect different phenomena, not a single entity.
Within subject variation of satiety hormone responses to a standard lunch
USDA-ARS?s Scientific Manuscript database
Background: Insulin (Ins), leptin (Lep), GLP-1, and glucagon (Glg) are known regulators of glucose metabolism and food intake, but reproducibility in response to a meal challenge is not well characterized. We assessed within-subject variation of these hormones in 14 young adult women.Methods: Subjec...
Mechanisms of Vowel Variation in African American English
ERIC Educational Resources Information Center
Holt, Yolanda Feimster
2018-01-01
Purpose: This research explored mechanisms of vowel variation in African American English by comparing 2 geographically distant groups of African American and White American English speakers for participation in the African American Shift and the Southern Vowel Shift. Method: Thirty-two male (African American: n = 16, White American controls: n =…
Seasonal Variation in Epidemiology
ERIC Educational Resources Information Center
Marrero, Osvaldo
2013-01-01
Seasonality analyses are important in medical research. If the incidence of a disease shows a seasonal pattern, then an environmental factor must be considered in its etiology. We discuss a method for the simultaneous analysis of seasonal variation in multiple groups. The nuts and bolts are explained using simple trigonometry, an elementary…
Variation and Commonality in Phenomenographic Research Methods
ERIC Educational Resources Information Center
Akerlind, Gerlese S.
2012-01-01
This paper focuses on the data analysis stage of phenomenographic research, elucidating what is involved in terms of both commonality and variation in accepted practice. The analysis stage of phenomenographic research is often not well understood. This paper helps to clarify the process, initially by collecting together in one location the more…
Neighborhood Disadvantage and Variations in Blood Pressure
ERIC Educational Resources Information Center
Cathorall, Michelle L.; Xin, Huaibo; Peachey, Andrew; Bibeau, Daniel L.; Schulz, Mark; Aronson, Robert
2015-01-01
Purpose: To examine the extent to which neighborhood disadvantage accounts for variation in blood pressure. Methods: Demographic, biometric, and self-reported data from 19,261 health screenings were used. Addresses of participants were geocoded and located within census block groups (n = 14,510, 75.3%). Three hierarchical linear models were…
Predictors of Between-Family and Within-Family Variation in Parent-Child Relationships
ERIC Educational Resources Information Center
O'Connor, Thomas G.; Dunn, Judy; Jenkins, Jennifer M.; Rasbash, Jon
2006-01-01
Background: Previous studies have found that multiple factors are associated with parent-child relationship quality, but have not distinguished potential sources of between-family and within-family variation in parent-child relationship quality. Methods: Approximately equal numbers of biological (non-stepfamilies), single-mother, stepfather, and…
Language Variation and Limits to Communication. Technical Report No. 3.
ERIC Educational Resources Information Center
Simons, Gary Francis
Strategies are developed for understanding how language variation limits communication. Methods of measuring communication are discussed, including an intelligibility measure used in the Solomon Islands. The analysis of data gathered using communication measurement is discussed. The result of the analysis is a determination of the number of…
Statistical image reconstruction from correlated data with applications to PET
Alessio, Adam; Sauer, Ken; Kinahan, Paul
2008-01-01
Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements
NASA Astrophysics Data System (ADS)
Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.
2014-12-01
A new algorithm is developed to detect aerosols and clouds based on micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to inhibit the impact of increasing noise with distance, then a value distribution equalization (VDE) method is introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. This method can detect clouds and aerosols with high accuracy, although classification of aerosols and clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) and China Taihu site. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and shows bi-modal vertical distributions with maximum frequency at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation and the maximum frequency is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at SGP.
Armour, John A. L.; Palla, Raquel; Zeeuwen, Patrick L. J. M.; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J.
2007-01-01
Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies. PMID:17175532
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Yu Mao, E-mail: yumaowu@fudan.edu.cn; Teng, Si Jia, E-mail: sjteng12@fudan.edu.cn
In this work, we develop the numerical steepest descent path (NSDP) method to calculate the physical optics (PO) radiations with the quadratic concave phase variations. With the surface integral equation method, the physical optics (PO) scattered fields are formulated and further reduced to the surface integrals. The high frequency physical critical points contributions, including the stationary phase points, the boundary resonance points and the vertex points are comprehensively studied via the proposed NSDP method. The key contributions of this work are twofold. One is that together with the PO integrals taking the quadratic parabolic and hyperbolic phase terms, this workmore » makes the NSDP theory be complete for treating the PO integrals with quadratic phase variations. Another is that, in order to illustrate the transition effect of the high frequency physical critical points, in this work, we consider and further extend the NSDP method to calculate the PO integrals with the coalescence of the high frequency critical points. Numerical results for the highly oscillatory PO integral with the coalescence of the critical points are given to verify the efficiency of the proposed NSDP method. The NSDP method could achieve the frequency independent computational workload and error controllable accuracy in all the numerical experiments, especially for the case of the coalescence of the high frequency critical points.« less
A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R.; Buenrostro-Mariscal, Raymundo
2017-01-01
There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. PMID:28391241
A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R; Buenrostro-Mariscal, Raymundo
2017-06-07
There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. Copyright © 2017 Montesinos-López et al.
ERIC Educational Resources Information Center
Dolan, Thomas G.
2003-01-01
Describes project delivery methods that are replacing the traditional Design/Bid/Build linear approach to the management, design, and construction of new facilities. These variations can enhance construction management and teamwork. (SLD)
TEMPERATURE SCENARIO DEVELOPMENT USING REGRESSION METHODS
A method of developing scenarios of future temperature conditions resulting from climatic change is presented. he method is straightforward and can be used to provide information about daily temperature variations and diurnal ranges, monthly average high, and low temperatures, an...
Assessment of the Performance of a Scanning Wind Doppler Lidar at an Urban-Mountain Site in Seoul
NASA Astrophysics Data System (ADS)
Park, S.; Kim, S. W.
2017-12-01
Winds in the planetary boundary layer (PBL) are important factors for accurate modelling of air quality, numerical weather prediction and conversion of satellite measurements to near-surface air quality information (Seibert et al., AE, 2000; Emeis et al., Meteorol. Z., 2008). In this study, we (1) evaluate wind speed (WS) and direction (WD) retrieved from Wind Doppler Lidar (WDL) measurements by two methods [so called, `sine-fitting (SF) method' and `singular value decomposition (SVD) method'] and (2) analyze the WDL data at Seoul National University (SNU), Seoul, to investigate the diurnal evolution of winds and aerosol characteristics in PBL. Evaluation of the two methods used in retrieving wind from radial velocity was done through comparison with radiosonde soundings from the same site. Winds retrieved using the SVD method from mean radial velocity of 15 minutes showed good agreement with radiosonde profiles (i.e., bias of 0.03 m s-1 and root mean square of 1.70 m s-1 in WS). However, the WDL was found to have difficulty retrieving signals under clean conditions (i.e., too small signal to noise ratio) or under the presence of near-surface optically-thick aerosol/cloud layer (i.e., strong signal attenuation). Despite this shortcoming, the WDL was able to successfully capture the diurnal variation of PBL wind. Two major wind patterns were observed at SNU; first of all, when convective boundary layer was strongly developed, thermally induced winds with large variation of vertical WS in the afternoon and a diurnal variation in WD showing characteristics of mountain and valley winds were observed. Secondly, small variation in WS and WD throughout the day was a major characteristic of cases when wind was largely influenced by the synoptic weather pattern.
A sibling method for identifying vQTLs
Domingue, Ben; Dawes, Christopher; Boardman, Jason; Siegal, Mark
2018-01-01
The propensity of a trait to vary within a population may have evolutionary, ecological, or clinical significance. In the present study we deploy sibling models to offer a novel and unbiased way to ascertain loci associated with the extent to which phenotypes vary (variance-controlling quantitative trait loci, or vQTLs). Previous methods for vQTL-mapping either exclude genetically related individuals or treat genetic relatedness among individuals as a complicating factor addressed by adjusting estimates for non-independence in phenotypes. The present method uses genetic relatedness as a tool to obtain unbiased estimates of variance effects rather than as a nuisance. The family-based approach, which utilizes random variation between siblings in minor allele counts at a locus, also allows controls for parental genotype, mean effects, and non-linear (dominance) effects that may spuriously appear to generate variation. Simulations show that the approach performs equally well as two existing methods (squared Z-score and DGLM) in controlling type I error rates when there is no unobserved confounding, and performs significantly better than these methods in the presence of small degrees of confounding. Using height and BMI as empirical applications, we investigate SNPs that alter within-family variation in height and BMI, as well as pathways that appear to be enriched. One significant SNP for BMI variability, in the MAST4 gene, replicated. Pathway analysis revealed one gene set, encoding members of several signaling pathways related to gap junction function, which appears significantly enriched for associations with within-family height variation in both datasets (while not enriched in analysis of mean levels). We recommend approximating laboratory random assignment of genotype using family data and more careful attention to the possible conflation of mean and variance effects. PMID:29617452
Greek classicism in living structure? Some deductive pathways in animal morphology.
Zweers, G A
1985-01-01
Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods show clearly the high explanatory power of deductive methods in morphology, but they also make one open end most explicit: neutral issues do exist. Full explanation of living structure asks for three entries: functional design within architectural and transformational constraints. The transformational constraint brings necessarily in a stochastic component: an at random variation being a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilling. Nevertheless, finally the question comes up whether for animal structure a similar situation exists as in Greek Classical temples. This means that the at random variation, that is found when the optimal design is used to explain structure, comprises apart from a stochastic part also real deviations being yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".
Zhang, Peng; Li, Houqiang; Wang, Honghui; Wong, Stephen T C; Zhou, Xiaobo
2011-01-01
Peak detection is one of the most important steps in mass spectrometry (MS) analysis. However, the detection result is greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment on a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common peak alignment into a closed-loop framework, which finds the optimal decomposition via both peak intensity and common peak information. The common peak information is derived and loopily refined from the density clustering of the latest peak detection result. Finally, we present an improved ant colony optimization biomarker selection method to build a whole MS analysis system. Experiment shows that our peak detection method can better resist spectrum variations and provide higher sensitivity and lower false detection rates than conventional methods. The benefits from our peak-tree-based system for MS disease analysis are also proved on real SELDI data.
Still-to-video face recognition in unconstrained environments
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing
2015-02-01
Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limit of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are enrolled to avoid overfitting. In order to deal with the single image per person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approach and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.
Digital Signal Processing Methods for Ultrasonic Echoes.
Sinding, Kyle; Drapaca, Corina; Tittmann, Bernhard
2016-04-28
Digital signal processing has become an important component of data analysis needed in industrial applications. In particular, for ultrasonic thickness measurements the signal to noise ratio plays a major role in the accurate calculation of the arrival time. For this application a band pass filter is not sufficient since the noise level cannot be significantly decreased such that a reliable thickness measurement can be performed. This paper demonstrates the abilities of two regularization methods - total variation and Tikhonov - to filter acoustic and ultrasonic signals. Both of these methods are compared to a frequency based filtering for digitally produced signals as well as signals produced by ultrasonic transducers. This paper demonstrates the ability of the total variation and Tikhonov filters to accurately recover signals from noisy acoustic signals faster than a band pass filter. Furthermore, the total variation filter has been shown to reduce the noise of a signal significantly for signals with clear ultrasonic echoes. Signal to noise ratios have been increased over 400% by using a simple parameter optimization. While frequency based filtering is efficient for specific applications, this paper shows that the reduction of noise in ultrasonic systems can be much more efficient with regularization methods.
[An alternative continence mechanism for continent catheterisable micturation].
Honeck, P; Alken, P
2010-01-01
The creation of a stable, reliable, continent and easily catheterisable continence mechanism is an essential prerequisite for the construction of a continent cutaneous urinary reservoir. Although a substantial number of surgical methods has been described, construction is still a complex surgical procedure. The aim of this study was the evaluation of a new method for a continence mechanism using stapled small or large intestine. Small and large pig intestine was used for construction. For stapling the tube a 3 cm or 6 cm double row stapling system was used. Two variations using small and large intestine segments were constructed (IL 1, COL 1, COL 2). A 3 or 6 cm long stapler line was placed alongside a 12 Fr catheter positioned at the antimesenterial side creating a partially two-luminal segment. The open end of the non-catheterised lumen and the opposite intestinal end were closed by continuous sutures. The created tube was then embedded into the pouch. Pressure evaluation was performed for each variation. Intermittent external manual compression was used to simulate sudden pressure exposure. Construction times for the IL 1 and COL 1 variations were 10 +/- 1.5 min and 6.2 +/- 1.3 min for COL 2. All variations showed no leakage during filling or external compression. The maximum capacity was lower for the IL 1 compared to the COL variation. The maximum pressure levels reached did not differ significantly. The described technique is an easy and fast method to construct a continent and easy to catheterize continence mechanism using small or large intestine.
The 1995 revision of the joint US/UK geomagnetic field models - I. Secular variation
Macmillan, S.; Barraclough, D.R.; Quinn, J.M.; Coleman, R.J.
1997-01-01
We present the methods used to derive mathematical models of global secular variation of the main geomagnetic field for the period 1985 to 2000. These secular-variation models are used in the construction of the candidate US/UK models for the Definitive Geomagnetic Reference Field at 1990, the International Geomagnetic Reference Field for 1995 to 2000, and the World Magnetic Model for 1995 to 2000 (see paper II, Quinn et al., 1997). The main sources of data for the secular-variation models are geomagnetic observatories and repeat stations. Over the areas devoid of these data secular-variation information is extracted from aeromagnetic and satellite data. We describe how secular variation is predicted up to the year 2000 at the observatories and repeat stations, how the aeromagnetic and satellite data are used, and how all the data are combined to produce the required models.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence is sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Li, Xin; Babovic, Vladan
2017-04-01
Observed studies on inter-annual variation of precipitation provide insight into the response of precipitation to anthropogenic climate change and natural climate variability. Inter-annual variation of precipitation results from the concurrent variations of precipitation frequency and intensity, understanding of the relative importance of frequency and intensity in the variability of precipitation can help fathom its changing properties. Investigation of the long-term changes of precipitation schemes has been extensively carried out in many regions across the world, however, detailed studies of the relative importance of precipitation frequency and intensity in inter-annual variation of precipitation are still limited, especially in the tropics. Therefore, this study presents a comprehensive framework to investigate the inter-annual variation of precipitation and the dominance of precipitation frequency and intensity in a tropical urban city-state, Singapore, based on long-term (1980-2013) daily precipitation series from 22 rain gauges. First, an iterative Mann-Kendall trend test method is applied to detect long-term trends in precipitation total, frequency and intensity at both annual and seasonal time scales. Then, the relative importance of precipitation frequency and intensity in inducing the inter-annual variation of wet-day precipitation total is analyzed using a dominance analysis method based on linear regression. The results show statistically significant upward trends in wet-day precipitation total, frequency and intensity at annual time scale, however, these trends are not evident during the monsoon seasons. The inter-annual variation of wet-day precipitation is mainly dominated by precipitation intensity for most of the stations at annual time scale and during the Northeast monsoon season. However, during the Southwest monsoon season, the inter-annual variation of wet-day precipitation is mainly dominated by precipitation frequency. These results have implications for water resources management practices in Singapore.
Conditional Random Fields for Fast, Large-Scale Genome-Wide Association Studies
Huang, Jim C.; Meek, Christopher; Kadie, Carl; Heckerman, David
2011-01-01
Understanding the role of genetic variation in human diseases remains an important problem to be solved in genomics. An important component of such variation consist of variations at single sites in DNA, or single nucleotide polymorphisms (SNPs). Typically, the problem of associating particular SNPs to phenotypes has been confounded by hidden factors such as the presence of population structure, family structure or cryptic relatedness in the sample of individuals being analyzed. Such confounding factors lead to a large number of spurious associations and missed associations. Various statistical methods have been proposed to account for such confounding factors such as linear mixed-effect models (LMMs) or methods that adjust data based on a principal components analysis (PCA), but these methods either suffer from low power or cease to be tractable for larger numbers of individuals in the sample. Here we present a statistical model for conducting genome-wide association studies (GWAS) that accounts for such confounding factors. Our method scales in runtime quadratic in the number of individuals being studied with only a modest loss in statistical power as compared to LMM-based and PCA-based methods when testing on synthetic data that was generated from a generalized LMM. Applying our method to both real and synthetic human genotype/phenotype data, we demonstrate the ability of our model to correct for confounding factors while requiring significantly less runtime relative to LMMs. We have implemented methods for fitting these models, which are available at http://www.microsoft.com/science. PMID:21765897
Configuration optimization of space structures
NASA Technical Reports Server (NTRS)
Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David
1991-01-01
The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be apriori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of volume inequality constraint; algorithms for the volume inequality constraint; object function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.
2005-09-01
Hancock, A.K. Godwin, And Anthony T. Yeung. Enzymatic and Chemical Cleavage Methods to Identify Genetic Variation. In Molecular Diagnostics (Ed. G...R.G.H. Cotton, L. Hancock, A.K. Godwin, And Anthony T. Yeung. Enzymatic and Chemical Cleavage Methods to Identify Genetic Variation. In Molecular ... Diagnostics (Ed. G. Patrinos and W Ansorge) in press 2005. 9 Godwin, A.K., Ph.D. E-CONCLUSIONS: E.1. "BRCC36, a Novel Subunit of a BRCA1/2 E3 Ubiquitin
NASA Technical Reports Server (NTRS)
Barranger, John P.
1990-01-01
A novel optical method of measuring 2-D surface strain is proposed. Two linear strains along orthogonal axes and the shear strain between those axes is determined by a variation of Yamaguchi's laser-speckle strain gage technique. It offers the advantages of shorter data acquisition times, less stringent alignment requirements, and reduced decorrelation effects when compared to a previously implemented optical strain rosette technique. The method automatically cancels the translational and rotational components of rigid body motion while simplifying the optical system and improving the speed of response.
Rashev, Svetoslav; Moule, David C
2012-02-15
We perform large scale converged variational vibrational calculations on S(0) formaldehyde up to very high excess vibrational energies (E(v)), E(v)∼17,000cm(-1), using our vibrational method, consisting of a specific search/selection/Lanczos iteration procedure. Using the same method we investigate the vibrational level structure and intramolecular vibrational redistribution (IVR) characteristics for various vibrational levels in this energy range in order to assess the onset of IVR. Copyright © 2011 Elsevier B.V. All rights reserved.
General constraints on sampling wildlife on FIA plots
Bailey, L.L.; Sauer, J.R.; Nichols, J.D.; Geissler, P.H.; McRoberts, Ronald E.; Reams, Gregory A.; Van Deusen, Paul C.; McWilliams, William H.; Cieszewski, Chris J.
2005-01-01
This paper reviews the constraints to sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a State variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three State variables: species richness, abundance, and patch occupancy. All methods incorporate two essential sources of variation: detectability estimation and spatial variation. FIA sampling imposes specific space and time criteria that may need to be adjusted to meet local wildlife objectives.
Interplanetary magnetic flux - Measurement and balance
NASA Technical Reports Server (NTRS)
Mccomas, D. J.; Gosling, J. T.; Phillips, J. L.
1992-01-01
A new method for determining the approximate amount of magnetic flux in various solar wind structures in the ecliptic (and solar rotation) plane is developed using single-spacecraft measurements in interplanetary space and making certain simplifying assumptions. The method removes the effect of solar wind velocity variations and can be applied to specific, limited-extent solar wind structures as well as to long-term variations. Over the 18-month interval studied, the ecliptic plane flux of coronal mass ejections was determined to be about 4 times greater than that of HFDs.
NASA Astrophysics Data System (ADS)
Sumihara, K.
Based upon legitimate variational principles, one microscopic-macroscopic finite element formulation for linear dynamics is presented by Hybrid Stress Finite Element Method. The microscopic application of Geometric Perturbation introduced by Pian and the introduction of infinitesimal limit core element (Baby Element) have been consistently combined according to the flexible and inherent interpretation of the legitimate variational principles initially originated by Pian and Tong. The conceptual development based upon Hybrid Finite Element Method is extended to linear dynamics with the introduction of physically meaningful higher modes.
Time-series analysis of foreign exchange rates using time-dependent pattern entropy
NASA Astrophysics Data System (ADS)
Ishizaki, Ryuji; Inoue, Masayoshi
2013-08-01
Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in foreign exchange rates, in particular, the dollar-yen rate. The time-dependent pattern entropy of the dollar-yen rate was found to be high in the following periods: before and after the turning points of the yen from strong to weak or from weak to strong, and the period after the Lehman shock.
A Tutorial Review on Fractal Spacetime and Fractional Calculus
NASA Astrophysics Data System (ADS)
He, Ji-Huan
2014-11-01
This tutorial review of fractal-Cantorian spacetime and fractional calculus begins with Leibniz's notation for derivative without limits which can be generalized to discontinuous media like fractal derivative and q-derivative of quantum calculus. Fractal spacetime is used to elucidate some basic properties of fractal which is the foundation of fractional calculus, and El Naschie's mass-energy equation for the dark energy. The variational iteration method is used to introduce the definition of fractional derivatives. Fractal derivative is explained geometrically and q-derivative is motivated by quantum mechanics. Some effective analytical approaches to fractional differential equations, e.g., the variational iteration method, the homotopy perturbation method, the exp-function method, the fractional complex transform, and Yang-Laplace transform, are outlined and the main solution processes are given.
Born iterative reconstruction using perturbed-phase field estimates
Astheimer, Jeffrey P.; Waag, Robert C.
2008-01-01
A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements. PMID:19062873
Mean Field Type Control with Congestion (II): An Augmented Lagrangian Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr; Laurière, Mathieu
This work deals with a numerical method for solving a mean-field type control problem with congestion. It is the continuation of an article by the same authors, in which suitably defined weak solutions of the system of partial differential equations arising from the model were discussed and existence and uniqueness were proved. Here, the focus is put on numerical methods: a monotone finite difference scheme is proposed and shown to have a variational interpretation. Then an Alternating Direction Method of Multipliers for solving the variational problem is addressed. It is based on an augmented Lagrangian. Two kinds of boundary conditionsmore » are considered: periodic conditions and more realistic boundary conditions associated to state constrained problems. Various test cases and numerical results are presented.« less
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that the accurate matching of numerical schemes has to be provided in the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve the problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. Then all functionals of variational principle are approximated in space and time by splitting and decomposition methods. Such approach allows us to separately consider, for example, the space-time problems of atmospheric chemistry in the frames of decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each of 3D finite-volumes. To enhance the realization efficiency, the set of chemical reactions is divided on the subsets related to the operators of production and destruction. Then the idea of the Euler's integrating factors is applied in the frames of the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed to the equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobi matrixes which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step but on the different stages of the "global" splitting scheme, the system of atmospheric dynamic equations is solved. For convection - diffusion equations for all state functions in the integrated models we have developed the monotone and stable discrete-analytical numerical schemes [1]-[3] conserving the positivity of the chemical substance concentrations and possessing the properties of energy and mass balance that are postulated in the general variational principle for integrated models. All algorithms for solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by the Programs No 4 of Presidium RAS and No 3 of Mathematical Department of RAS, by RFBR project 11-01-00187 and Integrating projects of SD RAS No 8 and 35. Our studies are in the line with the goals of COST Action ES1004. References Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications// Journal of computational and applied mathematics, 2009, v. 226, 319-330. Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods// Numerical Analysis and Applications, 2012, V. 5, pp 326-341. V. Penenko, E. Tsvetova. Variational methods for constructing the monotone approximations for atmospheric chemistry models //Numerical Analysis and Applications, 2013 (in press).
NASA Astrophysics Data System (ADS)
Liu, Shixing; Liu, Chang; Hua, Wei; Guo, Yongxin
2016-11-01
By using the discrete variational method, we study the numerical method of the general nonholonomic system in the generalized Birkhoffian framework, and construct a numerical method of generalized Birkhoffian equations called a self-adjoint-preserving algorithm. Numerical results show that it is reasonable to study the nonholonomic system by the structure-preserving algorithm in the generalized Birkhoffian framework. Project supported by the National Natural Science Foundation of China (Grant Nos. 11472124, 11572145, 11202090, and 11301350), the Doctor Research Start-up Fund of Liaoning Province, China (Grant No. 20141050), the China Postdoctoral Science Foundation (Grant No. 2014M560203), and the General Science and Technology Research Plans of Liaoning Educational Bureau, China (Grant No. L2013005).
Wind Plant Performance Prediction (WP3) Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, Anna
The methods for analysis of operational wind plant data are highly variable across the wind industry, leading to high uncertainties in the validation and bias-correction of preconstruction energy estimation methods. Lack of credibility in the preconstruction energy estimates leads to significant impacts on project financing and therefore the final levelized cost of energy for the plant. In this work, the variation in the evaluation of a wind plant's operational energy production as a result of variations in the processing methods applied to the operational data is examined. Preliminary results indicate that selection of the filters applied to the data andmore » the filter parameters can have significant impacts in the final computed assessment metrics.« less
NASA Astrophysics Data System (ADS)
Rani, Monika; Bhatti, Harbax S.; Singh, Vikramjeet
2017-11-01
In optical communication, the behavior of the ultrashort pulses of optical solitons can be described through nonlinear Schrodinger equation. This partial differential equation is widely used to contemplate a number of physically important phenomena, including optical shock waves, laser and plasma physics, quantum mechanics, elastic media, etc. The exact analytical solution of (1+n)-dimensional higher order nonlinear Schrodinger equation by He's variational iteration method has been presented. Our proposed solutions are very helpful in studying the solitary wave phenomena and ensure rapid convergent series and avoid round off errors. Different examples with graphical representations have been given to justify the capability of the method.
NASA Astrophysics Data System (ADS)
Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin
2018-05-01
Temperature is usually considered as a fluctuation in near-infrared spectral measurement. Chemometric methods were extensively studied to correct the effect of temperature variations. However, temperature can be considered as a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of temperature distribution in calibration set. Multi-temperature calibration set selection (MTCS) method was proposed to improve the prediction accuracy by considering the temperature distribution of calibration samples. Furthermore, double-temperature calibration set selection (DTCS) method was proposed based on MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on random sampling method and proposed methods. The results from experimental studies showed that the prediction performance was improved by using proposed methods. Therefore, MTCS method and DTCS method will be the alternative methods to improve prediction accuracy in near-infrared spectral measurement.
Dodin, I. Y.; Zhmoginov, A. I.; Ruiz, D. E.
2017-02-24
Applications of variational methods are typically restricted to conservative systems. Some extensions to dissipative systems have been reported too but require ad hoc techniques such as the artificial doubling of the dynamical variables. We propose a different approach. Here, we show that for a broad class of dissipative systems of practical interest, variational principles can be formulated using constant Lagrange multipliers and Lagrangians nonlocal in time, which allow treating reversible and irreversible dynamics on the same footing. A general variational theory of linear dispersion is formulated as an example. Particularly, we present a variational formulation for linear geometrical optics inmore » a general dissipative medium, which is allowed to be nonstationary, inhomogeneous, anisotropic, and exhibit both temporal and spatial dispersion simultaneously.« less
Li, Jianqi; Wang, Yi; Jiang, Yu; Xie, Haibin; Li, Gengying
2009-09-01
An open permanent magnet system with vertical B(0) field and without self-shielding can be quite susceptible to perturbations from external magnetic sources. B(0) variation in such a system located close to a subway station was measured to be greater than 0.7 microT by both MRI and a fluxgate magnetometer. This B(0) variation caused image artifacts. A navigator echo approach that monitored and compensated the view-to-view variation in magnetic resonance signal phase was developed to correct for image artifacts. Human brain imaging experiments using a multislice gradient-echo sequence demonstrated that the ghosting and blurring artifacts associated with B(0) variations were effectively removed using the navigator method.
Performance Analysis of Entropy Methods on K Means in Clustering Process
NASA Astrophysics Data System (ADS)
Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib
2017-12-01
K Means is a non-hierarchical data clustering method that attempts to partition existing data into one or more clusters / groups. This method partitions the data into clusters / groups so that data that have the same characteristics are grouped into the same cluster and data that have different characteristics are grouped into other groups.The purpose of this data clustering is to minimize the objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, the main disadvantage of this method is that the number k is often not known before. Furthermore, a randomly chosen starting point may cause two points to approach the distance to be determined as two centroids. Therefore, for the determination of the starting point in K Means used entropy method where this method is a method that can be used to determine a weight and take a decision from a set of alternatives. Entropy is able to investigate the harmony in discrimination among a multitude of data sets. Using Entropy criteria with the highest value variations will get the highest weight. Given this entropy method can help K Means work process in determining the starting point which is usually determined at random. Thus the process of clustering on K Means can be more quickly known by helping the entropy method where the iteration process is faster than the K Means Standard process. Where the postoperative patient dataset of the UCI Repository Machine Learning used and using only 12 data as an example of its calculations is obtained by entropy method only with 2 times iteration can get the desired end result.