Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
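As a rough illustration of the mixture formulation, the sketch below computes the E-step weights of an ECM-style algorithm for a two-component competing-risks mixture with logistic mixing proportions. It is a minimal sketch only: the component survival functions are taken as Weibull purely for illustration, whereas the paper leaves the component-baseline hazards completely unspecified, and all variable names are placeholders.

```python
import numpy as np

def logistic_pi(x, beta):
    """Mixing proportion pi_1(x) = P(failure type 1 | covariates x) from the logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ beta)))

def weibull_surv(t, shape, scale):
    """Illustrative parametric component survival function (stand-in for the unspecified baseline)."""
    return np.exp(-(t / scale) ** shape)

def e_step(t, delta, cause, x, beta, comp_params):
    """Posterior probability that each subject belongs to failure-type component 1.

    t           : observed times
    delta       : 1 = failure observed, 0 = censored
    cause       : failure type (1 or 2) for uncensored subjects; ignored when censored
    comp_params : ((shape1, scale1), (shape2, scale2)) -- illustrative Weibull components
    For uncensored subjects the type is known; for censored subjects Bayes' rule is used.
    The CM-steps (updating logistic and regression coefficients) would follow these weights.
    """
    pi1 = logistic_pi(x, beta)
    s1 = weibull_surv(t, *comp_params[0])
    s2 = weibull_surv(t, *comp_params[1])
    return np.where(delta == 1,
                    (cause == 1).astype(float),
                    pi1 * s1 / (pi1 * s1 + (1 - pi1) * s2))
```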
Synthesis and Control of Flexible Systems with Component-Level Uncertainties
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Lim, Kyong B.
2009-01-01
An efficient and computationally robust method for synthesis of component dynamics is developed. The method defines the interface forces/moments as feasible vectors in transformed coordinates to ensure that connectivity requirements of the combined structure are met. The synthesized system is then defined in a transformed set of feasible coordinates. The simplicity of form is exploited to effectively deal with modeling parametric and non-parametric uncertainties at the substructure level. Uncertainty models of reasonable size and complexity are synthesized for the combined structure from those in the substructure models. In particular, we address frequency and damping uncertainties at the component level. The approach first considers the robustness of synthesized flexible systems. It is then extended to deal with non-synthesized dynamic models with component-level uncertainties by projecting uncertainties to the system level. A numerical example is given to demonstrate the feasibility of the proposed approach.
Free-form geometric modeling by integrating parametric and implicit PDEs.
Du, Haixia; Qin, Hong
2007-01-01
Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting, based on kernel partial least squares (PLS) regression in a reproducing kernel Hilbert space. Our aim is to apply the methodology to smoothing experimental data where some knowledge of the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is available a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression, which extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
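A minimal sketch of the kernel PLS smoothing idea: form an RBF kernel matrix over the inputs, centre it, and regress the noisy response on it with ordinary PLS. The locality weighting described in the abstract is not reproduced here, and the kernel width, number of components and test signal are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel

# Noisy experimental data (illustrative)
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)[:, None]
y = np.sin(4 * np.pi * x[:, 0]) + 0.2 * rng.standard_normal(200)

# Kernel PLS: use the double-centred RBF kernel matrix as the feature representation
K = rbf_kernel(x, x, gamma=50.0)
K_centred = K - K.mean(axis=0) - K.mean(axis=1)[:, None] + K.mean()

pls = PLSRegression(n_components=5, scale=False)
pls.fit(K_centred, y)
y_smooth = pls.predict(K_centred).ravel()   # smoothed curve at the observed inputs
```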
Fixed-Order Mixed Norm Designs for Building Vibration Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
2000-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
Rank-based permutation approaches for non-parametric factorial designs.
Umlauft, Maria; Konietschke, Frank; Pauly, Markus
2017-11-01
Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up Wald-type statistics and ANOVA-type statistics are the current state of the art. The first method is asymptotically exact but a rather liberal statistical testing procedure for small to moderate sample size, while the latter is only an approximation which does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies foster these theoretical findings. A real data set exemplifies its applicability. © 2017 The British Psychological Society.
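The following sketch illustrates the permutation idea in its simplest one-way form: a Kruskal-Wallis-type rank statistic is recomputed under random relabelling of the pooled observations. The paper's Wald- and ANOVA-type statistics for general factorial designs are not reproduced here; this is a hedged illustration of the principle only.

```python
import numpy as np
from scipy.stats import kruskal

def rank_permutation_test(groups, n_perm=5000, seed=1):
    """Permutation p-value for a Kruskal-Wallis-type rank statistic.

    groups : list of 1-D arrays, one per factor-level combination.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate(groups)
    sizes = [len(g) for g in groups]
    obs = kruskal(*groups).statistic

    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)                      # relabel observations
        pieces = np.split(perm, np.cumsum(sizes)[:-1])
        if kruskal(*pieces).statistic >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

# Example: three illustrative samples with a location shift in the third group
rng = np.random.default_rng(0)
p_value = rank_permutation_test([rng.normal(0, 1, 15), rng.normal(0, 1, 15), rng.normal(1, 1, 15)])
```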
Parameterization of DFTB3/3OB for Sulfur and Phosphorus for Chemical and Biological Applications
2015-01-01
We report the parametrization of the approximate density functional tight binding method, DFTB3, for sulfur and phosphorus. The parametrization is done in a framework consistent with our previous 3OB set established for O, N, C, and H; thus, the resulting parameters can be used to describe a broad set of organic and biologically relevant molecules. The 3d orbitals are included in the parametrization, and the electronic parameters are chosen to minimize errors in the atomization energies. The parameters are tested using a fairly diverse set of molecules of biological relevance, focusing on the geometries, reaction energies, proton affinities, and hydrogen bonding interactions of these molecules; vibrational frequencies are also examined, although less systematically. The results of DFTB3/3OB are compared to those from DFT (B3LYP and PBE), ab initio (MP2, G3B3), and several popular semiempirical methods (PM6 and PDDG), as well as predictions of DFTB3 with the older parametrization (the MIO set). In general, DFTB3/3OB is a major improvement over the previous parametrization (DFTB3/MIO), and for the majority of cases tested here, it also outperforms PM6 and PDDG, especially for structural properties, vibrational frequencies, hydrogen bonding interactions, and proton affinities. For reaction energies, DFTB3/3OB exhibits major improvement over DFTB3/MIO, due mainly to a significant reduction of errors in atomization energies; compared to PM6 and PDDG, DFTB3/3OB also generally performs better, although the magnitude of improvement is more modest. Compared to high-level calculations, DFTB3/3OB is most successful at predicting geometries; larger errors are found in the energies, although the results can be greatly improved by computing single point energies at a high level with DFTB3 geometries. There are several remaining issues with the DFTB3/3OB approach, most notably its difficulty in describing phosphate hydrolysis reactions involving a change in the coordination number of the phosphorus, for which a specific parametrization (3OB/OPhyd) is developed as a temporary solution; this suggests that the current DFTB3 methodology has limited transferability for complex phosphorus chemistry at the level of accuracy required for detailed mechanistic investigations. Therefore, fundamental improvements in the DFTB3 methodology are needed for a reliable method that describes phosphorus chemistry without ad hoc parameters. Nevertheless, DFTB3/3OB is expected to be a competitive QM method in QM/MM calculations for studying phosphorus/sulfur chemistry in condensed phase systems, especially as a low-level method that drives the sampling in a dual-level QM/MM framework. PMID:24803865
A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.
1998-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
NASA Astrophysics Data System (ADS)
Koyuncu, A.; Cigeroglu, E.; Özgüven, H. N.
2017-10-01
In this study, a new approach is proposed for identification of structural nonlinearities by employing cascaded optimization and neural networks. A linear finite element model of the system and frequency response functions measured at arbitrary locations of the system are used in this approach. Using the finite element model, a training data set is created, which appropriately spans the possible nonlinear configuration space of the system. A classification neural network trained on these data sets then localizes and determines the types of all nonlinearities associated with the nonlinear degrees of freedom in the system. A new training data set spanning the parametric space associated with the determined nonlinearities is created to facilitate parametric identification. Utilizing this data set, initially, a feed-forward regression neural network is trained, which parametrically identifies the classified nonlinearities. Then, the results obtained are further improved by carrying out an optimization which uses the network-identified values as starting points. Unlike identification methods available in the literature, the proposed approach does not require data collection from the degrees of freedom where nonlinear elements are attached, and furthermore, it is sufficiently accurate even in the presence of measurement noise. The application of the proposed approach is demonstrated on an example system with nonlinear elements and on a real-life experimental setup with a local nonlinearity.
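A hedged sketch of the cascaded idea using scikit-learn networks: one classifier labels the nonlinearity type from FRF-derived features, a regression network estimates its coefficient, and the network estimate seeds a local optimization. The feature set, labels and the final objective function are placeholders, not the authors' formulation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
from scipy.optimize import minimize

# X_frf: FRF-derived features simulated from the FE model for many candidate
# nonlinear configurations (placeholder random data, for illustration only)
rng = np.random.default_rng(0)
X_frf = rng.standard_normal((500, 40))
nl_type = rng.integers(0, 3, 500)            # e.g. 0 = none, 1 = cubic stiffness, 2 = friction
nl_param = rng.uniform(0.1, 2.0, 500)        # nonlinearity coefficient

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000).fit(X_frf, nl_type)
reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(X_frf, nl_param)

x_measured = rng.standard_normal(40)          # features from the measured FRFs
kind = clf.predict(x_measured[None, :])[0]    # classified nonlinearity type
guess = reg.predict(x_measured[None, :])[0]   # network estimate as a starting point

def misfit(p):
    # Placeholder objective: the real one would compare simulated and measured FRFs
    return float(np.sum((p - guess) ** 2))

refined = minimize(misfit, x0=[guess]).x[0]   # optimization refines the network estimate
```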
Model-free estimation of the psychometric function
Żychaluk, Kamila; Foster, David H.
2009-01-01
A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
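A minimal sketch of the local linear approach with a leave-one-out cross-validated bandwidth. For simplicity it smooths the raw proportions with a Gaussian kernel, whereas the published method works through a link function appropriate to binomial data; the stimulus levels and responses below are illustrative.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y|x] at x0 with Gaussian kernel bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ np.diag(w) @ X, X.T @ np.diag(w) @ y)
    return beta[0]

def cv_bandwidth(x, y, candidates):
    """Leave-one-out cross-validation over candidate bandwidths."""
    best_h, best_err = None, np.inf
    for h in candidates:
        errs = [(y[i] - local_linear(x[i], np.delete(x, i), np.delete(y, i), h)) ** 2
                for i in range(len(x))]
        if np.mean(errs) < best_err:
            best_h, best_err = h, np.mean(errs)
    return best_h

# Stimulus levels and proportion correct (illustrative)
levels = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
p_correct = np.array([0.03, 0.10, 0.30, 0.62, 0.81, 0.93, 0.98])
h = cv_bandwidth(levels, p_correct, np.linspace(0.3, 1.5, 13))
fit = [local_linear(x0, levels, p_correct, h) for x0 in np.linspace(0.5, 3.5, 50)]
```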
Accurate segmentation framework for the left ventricle wall from cardiac cine MRI
NASA Astrophysics Data System (ADS)
Sliman, H.; Khalifa, F.; Elnakib, A.; Soliman, A.; Beache, G. M.; Gimel'farb, G.; Emam, A.; Elmaghraby, A.; El-Baz, A.
2013-10-01
We propose a novel, fast, robust, bi-directional coupled parametric deformable model to segment the left ventricle (LV) wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of the LV wall while tracking the evolution of the parametric deformable model's control points. To accurately estimate the marginal density of each deformable model control point, the empirical marginal grey level distributions (first-order appearance) inside and outside the boundary of the deformable model are modeled with adaptive linear combinations of discrete Gaussians (LCDG). The second-order visual appearance of the LV wall is accurately modeled with a new rotationally invariant second-order Markov-Gibbs random field (MGRF). We tested the proposed segmentation approach on 15 data sets from 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground truth and automated segmentation contours. Our approach achieves a mean DSC value of 0.926±0.022 and an AD value of 2.16±0.60, compared to two other level set methods that achieve 0.904±0.033 and 0.885±0.02 for DSC, and 2.86±1.35 and 5.72±4.70 for AD, respectively.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain the most optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
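The two standard estimators compared in the study can be sketched in a few lines; the example assumes a lognormally distributed analyte, transforms it toward normality before applying the parametric (Gaussian) formula, and contrasts it with the non-parametric percentile estimate. The data and transformation choice are illustrative, and the robust estimator is omitted.

```python
import numpy as np

def reference_interval_parametric(x):
    """Gaussian (parametric) 95% reference interval: mean +/- 1.96 SD."""
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - 1.96 * s, m + 1.96 * s

def reference_interval_nonparametric(x):
    """Non-parametric 95% reference interval: 2.5th and 97.5th percentiles."""
    return np.percentile(x, 2.5), np.percentile(x, 97.5)

# Skewed analyte data (illustrative): transform toward a Gaussian, then back-transform
rng = np.random.default_rng(42)
analyte = rng.lognormal(mean=1.0, sigma=0.4, size=240)

log_low, log_high = reference_interval_parametric(np.log(analyte))
print("parametric (log-transformed):", np.exp(log_low), np.exp(log_high))
print("non-parametric:", reference_interval_nonparametric(analyte))
```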
Free response approach in a parametric system
NASA Astrophysics Data System (ADS)
Huang, Dishan; Zhang, Yueyue; Shao, Hexi
2017-07-01
In this study, a new approach to predict the free response in a parametric system is investigated. It is proposed in the special form of a trigonometric series with an exponentially decaying function of time, based on the concept of frequency splitting. By applying harmonic balance, the parametric vibration equation is transformed into an infinite set of homogeneous linear equations, from which the principal oscillation frequency can be computed, and all coefficients of harmonic components can be obtained. With initial conditions, arbitrary constants in a general solution can be determined. To analyze the computational accuracy and consistency, an approach error function is defined, which is used to assess the computational error in the proposed approach and in the standard numerical approach based on the Runge-Kutta algorithm. Furthermore, an example of a dynamic model of airplane wing flutter on a turbine engine is given to illustrate the applicability of the proposed approach. Numerical solutions show that the proposed approach exhibits high accuracy in mathematical expression, and it is valuable for theoretical research and engineering applications of parametric systems.
Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob
2016-08-01
The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low-order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric approach to the estimation of directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.
Kerschbamer, Rudolf
2015-05-01
This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.
Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes.
Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D
2016-10-01
This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. © The Author 2016. Published by Oxford University Press.
Hutson, Alan D
2018-01-01
In this note, we develop a novel semi-parametric estimator of the survival curve that is comparable to the product-limit estimator under very relaxed assumptions. The estimator is based on a beta parametrization that warps the empirical distribution of the observed censored and uncensored data. The parameters are obtained using a pseudo-maximum likelihood approach, adjusting the survival curve to account for the censored observations. In the univariate setting, the new estimator tends to better extend the range of the survival estimation given a high degree of censoring. However, the key feature of this paper is that we develop a new two-group semi-parametric exact permutation test for comparing survival curves that is generally superior to the classic log-rank and Wilcoxon tests and provides the best global power across a variety of alternatives. The new test is readily extended to the k-group setting. PMID:26988931
A Parametric k-Means Algorithm
Tarpey, Thaddeus
2007-01-01
Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
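The parametric k-means idea can be sketched directly: fit the assumed distribution by maximum likelihood, simulate a very large sample from the fitted distribution, and run k-means on the simulated data to approximate the principal points. The univariate Gaussian case below is an assumption chosen for illustration; the data and sample sizes are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def parametric_k_means(data, k, n_sim=100_000, seed=0):
    """Principal points of a fitted Gaussian: ML fit, simulate, then k-means.

    A univariate illustration only; the paper treats general parametric
    distributions and multivariate settings.
    """
    rng = np.random.default_rng(seed)
    mu_hat, sigma_hat = np.mean(data), np.std(data)          # ML estimates
    sim = rng.normal(mu_hat, sigma_hat, size=n_sim)[:, None]  # large simulated sample
    km = KMeans(n_clusters=k, n_init=10).fit(sim)
    return np.sort(km.cluster_centers_.ravel())

# Illustrative anthropometric-style measurements (e.g. for sizing equipment)
sample = np.random.default_rng(1).normal(175.0, 7.0, size=300)
print(parametric_k_means(sample, k=3))
```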
Bayesian Dose-Response Modeling in Sparse Data
NASA Astrophysics Data System (ADS)
Kim, Steven B.
This book discusses Bayesian dose-response modeling in small samples applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they have disagreeing prior opinions. In this regard, we consider compromising the disagreement and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics, which reflects the perspective of trial participants. The second level is population-level ethics, which reflects the perspective of future patients. We extensively compare two existing statistical methods which focus on each perspective and propose a new method which balances the two conflicting perspectives. In toxicology studies, experimental units are living animals. Here we focus on a potential non-monotonic dose-response relationship which is known as hormesis. Briefly, hormesis is a phenomenon which can be characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessments, the estimation of a parameter, which is known as a benchmark dose, can be highly sensitive to a class of assumptions, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as possibilities. In addition, we discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory. Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a wrong parametric assumption. In this regard, we consider a robust experimental design which does not require any parametric assumption.
Scarpazza, Cristina; Nichols, Thomas E; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea
2016-01-01
In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that family-wise false positive error rate (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM are much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which does not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM, by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (Modulated, Unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set; and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results were not dependent on the level of smoothing and modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates. The critical implication of this finding is that VBM can be used to characterize neuroanatomical alterations in individual subjects as long as non-parametric statistics are employed.
Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D
2016-06-01
Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity, such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is an approach for EIT images of neural activity.
Nonparametric estimation of benchmark doses in environmental risk assessment
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
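A hedged sketch of the nonparametric BMD calculation: an isotonic (monotone) fit to the quantal dose-response data, an extra-risk definition of the benchmark response, and a percentile bootstrap for a lower confidence limit. The dose-response data, 10% benchmark response, and bootstrap settings are illustrative, and the estimator here is a simplified stand-in for the Bhattacharya-Kong procedure cited in the abstract.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def bmd_isotonic(dose, n, events, bmr=0.10):
    """Benchmark dose from an isotonic (monotone non-decreasing) fit to quantal data.

    Extra risk: R(d) = (p(d) - p(0)) / (1 - p(0)); the BMD is the smallest dose on a
    fine grid where the fitted extra risk reaches the benchmark response bmr.
    """
    p_hat = events / n
    iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
    iso.fit(dose, p_hat, sample_weight=n)
    grid = np.linspace(dose.min(), dose.max(), 2001)
    p = iso.predict(grid)
    extra = (p - p[0]) / (1.0 - p[0])
    above = grid[extra >= bmr]
    return above[0] if above.size else np.nan

# Illustrative quantal-response study
dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
n = np.array([50, 50, 50, 50, 50])
events = np.array([2, 4, 9, 18, 33])

bmd = bmd_isotonic(dose, n, events)

# Percentile bootstrap lower confidence limit (BMDL)
rng = np.random.default_rng(0)
boot = [bmd_isotonic(dose, n, rng.binomial(n, events / n)) for _ in range(500)]
bmdl = np.nanpercentile(boot, 5)
```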
NASA Technical Reports Server (NTRS)
Meyer, Peter; Larson, Steven A.; Hansen, Earl G.; Itten, Klaus I.
1993-01-01
Remotely sensed data have geometric characteristics and representation which depend on the type of the acquisition system used. To correlate such data over large regions with other real world representation tools like conventional maps or Geographic Information System (GIS) for verification purposes, or for further treatment within different data sets, a coregistration has to be performed. In addition to the geometric characteristics of the sensor there are two other dominating factors which affect the geometry: the stability of the platform and the topography. There are two basic approaches for a geometric correction on a pixel-by-pixel basis: (1) A parametric approach using the location of the airplane and inertial navigation system data to simulate the observation geometry; and (2) a non-parametric approach using tie points or ground control points. It is well known that the non-parametric approach is not reliable enough for the unstable flight conditions of airborne systems, and is not satisfying in areas with significant topography, e.g. mountains and hills. The present work describes a parametric preprocessing procedure which corrects effects of flight line and attitude variation as well as topographic influences and is described in more detail by Meyer.
NASA Astrophysics Data System (ADS)
Maity, Kalipada; Pradhan, Swastik
2018-04-01
In this study, machining of titanium alloy (grade 5) is carried out using an MT-CVD coated cutting tool. Titanium alloys possess a superior strength-to-weight ratio with good corrosion resistance, and many industries use them for manufacturing various types of lightweight components. Parts made from Ti-6Al-4V are widely used in the aerospace, biomedical, automotive and marine sectors. Conventional machining of this material is very difficult due to its low thermal conductivity and high chemical reactivity. To achieve a good surface finish with minimum tool wear, the machining is carried out with the MT-CVD coated cutting tool. The experiments follow a Taguchi L27 array layout with three cutting variables and their levels. To find the optimum parametric setting, the desirability function analysis (DFA) approach is used, and analysis of variance is studied to determine the percentage contribution of each cutting variable. The optimum parametric setting obtained from DFA was validated through a confirmation test.
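Desirability function analysis itself is easy to sketch: each response is mapped to a [0, 1] desirability and the run with the highest composite (geometric-mean) desirability is selected. The responses, specification limits and linear desirability forms below are assumptions for illustration, not the machining data of the study.

```python
import numpy as np

def desirability_smaller_is_better(y, low, high):
    """Individual desirability for a smaller-the-better response (linear form)."""
    return np.clip((high - y) / (high - low), 0.0, 1.0)

def desirability_larger_is_better(y, low, high):
    """Individual desirability for a larger-the-better response (linear form)."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

# Hypothetical responses for a few runs of a Taguchi design:
# columns: surface roughness (minimize), tool wear (minimize), material removal rate (maximize)
runs = np.array([[0.82, 0.11, 30.0],
                 [0.65, 0.14, 26.0],
                 [0.74, 0.09, 34.0]])

d1 = desirability_smaller_is_better(runs[:, 0], 0.5, 1.0)
d2 = desirability_smaller_is_better(runs[:, 1], 0.05, 0.20)
d3 = desirability_larger_is_better(runs[:, 2], 20.0, 40.0)

overall = (d1 * d2 * d3) ** (1 / 3)       # composite (geometric-mean) desirability
best_run = int(np.argmax(overall))        # index of the preferred parametric setting
```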
Identifying Attributes of CO2 Leakage Zones in Shallow Aquifers Using a Parametric Level Set Method
NASA Astrophysics Data System (ADS)
Sun, A. Y.; Islam, A.; Wheeler, M.
2016-12-01
Leakage through abandoned wells and geologic faults poses the greatest risk to CO2 storage permanence. For shallow aquifers, secondary CO2 plumes emanating from the leak zones may go undetected for a sustained period of time and have the greatest potential to cause large-scale and long-term environmental impacts. Identification of the attributes of leak zones, including their shape, location, and strength, is required for proper environmental risk assessment. This study applies a parametric level set (PaLS) method to characterize the leakage zone. Level set methods are appealing for tracking topological changes and recovering unknown shapes of objects. However, level set evolution using the conventional level set methods is challenging. In PaLS, the level set function is approximated using a weighted sum of basis functions and the level set evolution problem is replaced by an optimization problem. The efficacy of PaLS is demonstrated through recovering the source zone created by CO2 leakage into a carbonate aquifer. Our results show that PaLS is a robust source identification method that can recover the approximate source locations in the presence of measurement errors, model parameter uncertainty, and inaccurate initial guesses of source flux strengths. The PaLS inversion framework introduced in this work is generic and can be adapted for any reactive transport model by switching the pre- and post-processing routines.
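A minimal sketch of the PaLS representation: the level set function is a weighted sum of radial basis functions, so the leak-zone geometry is controlled by a handful of parameters rather than a gridded level set. The centers, widths and weights below are illustrative; in the actual method they would be updated inside the inversion/optimization loop.

```python
import numpy as np

def pals_level_set(x, y, centers, widths, alphas, c=0.1):
    """Parametric level set phi(x, y) = sum_k alpha_k * RBF_k(x, y).

    The leakage-zone indicator is {phi > c}; evolving the shape means updating the
    low-dimensional parameters (alphas, centers, widths) rather than evolving phi
    on a computational grid.
    """
    phi = np.zeros_like(x, dtype=float)
    for (cx, cy), w, a in zip(centers, widths, alphas):
        phi += a * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * w ** 2))
    return phi

# Evaluate a two-bump source-zone shape on a grid (illustrative parameters)
xx, yy = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))
phi = pals_level_set(xx, yy,
                     centers=[(30.0, 40.0), (70.0, 60.0)],
                     widths=[8.0, 12.0],
                     alphas=[1.0, 0.6])
source_zone = phi > 0.1        # boolean mask of the reconstructed leak zone
```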
Thresholding functional connectomes by means of mixture modeling.
Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F
2018-05-01
Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem, however. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-False Discovery Rates derived on the basis of the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. The sparse connectomes obtained from mixture modeling are further discussed in the light of the previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that, using our method, we are able to extract similar information on the group level as can be achieved with permutation testing, even though these two methods are not equivalent. With both of these methods, we obtain functional decoupling between the two hemispheres in the higher order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
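A hedged sketch of the mixture-modeling idea: fit a two-component Gaussian mixture to the vector of edge weights (partial correlations) and keep edges assigned to the higher-mean component. A posterior-probability cutoff is used here as a simple stand-in for the paper's pseudo-FDR criterion, and the synthetic connectivity matrix is purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def threshold_connectome(partial_corr, keep_prob=0.95, seed=0):
    """Sparsify a partial-correlation matrix using a two-component Gaussian mixture.

    One component absorbs the bulk of weak/unreliable edges (empirical null), the
    other the sparse set of reliable connections; edges are kept when the posterior
    probability of the 'signal' component exceeds keep_prob.
    """
    iu = np.triu_indices_from(partial_corr, k=1)
    edges = partial_corr[iu].reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=seed).fit(edges)
    signal = int(np.argmax(gmm.means_.ravel()))          # component with the larger mean
    post = gmm.predict_proba(edges)[:, signal]
    mask = np.zeros_like(partial_corr, dtype=bool)
    mask[iu] = post > keep_prob
    return mask | mask.T

# Synthetic 40-node partial-correlation matrix: mostly weak edges plus one strong block
rng = np.random.default_rng(1)
pc = rng.normal(0.0, 0.05, size=(40, 40))
pc[:5, :5] = rng.normal(0.4, 0.05, size=(5, 5))
pc = (pc + pc.T) / 2
sparse_edges = threshold_connectome(pc)
```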
Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M
2006-04-21
Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN) and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight in combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.
Ionescu, Crina-Maria; Geidl, Stanislav; Svobodová Vařeková, Radka; Koča, Jaroslav
2013-10-28
We focused on the parametrization and evaluation of empirical models for fast and accurate calculation of conformationally dependent atomic charges in proteins. The models were based on the electronegativity equalization method (EEM), and the parametrization procedure was tailored to proteins. We used large protein fragments as reference structures and fitted the EEM model parameters using atomic charges computed by three population analyses (Mulliken, Natural, iterative Hirshfeld), at the Hartree-Fock level with two basis sets (6-31G*, 6-31G**) and in two environments (gas phase, implicit solvation). We parametrized and successfully validated 24 EEM models. When tested on insulin and ubiquitin, all models reproduced quantum mechanics level charges well and were consistent with respect to population analysis and basis set. Specifically, the models showed on average a correlation of 0.961, RMSD 0.097 e, and average absolute error per atom 0.072 e. The EEM models can be used with the freely available EEM implementation EEM_SOLVER.
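The EEM calculation itself reduces to a small linear system: electronegativity equalization across atoms plus a total-charge constraint. The sketch below sets that system up and solves it; the per-atom parameters, distance units and kappa value are made-up placeholders, not the fitted parameters reported in the paper.

```python
import numpy as np

def eem_charges(coords, A, B, kappa, total_charge=0.0):
    """Solve the EEM linear system for atomic charges.

    coords : (N, 3) Cartesian coordinates
    A, B   : per-atom EEM parameters (electronegativity- and hardness-like terms)
    Unknowns are the N charges plus the equalized electronegativity chi_bar:
        A_i + B_i q_i + kappa * sum_j q_j / R_ij = chi_bar,   sum_i q_i = total_charge
    """
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if j != i:
                M[i, j] = kappa / dist[i, j]
        M[i, n] = -1.0          # -chi_bar term
        rhs[i] = -A[i]
    M[n, :n] = 1.0              # charge conservation row
    rhs[n] = total_charge
    return np.linalg.solve(M, rhs)[:n]

# Water-like toy geometry with made-up parameters (NOT the fitted EEM parameter set)
coords = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
q = eem_charges(coords, A=np.array([8.5, 4.5, 4.5]), B=np.array([11.0, 13.0, 13.0]), kappa=0.5)
```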
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard
2010-03-31
Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstructions, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well-defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, the choice of which is left arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. When tested on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the largest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood-based models of sequence evolution, which opens the possibility of further improvements.
Multi-parametric variational data assimilation for hydrological forecasting
NASA Astrophysics Data System (ADS)
Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.
2017-12-01
Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to take better-informed decisions. A common practice in probabilistic streamflow forecasting is to force deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at the representation of meteorological uncertainty but neglects uncertainty of the hydrological model as well as its initial conditions. Complementary approaches use probabilistic data assimilation techniques to receive a variety of initial states or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data, as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation in the lead time performance of perfect forecasts (i.e. observed data as forcing variables) as well as deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% for CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of rank histogram and produces a narrower ensemble spread.
Oliveira, Augusto F; Philipsen, Pier; Heine, Thomas
2015-11-10
In the first part of this series, we presented a parametrization strategy to obtain high-quality electronic band structures on the basis of density-functional-based tight-binding (DFTB) calculations and published a parameter set called QUASINANO2013.1. Here, we extend our parametrization effort to include the remaining terms that are needed to compute the total energy and its gradient, commonly referred to as repulsive potential. Instead of parametrizing these terms as a two-body potential, we calculate them explicitly from the DFTB analogues of the Kohn-Sham total energy expression. This strategy requires only two further numerical parameters per element. Thus, the atomic configuration and four real numbers per element are sufficient to define the DFTB model at this level of parametrization. The QUASINANO2015 parameter set allows the calculation of energy, structure, and electronic structure of all systems composed of elements ranging from H to Ca. Extensive benchmarks show that the overall accuracy of QUASINANO2015 is comparable to that of well-established methods, including PM7 and hand-tuned DFTB parameter sets, while coverage of a much larger range of chemical systems is available.
Moss, Brian G; Yeaton, William H
2013-10-01
Annually, American colleges and universities provide developmental education (DE) to millions of underprepared students; however, evaluation estimates of DE benefits have been mixed. Using a prototypic exemplar of DE, our primary objective was to investigate the utility of a replicative evaluative framework for assessing program effectiveness. Within the context of the regression discontinuity (RD) design, this research examined the effectiveness of a DE program for five, sequential cohorts of first-time college students. Discontinuity estimates were generated for individual terms and cumulatively, across terms. Participants were 3,589 first-time community college students. DE program effects were measured by contrasting both college-level English grades and a dichotomous measure of pass/fail, for DE and non-DE students. Parametric and nonparametric estimates of overall effect were positive for continuous and dichotomous measures of achievement (grade and pass/fail). The variability of program effects over time was determined by tracking results within individual terms and cumulatively, across terms. Applying this replication strategy, DE's overall impact was modest (an effect size of approximately .20) but quite consistent, based on parametric and nonparametric estimation approaches. A meta-analysis of five RD results yielded virtually the same estimate as the overall, parametric findings. Subset analysis, though tentative, suggested that males benefited more than females, while academic gains were comparable for different ethnicities. The cumulative, within-study comparison, replication approach offers considerable potential for the evaluation of new and existing policies, particularly when effects are relatively small, as is often the case in applied settings.
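A minimal sketch of the sharp regression-discontinuity estimate underlying the design: fit separate local linear regressions on each side of the placement-score cutoff and take the jump at the cutoff as the effect of assignment to developmental education. The simulated scores, grades, cutoff and bandwidth are illustrative assumptions, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

def rd_estimate(score, outcome, cutoff, bandwidth):
    """Sharp regression-discontinuity estimate of the effect at the cutoff.

    Fits outcome on (score - cutoff) within a bandwidth on each side; the difference
    in intercepts is the local effect of treatment, here assigned below the cutoff.
    """
    z = score - cutoff
    keep = np.abs(z) <= bandwidth
    left = keep & (z < 0)           # DE students (below the placement cutoff)
    right = keep & (z >= 0)         # non-DE students
    fit_l = sm.OLS(outcome[left], sm.add_constant(z[left])).fit()
    fit_r = sm.OLS(outcome[right], sm.add_constant(z[right])).fit()
    return fit_l.params[0] - fit_r.params[0]

# Simulated placement scores and later course grades with a 0.25-point DE effect
rng = np.random.default_rng(0)
placement = rng.uniform(0, 100, 2000)
grade = 2.0 + 0.01 * placement + 0.25 * (placement < 60) + rng.normal(0, 0.5, 2000)
effect = rd_estimate(placement, grade, cutoff=60.0, bandwidth=15.0)
```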
Comparison of four approaches to a rock facies classification problem
Dubois, M.K.; Bohling, Geoffrey C.; Chakrabarti, S.
2007-01-01
In this study, seven classifiers based on four different approaches were tested in a rock facies classification problem: classical parametric methods using Bayes' rule, and non-parametric methods using fuzzy logic, k-nearest neighbor, and a feed-forward, back-propagating artificial neural network. The objective was to determine the most effective classifier for geologic facies prediction in wells without cores in the Panoma gas field, southwest Kansas. Study data include 3600 samples with known rock facies class (from core), each sample having either four or five measured properties (wire-line log curves) and two derived geologic properties (geologic constraining variables). The sample set was divided into two subsets, one for training and one for testing the ability of the trained classifier to correctly assign classes. Artificial neural networks clearly outperformed all other classifiers and are effective tools for this particular classification problem. Classical parametric models were inadequate due to the nature of the predictor variables (high dimensional and not linearly correlated) and the feature space of the classes (overlapping). The other non-parametric methods tested, k-nearest neighbor and fuzzy logic, would need considerable improvement to match the neural network effectiveness, but further work, possibly combining certain aspects of the three non-parametric methods, may be justified. © 2006 Elsevier Ltd. All rights reserved.
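As an illustration of the kind of classifier comparison described, here is a hedged sketch using scikit-learn on synthetic stand-in data. It mirrors only the k-nearest-neighbor versus feed-forward network contrast; the array sizes, feature counts and class labels are placeholders, not the actual wire-line log data, so accuracies here are near chance level.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in: 3600 samples, 7 predictors, 9 facies classes
rng = np.random.default_rng(42)
X = rng.normal(size=(3600, 7))
y = rng.integers(0, 9, size=3600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [
    ("k-nearest neighbor", KNeighborsClassifier(n_neighbors=5)),
    ("feed-forward ANN", MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000)),
]:
    model = make_pipeline(StandardScaler(), clf)
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", model.score(X_te, y_te))
```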
Multi-Response Optimization of Laser Micro Marking Process: A Grey-Fuzzy Approach
NASA Astrophysics Data System (ADS)
Shivakoti, I.; Das, P. P.; Kibria, G.; Pradhan, B. B.; Mustafa, Z.; Ghadai, R. K.
2017-07-01
The selection of an optimal parametric combination for efficient machining has always been a challenging issue for manufacturing researchers. The optimal parametric combination provides better machining, which improves productivity and product quality and subsequently reduces production cost and time. This paper presents a hybrid approach of grey relational analysis and fuzzy logic to obtain the optimal parametric combination for better laser beam micro marking of the gallium nitride (GaN) work material. Response surface methodology has been implemented for the design of experiments, considering three parameters at five levels each. Current, frequency and scanning speed were considered as the process parameters, and mark width, mark depth and mark intensity as the process responses.
Parametric Modeling as a Technology of Rapid Prototyping in Light Industry
NASA Astrophysics Data System (ADS)
Tomilov, I. N.; Grudinin, S. N.; Frolovsky, V. D.; Alexandrov, A. A.
2016-04-01
The paper deals with a parametric modeling method for virtual mannequins for the purposes of design automation in the clothing industry. The described approach includes the steps of generating the basic model from the initial one (obtained by 3D scanning), its parameterization and deformation. The complex surfaces are represented by a wireframe model. The modeling results are evaluated with a set of similarity factors: deformed models are compared with their virtual prototypes, and the quality of the results is estimated by the standard deviation factor.
Model selection criterion in survival analysis
NASA Astrophysics Data System (ADS)
Karabey, Uǧur; Tutkun, Nihal Ata
2017-07-01
Survival analysis deals with the time until the occurrence of an event of interest such as death, recurrence of an illness, equipment failure or divorce. There are various survival models, with semi-parametric or parametric approaches, used in the medical, natural or social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion or the Bayesian information criterion is used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
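A minimal sketch of how such criteria can be compared across candidate parametric survival models with right-censored data (exponential versus Weibull); the data, likelihoods and model set are illustrative only and not those of the study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical right-censored survival data: times t, event indicators d (1 = event)
rng = np.random.default_rng(7)
t = rng.weibull(1.5, 200) * 10.0
c = rng.uniform(0, 15, 200)
d = (t <= c).astype(float)
t = np.minimum(t, c)

def negloglik_weibull(params):
    # censored log-likelihood: sum of d*log h(t) + log S(t)
    k, lam = np.exp(params)                      # enforce positivity
    logh = np.log(k / lam) + (k - 1) * np.log(t / lam)
    logS = -(t / lam) ** k
    return -np.sum(d * logh + logS)

def negloglik_exponential(params):
    lam = np.exp(params[0])
    return -np.sum(-d * np.log(lam) - t / lam)

for name, nll, k_par, x0 in [
    ("Weibull", negloglik_weibull, 2, np.zeros(2)),
    ("Exponential", negloglik_exponential, 1, np.zeros(1)),
]:
    res = minimize(nll, x0, method="Nelder-Mead")
    ll = -res.fun
    aic = 2 * k_par - 2 * ll
    bic = k_par * np.log(len(t)) - 2 * ll
    print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")
```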
Parametric tests of a traction drive retrofitted to an automotive gas turbine
NASA Technical Reports Server (NTRS)
Rohn, D. A.; Lowenthal, S. H.; Anderson, N. E.
1980-01-01
The results of a test program to retrofit a high performance fixed ratio Nasvytis Multiroller Traction Drive in place of a helical gear set to a gas turbine engine are presented. Parametric tests up to a maximum engine power turbine speed of 45,500 rpm and to a power level of 11 kW were conducted. Comparisons were made to similar drives that were parametrically tested on a back-to-back test stand. The drive showed good compatibility with the gas turbine engine. Specific fuel consumption of the engine with the traction drive speed reducer installed was comparable to the original helical gearset equipped engine.
NASA Astrophysics Data System (ADS)
Sampson, David D.; Chin, Lixin; Gong, Peijun; Wijesinghe, Philip; Es'haghian, Shaghayegh; Allen, Wesley M.; Klyen, Blake R.; Kirk, Rodney W.; Kennedy, Brendan F.; McLaughlin, Robert A.
2016-03-01
INVITED TALK Advances in imaging tissue microstructure in living subjects, or in freshly excised tissue with minimum preparation and processing, are important for future diagnosis and surgical guidance in the clinical setting, particularly for application to cancer. Whilst microscopy methods continue to advance on the cellular scale and medical imaging is well established on the scale of the whole tumor or organ, it is attractive to consider imaging the tumor environment on the micro-scale, between that of cells and whole tissues. Such a scenario is ideally suited to optical coherence tomography (OCT), with the twin attractions of requiring little or no tissue preparation, and in vivo capability. OCT's intrinsic scattering contrast reveals many morphological features of tumors, but is frequently ineffective in revealing other important aspects, such as microvasculature, or in reliably distinguishing tumor from uninvolved stroma. To address these shortcomings, we are developing several advances on the basic OCT approach. We are exploring speckle fluctuations to image tissue microvasculature and we have been developing several parametric approaches to tissue micro-scale characterization. Our approaches extract, from a three-dimensional OCT data set, a two-dimensional image of an optical parameter, such as attenuation or birefringence, or a mechanical parameter, such as stiffness, that aids in characterizing the tissue. This latter method, termed optical coherence elastography, parallels developments in ultrasound and magnetic resonance imaging. Parametric imaging of birefringence and of stiffness both show promise in addressing the important issue of differentiating cancer from uninvolved stroma in breast tissue.
NASA Astrophysics Data System (ADS)
Gryanik, Vladimir M.; Lüpkes, Christof
2018-02-01
In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid iteration, required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations that are presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as a function of Rib and roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with a good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the new proposed parametrizations in models are discussed.
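For orientation, the iterative reference solution that non-iterative transfer coefficients are designed to replace can be sketched as follows. This is a simplified illustration assuming the classical log-linear stable-regime stability functions (psi = -5*zeta), not the SHEBA-based functions used in the paper, and all numerical values are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

KAPPA = 0.4  # von Karman constant

def psi_m(zeta):   # simplified log-linear stability function (stable side)
    return -5.0 * zeta

def psi_h(zeta):
    return -5.0 * zeta

def transfer_coefficients(Rib, z, z0, z0h):
    """Iterative MOST solution: find the stability parameter zeta = z/L that
    reproduces the bulk Richardson number, then evaluate the momentum (Cd)
    and heat (Ch) transfer coefficients."""
    lnm = np.log(z / z0)
    lnh = np.log(z / z0h)

    def rib_of_zeta(zeta):
        return zeta * (lnh - psi_h(zeta)) / (lnm - psi_m(zeta)) ** 2

    zeta = brentq(lambda s: rib_of_zeta(s) - Rib, 1e-8, 10.0)
    Cd = KAPPA ** 2 / (lnm - psi_m(zeta)) ** 2
    Ch = KAPPA ** 2 / ((lnm - psi_m(zeta)) * (lnh - psi_h(zeta)))
    return Cd, Ch

# Hypothetical values: Rib = 0.05, 10 m reference height, rough ice surface
print(transfer_coefficients(Rib=0.05, z=10.0, z0=1e-3, z0h=1e-4))
```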
Kang, Jiqiang; Wei, Xiaoming; Li, Bowen; Wang, Xie; Yu, Luoqin; Tan, Sisi; Jinata, Chandra; Wong, Kenneth K. Y.
2016-01-01
We proposed a sensitivity enhancement method for the interference-based signal detection approach and applied it to a swept-source optical coherence tomography (SS-OCT) system through an all-fiber optical parametric amplifier (FOPA) and parametric balanced detector (BD). The parametric BD was realized by combining the signal and the phase-conjugated idler band newly generated through the FOPA, specifically by superimposing these two bands at a photodetector. The sensitivity enhancement by FOPA and parametric BD in SS-OCT was demonstrated experimentally. The results show that SS-OCT with FOPA and SS-OCT with parametric BD can provide more than 9 dB and 12 dB sensitivity improvement, respectively, when compared with conventional SS-OCT over a spectral bandwidth spanning 76 nm. To further verify and elaborate their sensitivity enhancement, a bio-sample imaging experiment was conducted on loach eyes with the conventional SS-OCT setup, SS-OCT with FOPA, and SS-OCT with parametric BD at different illumination power levels. All these results proved that using FOPA and parametric BD could improve the sensitivity significantly in SS-OCT systems. PMID:27446655
Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L
2018-01-01
We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Bahrami, Sheyda; Shamsi, Mousa
2017-01-01
Functional magnetic resonance imaging (fMRI) is a popular method to probe the functional organization of the brain using hemodynamic responses. In this method, volume images of the entire brain are obtained with very good spatial resolution but low temporal resolution. However, these data always suffer from high dimensionality when faced with classification algorithms. In this work, we combine a support vector machine (SVM) with a self-organizing map (SOM) to obtain a feature-based classification: the SOM is used for feature extraction and labeling of the datasets, and a linear-kernel SVM is then used for detecting the active areas. The SOM has two major advantages: (i) it reduces the dimensionality of the data sets, lowering the computational complexity, and (ii) it is useful for identifying brain regions with small onset differences in hemodynamic responses. Our non-parametric model is compared with parametric and non-parametric methods. We use simulated fMRI data sets and block-design inputs, and consider a contrast-to-noise ratio (CNR) of 0.6 for the simulated datasets; the simulated fMRI dataset has 1-4% contrast in active areas. The accuracy of our proposed method is 93.63% and the error rate is 6.37%.
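A hedged sketch of a SOM-plus-linear-SVM pipeline on synthetic stand-in data. It assumes the third-party minisom package and scikit-learn are available; none of the array sizes, grid settings or labels are taken from the study, and random data will only give chance-level accuracy.

```python
import numpy as np
from minisom import MiniSom              # third-party package, assumed available
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for voxel time courses: 500 voxels x 120 time points,
# with labels marking "active" vs "inactive" voxels.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 120))
y = rng.integers(0, 2, size=500)

# 1) SOM as a feature extractor: map each 120-D time course to the 2-D
#    coordinates of its best-matching unit on a 10x10 grid.
som = MiniSom(10, 10, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 1000)
features = np.array([som.winner(x) for x in X], dtype=float)

# 2) Linear-kernel SVM on the SOM features.
X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.3, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```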
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations of the methodology in specifying modeling uncertainty with limited ensemble members. However, if the parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to consider the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To consider the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will focus on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
A Nonparametric Approach to Automated S-Wave Picking
NASA Astrophysics Data System (ADS)
Rawles, C.; Thurber, C. H.
2014-12-01
Although a number of very effective P-wave automatic pickers have been developed over the years, automatic picking of S waves has remained more challenging. Most automatic pickers take a parametric approach, whereby some characteristic function (CF), e.g. polarization or kurtosis, is determined from the data and the pick is estimated from the CF. We have adopted a nonparametric approach, estimating the pick directly from the waveforms. For a particular waveform to be auto-picked, the method uses a combination of similarity to a set of seismograms with known S-wave arrivals and dissimilarity to a set of seismograms that do not contain S-wave arrivals. Significant effort has been made towards dealing with the problem of S-to-P conversions. We have evaluated the effectiveness of our method by testing it on multiple sets of microearthquake seismograms with well-determined S-wave arrivals for several areas around the world, including fault zones and volcanic regions. In general, we find that the results from our auto-picker are consistent with reviewed analyst picks 90% of the time at the 0.2 s level and 80% of the time at the 0.1 s level, or better. For most of the large datasets we have analyzed, our auto-picker also makes far more S-wave picks than were made previously by analysts. We are using these enlarged sets of high-quality S-wave picks to refine tomographic inversions for these areas, resulting in substantial improvement in the quality of the S-wave images. We will show examples from New Zealand, Hawaii, and California.
Modeling and replicating statistical topology and evidence for CMB nonhomogeneity
Agami, Sarit
2017-01-01
Under the banner of “big data,” the detection and classification of structure in extremely large, high-dimensional, data sets are two of the central statistical challenges of our times. Among the most intriguing new approaches to this challenge is “TDA,” or “topological data analysis,” one of the primary aims of which is providing nonmetric, but topologically informative, preanalyses of data which make later, more quantitative, analyses feasible. While TDA rests on strong mathematical foundations from topology, in applications, it has faced challenges due to difficulties in handling issues of statistical reliability and robustness, often leading to an inability to make scientific claims with verifiable levels of statistical confidence. We propose a methodology for the parametric representation, estimation, and replication of persistence diagrams, the main diagnostic tool of TDA. The power of the methodology lies in the fact that even if only one persistence diagram is available for analysis—the typical case for big data applications—the replications permit conventional statistical hypothesis testing. The methodology is conceptually simple and computationally practical, and provides a broadly effective statistical framework for persistence diagram TDA analysis. We demonstrate the basic ideas on a toy example, and the power of the parametric approach to TDA modeling in an analysis of cosmic microwave background (CMB) nonhomogeneity. PMID:29078301
Kutateladze, Andrei G; Mukhina, Olga A
2014-09-05
Spin-spin coupling constants in (1)H NMR carry a wealth of structural information and offer a powerful tool for deciphering molecular structures. However, accurate ab initio or DFT calculations of spin-spin coupling constants have been very challenging and expensive. Scaling of (easy) Fermi contacts, fc, especially in the context of recent findings by Bally and Rablen (Bally, T.; Rablen, P. R. J. Org. Chem. 2011, 76, 4818), offers a framework for achieving practical evaluation of spin-spin coupling constants. We report a faster and more precise parametrization approach utilizing a new basis set for hydrogen atoms optimized in conjunction with (i) inexpensive B3LYP/6-31G(d) molecular geometries, (ii) inexpensive 4-31G basis set for carbon atoms in fc calculations, and (iii) individual parametrization for different atom types/hybridizations, not unlike a force field in molecular mechanics, but designed for the fc's. With the training set of 608 experimental constants we achieved rmsd <0.19 Hz. The methodology performs very well as we illustrate with a set of complex organic natural products, including strychnine (rmsd 0.19 Hz), morphine (rmsd 0.24 Hz), etc. This precision is achieved with much shorter computational times: accurate spin-spin coupling constants for the two conformers of strychnine were computed in parallel on two 16-core nodes of a Linux cluster within 10 min.
Experimental Characterization of Gas Turbine Emissions at Simulated Flight Altitude Conditions
NASA Technical Reports Server (NTRS)
Howard, R. P.; Wormhoudt, J. C.; Whitefield, P. D.
1996-01-01
NASA's Atmospheric Effects of Aviation Project (AEAP) is developing a scientific basis for assessment of the atmospheric impact of subsonic and supersonic aviation. A primary goal is to assist assessments of United Nations scientific organizations and hence, consideration of emissions standards by the International Civil Aviation Organization (ICAO). Engine tests have been conducted at AEDC to fulfill the need of AEAP. The purpose of these tests is to obtain a comprehensive database to be used for supplying critical information to the atmospheric research community. It includes: (1) simulated sea-level-static test data as well as simulated altitude data; and (2) intrusive (extractive probe) data as well as non-intrusive (optical techniques) data. A commercial-type bypass engine with aviation fuel was used in this test series. The test matrix was set by parametrically selecting the temperature, pressure, and flow rate at sea-level-static and different altitudes to obtain a parametric set of data.
Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E
2013-06-01
Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. © 2013, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.
2018-03-01
The current study investigates the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a unique local food product of Bojonegoro, was used as the point of interest, with 319 panelists involved in the study. The results showed that ledre is characterized by an easily crushed texture, stickiness in the mouth, a stingy sensation and ease of swallowing. It also has a strong banana flavour and brown colour. Compared to eggroll and semprong, ledre shows more variation in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, similar results were also obtained with the parametric approach, despite the non-normally distributed data. This suggests that a parametric approach can be applicable to consumer studies with a large number of respondents, even though the data may not satisfy the assumptions of ANOVA (analysis of variance).
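The parametric/non-parametric contrast discussed here comes down to running a one-way ANOVA alongside its rank-based counterpart. A minimal sketch with hypothetical RATA-style intensity ratings (scipy only, not the authors' analysis) follows.

```python
import numpy as np
from scipy import stats

# Hypothetical RATA intensity ratings (0-5 scale) of "banana flavour"
# for three products: ledre, eggroll, semprong.
rng = np.random.default_rng(11)
ledre    = rng.integers(2, 6, 100)
eggroll  = rng.integers(0, 4, 100)
semprong = rng.integers(0, 4, 100)

# Parametric approach: one-way ANOVA
f_stat, p_anova = stats.f_oneway(ledre, eggroll, semprong)

# Non-parametric counterpart: Kruskal-Wallis H-test
h_stat, p_kw = stats.kruskal(ledre, eggroll, semprong)

print(f"ANOVA:          F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
```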
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter-space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables a range of target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane, as well as the density and heat of vaporization of the liquid at atmospheric pressure, for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
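A toy sketch of the parameter-space-mapping idea: evaluate a difference (RMSD) surface between predicted and target data over a grid of Lennard-Jones parameters and read off the optimal region. The prediction function below is a placeholder formula purely for illustration; in the actual method each grid point would come from a simulation (e.g. single-step perturbation), and the target values are invented.

```python
import numpy as np

# Hypothetical target data for one compound class (solvation free enthalpies, density)
targets = {"hfe_water": -3.8, "hfe_hexane": -5.1, "density": 1.11}

def toy_predictions(epsilon, sigma):
    # Placeholder for the simulation-based estimates; not a physical model.
    return {
        "hfe_water":  -10.0 * epsilon - 0.5 * sigma,
        "hfe_hexane": -12.0 * epsilon - 0.2 * sigma,
        "density":     0.2 * epsilon + 0.25 * sigma,
    }

eps_grid = np.linspace(0.2, 0.6, 41)     # well depth (kJ/mol), hypothetical range
sig_grid = np.linspace(3.0, 4.0, 41)     # collision diameter (Angstrom), hypothetical range

# Map the RMSD between predicted and target data over the whole parameter space.
rmsd = np.zeros((eps_grid.size, sig_grid.size))
for i, eps in enumerate(eps_grid):
    for j, sig in enumerate(sig_grid):
        pred = toy_predictions(eps, sig)
        rmsd[i, j] = np.sqrt(np.mean([(pred[k] - targets[k]) ** 2 for k in targets]))

i_best, j_best = np.unravel_index(np.argmin(rmsd), rmsd.shape)
print("optimal region near epsilon =", eps_grid[i_best], ", sigma =", sig_grid[j_best])
```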
Keeping nurses at work: a duration analysis.
Holmås, Tor Helge
2002-09-01
A shortage of nurses is currently a problem in several countries, and an important question is therefore how one can increase the supply of nursing labour. In this paper, we focus on the issue of nurses leaving the public health sector by utilising a unique data set containing information on both the supply and demand side of the market. To describe the exit rate from the health sector we apply a semi-parametric hazard rate model. In the estimations, we correct for unobserved heterogeneity by both a parametric (Gamma) and a non-parametric approach. We find that both wages and working conditions have an impact on nurses' decision to quit. Furthermore, failing to correct for the fact that nurses' income partly consists of compensation for inconvenient working hours results in a considerable downward bias of the wage effect. Copyright 2002 John Wiley & Sons, Ltd.
A Parametric Approach to Numerical Modeling of TKR Contact Forces
Lundberg, Hannah J.; Foucher, Kharma C.; Wimmer, Markus A.
2009-01-01
In vivo knee contact forces are difficult to determine using numerical methods because there are more unknown forces than equilibrium equations available. We developed parametric methods for computing contact forces across the knee joint during the stance phase of level walking. Three-dimensional contact forces were calculated at two points of contact between the tibia and the femur, one on the lateral aspect of the tibial plateau, and one on the medial side. Muscle activations were parametrically varied over their physiologic range resulting in a solution space of contact forces. The obtained solution space was reasonably small and the resulting force pattern compared well to a previous model from the literature for kinematics and external kinetics from the same patient. Peak forces of the parametric model and the previous model were similar for the first half of the stance phase, but differed for the second half. The previous model did not take into account the transverse external moment about the knee and could not calculate muscle activation levels. Ultimately, the parametric model will result in more accurate contact force inputs for total knee simulators, as current inputs are not generally based on kinematics and kinetics inputs from TKR patients. PMID:19155015
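A deliberately simplified sketch of the parametric idea described above: sweep lumped muscle activations over their range and, for each combination, solve a small equilibrium system for the medial and lateral contact forces, keeping the admissible combinations as the solution space. The geometry, external loads and maximum muscle forces below are hypothetical, not the authors' model.

```python
import numpy as np

# Toy two-contact-point knee: medial and lateral contacts 2 cm either side of
# the joint centre, plus two lumped muscle groups whose activations are swept.
d = 0.02                                  # contact offset from joint centre [m]
W_ext, M_ext = 2000.0, 15.0               # hypothetical axial load [N] and frontal moment [Nm]
F_quad_max, F_ham_max = 3000.0, 1500.0    # hypothetical maximum muscle forces [N]

solutions = []
for a_q in np.linspace(0.0, 1.0, 21):         # quadriceps activation
    for a_h in np.linspace(0.0, 1.0, 21):     # hamstrings activation
        axial = W_ext + a_q * F_quad_max + a_h * F_ham_max
        # Equilibrium: F_med + F_lat = axial,  d*(F_lat - F_med) = M_ext
        A = np.array([[1.0, 1.0], [-d, d]])
        F_med, F_lat = np.linalg.solve(A, [axial, M_ext])
        if F_med >= 0 and F_lat >= 0:         # keep physically admissible combinations
            solutions.append((F_med, F_lat))

solutions = np.array(solutions)
print("medial contact force range [N]:", solutions[:, 0].min(), "-", solutions[:, 0].max())
print("lateral contact force range [N]:", solutions[:, 1].min(), "-", solutions[:, 1].max())
```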
NASA Astrophysics Data System (ADS)
Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar
2015-06-01
In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and also to provide guidelines for determining the design periods of flood control structures. Traditional FFA was extensively performed considering a univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], the analysis has been further extended to multivariate scenarios, with some restrictive assumptions. To overcome the assumption of the same family of marginal density functions for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew formidable attention from the FFA research community, the basic limitation was that the analyses were performed with only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; however, a nonparametric distribution may not always be a good fit and capable of replacing well-implemented multivariate parametric and multivariate copula-based applications. Nevertheless, the potential of obtaining the best fit using nonparametric distributions might be improved because such distributions reproduce the sample's characteristics, resulting in more accurate estimations of the multivariate return period. Hence, the current study shows the importance of conjugating a multivariate nonparametric approach with multivariate parametric and copula-based approaches, thereby resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, this approach can also be applied to regional FFA because regional estimations ideally include at-site estimations. The framework is based on the following steps: (i) comprehensive trend analysis to assess nonstationarity in the observed data; (ii) selection of the best-fit univariate marginal distribution with a comprehensive set of parametric and nonparametric distributions for the flood variables; (iii) multivariate frequency analyses with parametric, copula-based and nonparametric approaches; and (iv) estimation of joint and various conditional return periods. The proposed framework for frequency analysis is demonstrated using 110 years of observed data from the Allegheny River at Salamanca, New York, USA. The results show that for both univariate and multivariate cases, the nonparametric Gaussian kernel provides the best estimate. Further, we perform FFA for twenty major rivers over the continental USA, which shows that for seven rivers, all the flood variables follow the nonparametric Gaussian kernel, whereas for the other rivers, parametric distributions provide the best fit for either one or two flood variables. Thus, the summary of results shows that the nonparametric method cannot substitute for the parametric and copula-based approaches, but should be considered in any at-site FFA to provide the broadest choices for the best estimation of flood return periods.
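Step (ii) of the framework, choosing between a parametric marginal and a nonparametric Gaussian-kernel marginal, can be sketched as follows with scipy. The peak-flow series, the candidate distribution and the crude goodness-of-fit comparison are illustrative only, not the study's procedure or data.

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak-flow series (m^3/s)
rng = np.random.default_rng(5)
peaks = stats.lognorm.rvs(s=0.6, scale=800, size=110, random_state=rng)

# Parametric candidate: log-normal fitted by maximum likelihood
shape, loc, scale = stats.lognorm.fit(peaks, floc=0)
ks_param = stats.kstest(peaks, "lognorm", args=(shape, loc, scale)).statistic

# Non-parametric candidate: Gaussian kernel density estimate
kde = stats.gaussian_kde(peaks)
grid = np.linspace(peaks.min(), peaks.max(), 2000)
cdf_kde = np.cumsum(kde(grid)); cdf_kde /= cdf_kde[-1]
# crude empirical-vs-KDE distance on a grid (illustrative only)
ecdf = np.searchsorted(np.sort(peaks), grid) / peaks.size
ks_kde = np.max(np.abs(ecdf - cdf_kde))

print("KS distance, log-normal fit:     ", ks_param)
print("KS distance, Gaussian kernel fit:", ks_kde)

# Univariate return-period quantile from the chosen marginal, e.g. the 100-year peak
print("100-year peak (log-normal):", stats.lognorm.ppf(1 - 1 / 100, shape, loc, scale))
```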
The Bayesian Cramér-Rao lower bound in Astrometry
NASA Astrophysics Data System (ADS)
Mendez, R. A.; Echeverria, A.; Silva, J.; Orchard, M.
2018-01-01
A determination of the highest precision that can be achieved in the measurement of the location of a stellar-like object has been a topic of permanent interest by the astrometric community. The so-called (parametric, or non-Bayesian) Cramér-Rao (CR hereafter) bound provides a lower bound for the variance with which one could estimate the position of a point source. This has been studied recently by Mendez et al. (2013, 2014, 2015). In this work we present a different approach to the same problem (Echeverria et al. 2016), using a Bayesian CR setting which has a number of advantages over the parametric scenario.
The Bayesian Cramér-Rao lower bound in Astrometry
NASA Astrophysics Data System (ADS)
Mendez, R. A.; Echeverria, A.; Silva, J.; Orchard, M.
2017-07-01
A determination of the highest precision that can be achieved in the measurement of the location of a stellar-like object has been a topic of permanent interest by the astrometric community. The so-called (parametric, or non-Bayesian) Cramér-Rao (CR hereafter) bound provides a lower bound for the variance with which one could estimate the position of a point source. This has been studied recently by Mendez and collaborators (2014, 2015). In this work we present a different approach to the same problem (Echeverria et al. 2016), using a Bayesian CR setting which has a number of advantages over the parametric scenario.
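As a concrete reference point for the parametric (non-Bayesian) bound discussed in these two entries, the sketch below computes the CR lower bound on the position of a one-dimensional, pixel-sampled Gaussian source with Poisson noise from the Fisher information. The PSF width, flux and background are hypothetical, and this is a generic illustration rather than the authors' formulation.

```python
import numpy as np
from scipy.stats import norm

# Toy one-dimensional detector: Gaussian PSF integrated over pixels,
# Poisson (source + background) counts.
pixels = np.arange(-20.0, 21.0)                    # pixel centres [pixel units]
sigma_psf, flux, background = 1.5, 5000.0, 50.0    # hypothetical values

def expected_counts(x_c):
    lo, hi = pixels - 0.5, pixels + 0.5
    frac = norm.cdf(hi, loc=x_c, scale=sigma_psf) - norm.cdf(lo, loc=x_c, scale=sigma_psf)
    return flux * frac + background

def cr_bound(x_c, eps=1e-4):
    lam = expected_counts(x_c)
    dlam = (expected_counts(x_c + eps) - expected_counts(x_c - eps)) / (2 * eps)
    fisher = np.sum(dlam ** 2 / lam)               # Fisher information for the position
    return 1.0 / np.sqrt(fisher)                   # lower bound on the positional std. dev.

print("CR lower bound on position:", cr_bound(0.0), "pixels")
```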
Organizing Space Shuttle parametric data for maintainability
NASA Technical Reports Server (NTRS)
Angier, R. C.
1983-01-01
A model of organization and management of Space Shuttle data is proposed. Shuttle avionics software is parametrically altered by a reconfiguration process for each flight. As the flight rate approaches an operational level, current methods of data management would become increasingly complex. An alternative method is introduced, using modularized standard data, and its implications for data collection, integration, validation, and reconfiguration processes are explored. Information modules are cataloged for later use, and may be combined in several levels for maintenance. For each flight, information modules can then be selected from the catalog at a high level. These concepts take advantage of the reusability of Space Shuttle information to reduce the cost of reconfiguration as flight experience increases.
A semi-parametric within-subject mixture approach to the analyses of responses and response times.
Molenaar, Dylan; Bolsinova, Maria; Vermunt, Jeroen K
2018-05-01
In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that resulted in the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed. In these approaches, a distribution needs to be assumed for the within-class response times. In this paper, it is demonstrated that these parametric within-subject approaches may produce false positives and biased parameter estimates if the assumption concerning the response time distribution is violated. A semi-parametric approach is proposed which resorts to categorized response times. This approach is shown to hardly produce false positives and parameter bias. In addition, the semi-parametric approach results in approximately the same power as the parametric approach. © 2017 The British Psychological Society.
Weight and the Future of Space Flight Hardware Cost Modeling
NASA Technical Reports Server (NTRS)
Prince, Frank A.
2003-01-01
Weight has been used as the primary input variable for cost estimating almost as long as there have been parametric cost models. While there are good reasons for using weight, serious limitations exist. These limitations have been addressed by multi-variable equations and trend analysis in models such as NAFCOM, PRICE, and SEER; however, these models have not been able to address the significant time lags that can occur between the development of similar space flight hardware systems. These time lags make the cost analyst's job difficult because insufficient data exist to perform trend analysis, and the current set of parametric models is not well suited to accommodating process improvements in space flight hardware design, development, build and test. As a result, people of good faith can have serious disagreements over the cost of new systems. To address these shortcomings, new cost modeling approaches are needed. The most promising approach is process-based (sometimes called activity-based) costing. Developing process-based models will require a detailed understanding of the functions required to produce space flight hardware combined with innovative approaches to estimating the necessary resources. Particularly challenging will be the lack of data at the process level. One method for developing a model is to combine notional algorithms with a discrete event simulation and model changes to the total cost as perturbations to the program are introduced. Despite these challenges, the potential benefits are such that efforts should be focused on developing process-based cost models.
Combined-probability space and certainty or uncertainty relations for a finite-level quantum system
NASA Astrophysics Data System (ADS)
Sehrawat, Arun
2017-08-01
The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d -level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.
Semiparametric time varying coefficient model for matched case-crossover studies.
Ortega-Villa, Ana Maria; Kim, Inyoung; Kim, H
2017-03-15
In matched case-crossover studies, it is generally accepted that the covariates on which a case and associated controls are matched cannot exert a confounding effect on independent predictors included in the conditional logistic regression model. This is because any stratum effect is removed by the conditioning on the fixed number of sets of the case and controls in the stratum. Hence, the conditional logistic regression model is not able to detect any effects associated with the matching covariates by stratum. However, some matching covariates such as time often play an important role as an effect modification leading to incorrect statistical estimation and prediction. Therefore, we propose three approaches to evaluate effect modification by time. The first is a parametric approach, the second is a semiparametric penalized approach, and the third is a semiparametric Bayesian approach. Our parametric approach is a two-stage method, which uses conditional logistic regression in the first stage and then estimates polynomial regression in the second stage. Our semiparametric penalized and Bayesian approaches are one-stage approaches developed by using regression splines. Our semiparametric one stage approach allows us to not only detect the parametric relationship between the predictor and binary outcomes, but also evaluate nonparametric relationships between the predictor and time. We demonstrate the advantage of our semiparametric one-stage approaches using both a simulation study and an epidemiological example of a 1-4 bi-directional case-crossover study of childhood aseptic meningitis with drinking water turbidity. We also provide statistical inference for the semiparametric Bayesian approach using Bayes Factors. Copyright © 2016 John Wiley & Sons, Ltd.
Nonparametric Simulation of Signal Transduction Networks with Semi-Synchronized Update
Nassiri, Isar; Masoudi-Nejad, Ali; Jalili, Mahdi; Moeini, Ali
2012-01-01
Simulating signal transduction in cellular signaling networks provides predictions of network dynamics by quantifying the changes in concentration and activity-level of the individual proteins. Since numerical values of kinetic parameters might be difficult to obtain, it is imperative to develop non-parametric approaches that combine the connectivity of a network with the response of individual proteins to signals which travel through the network. The activity levels of signaling proteins computed through existing non-parametric modeling tools do not show significant correlations with the observed values in experimental results. In this work we developed a non-parametric computational framework to describe the profile of the evolving process and the time course of the proportion of active form of molecules in the signal transduction networks. The model is also capable of incorporating perturbations. The model was validated on four signaling networks showing that it can effectively uncover the activity levels and trends of response during signal transduction process. PMID:22737250
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
NASA Technical Reports Server (NTRS)
Coverse, G. L.
1984-01-01
A turbine modeling technique has been developed which will enable the user to obtain consistent and rapid off-design performance from design point input. This technique is applicable to both axial and radial flow turbines with flow sizes ranging from about one pound per second to several hundred pounds per second. The axial flow turbines may or may not include variable geometry in the first stage nozzle. A user-specified option will also permit the calculation of design point cooling flow levels and corresponding changes in efficiency for the axial flow turbines. The modeling technique has been incorporated into a time-sharing program in order to facilitate its use. Because this report contains a description of the input/output data, values of typical inputs, and example cases, it is suitable as a user's manual. This report is the second of a three volume set. The titles of the three volumes are as follows: (1) Volume 1 CMGEN USER's Manual (Parametric Compressor Generator); (2) Volume 2 PART USER's Manual (Parametric Turbine); (3) Volume 3 MODFAN USER's Manual (Parametric Modulation Flow Fan).
NASA Astrophysics Data System (ADS)
Lototzis, M.; Papadopoulos, G. K.; Droulia, F.; Tseliou, A.; Tsiros, I. X.
2018-04-01
There are several cases where a circular variable is associated with a linear one. A typical example is wind direction, which is often associated with linear quantities such as air temperature and air humidity. A statistical relationship of this kind can be tested with parametric and non-parametric methods, each of which has its own advantages and drawbacks. This work deals with correlation analysis using both the parametric and the non-parametric procedure on a small set of meteorological data of air temperature and wind direction during a summer period in a Mediterranean climate. Correlations were examined between hourly, daily and maximum-prevailing values, under typical and non-typical meteorological conditions. Both tests indicated a strong correlation between mean hourly wind directions and mean hourly air temperature, whereas mean daily wind direction and mean daily air temperature do not seem to be correlated. In some cases, however, the two procedures were found to give quite dissimilar levels of significance on the rejection or not of the null hypothesis of no correlation. The simple statistical analysis presented in this study, appropriately extended to large sets of meteorological data, may be a useful tool for estimating the effects of wind in local climate studies.
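One standard parametric option for this kind of circular-linear association is Mardia's correlation coefficient; the sketch below is a generic illustration (not necessarily the authors' exact procedure) with hypothetical wind-direction and temperature data.

```python
import numpy as np
from scipy import stats

def circular_linear_corr(theta, x):
    """Mardia's circular-linear correlation between a circular variable theta
    (radians, e.g. wind direction) and a linear variable x (e.g. temperature).
    Under the null of no association, n*R^2 is asymptotically chi-square (2 df)."""
    rxc = np.corrcoef(x, np.cos(theta))[0, 1]
    rxs = np.corrcoef(x, np.sin(theta))[0, 1]
    rcs = np.corrcoef(np.cos(theta), np.sin(theta))[0, 1]
    r2 = (rxc**2 + rxs**2 - 2 * rxc * rxs * rcs) / (1 - rcs**2)
    p_value = stats.chi2.sf(len(x) * r2, df=2)
    return np.sqrt(r2), p_value

# Hypothetical hourly data: wind direction (degrees) and air temperature (deg C)
rng = np.random.default_rng(8)
wind_dir = rng.uniform(0, 360, 240)
temp = 28 + 3 * np.cos(np.radians(wind_dir) - np.pi / 4) + rng.normal(0, 1, 240)
print(circular_linear_corr(np.radians(wind_dir), temp))
```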
Tau-REx: A new look at the retrieval of exoplanetary atmospheres
NASA Astrophysics Data System (ADS)
Waldmann, Ingo
2014-11-01
The field of exoplanetary spectroscopy is as fast moving as it is new. With an increasing number of space- and ground-based instruments obtaining data on a large set of extrasolar planets, we are indeed entering the era of exoplanetary characterisation. Permanently at the edge of instrument feasibility, it is as important as it is difficult to find the most optimal and objective methodologies for analysing and interpreting current data. This is particularly true for smaller and fainter Earth and super-Earth type planets. For low to mid signal-to-noise (SNR) observations, we are prone to two sources of bias: 1) prior selection in the data reduction and analysis; 2) prior constraints on the spectral retrieval. In Waldmann et al. (2013), Morello et al. (2014) and Waldmann (2012, 2014) we have shown a prior-free approach to data analysis based on non-parametric machine learning techniques. Following these approaches we will present a new take on the spectral retrieval of extrasolar planets. Tau-REx (tau-retrieval of exoplanets) is a new line-by-line atmospheric retrieval framework. In the past, the decision on which opacity sources go into an atmospheric model was usually user defined. Manual input can lead to model biases and poor convergence of the atmospheric model to the data. In Tau-REx we have set out to solve this. Through custom-built pattern recognition software, Tau-REx is able to rapidly identify the most likely atmospheric opacities from a large number of possible absorbers/emitters (ExoMol or HITRAN databases) and non-parametrically constrain the prior space for the Bayesian retrieval. Unlike other (MCMC-based) techniques, Tau-REx is able to fully integrate high-dimensional log-likelihood spaces and to calculate the full Bayesian evidence of the atmospheric models. We achieve this through a combination of nested sampling and a high degree of code parallelisation. This allows for an exact and unbiased Bayesian model selection and a full mapping of potential model-data degeneracies. Together with non-parametric data de-trending of exoplanetary spectra, we can reach an unprecedented level of objectivity in our atmospheric characterisation of these foreign worlds.
Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L; Levin, Michael; Miller, Eric L
2015-11-01
Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach.
NASA Astrophysics Data System (ADS)
Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.
2018-01-01
Maraging steel (MDN 300) finds application in many industries as it exhibits high hardness, which makes it a very difficult-to-machine material. Electro-discharge machining (EDM) is an extensively popular machining process which can be used in the machining of such materials. Optimization of the response parameters is essential for effective machining of these materials. Past researchers have already used the Taguchi method to obtain the optimal responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR), considering discharge current, pulse on time, pulse off time, arc gap, and duty cycle as process parameters. In this paper, grey relational analysis (GRA) with fuzzy logic is applied to this multi-objective optimization problem, and the responses are checked by implementing the derived parametric setting. It was found that the parametric setting derived by the proposed method results in better responses than those reported by past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted results also show a significant improvement in comparison to the results of past researchers.
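The TOPSIS verification step mentioned above reduces to a short matrix computation; here is a generic sketch with hypothetical EDM trial data, weights and criteria directions (none of the values are taken from the paper).

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS. `matrix` is alternatives x criteria,
    `benefit[j]` is True for larger-is-better criteria, False for cost criteria."""
    m = matrix / np.linalg.norm(matrix, axis=0)          # vector normalisation
    v = m * weights                                      # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus  = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                  # closeness coefficient

# Hypothetical EDM trials: columns = [MRR, TWR, SR]; MRR is a benefit criterion,
# tool wear rate and surface roughness are cost criteria.
trials = np.array([[12.5, 0.8, 3.1],
                   [15.2, 1.1, 3.6],
                   [10.9, 0.6, 2.8]])
scores = topsis(trials, weights=np.array([0.4, 0.3, 0.3]),
                benefit=np.array([True, False, False]))
print("ranking (best first):", np.argsort(scores)[::-1])
```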
BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs
Eklund, Anders; Dufort, Paul; Villani, Mattias; LaConte, Stephen
2014-01-01
Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm3 brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/). PMID:24672471
Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas
Bedford, Tim; Daneshkhah, Alireza
2015-01-01
Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowica, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240
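A single Gaussian pair-copula, the basic building block that vines assemble into higher-dimensional models, can be sketched as follows. The marginals, correlation value and financial-loss interpretation are purely illustrative and are not the minimum-information copulas or data sets discussed in the article.

```python
import numpy as np
from scipy import stats

# Bivariate Gaussian copula: join two arbitrary marginals with correlation rho.
rng = np.random.default_rng(21)
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1) sample from the copula: correlated normals mapped to uniforms
z = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
u = stats.norm.cdf(z)

# 2) apply the inverse CDFs of the desired marginals (e.g. two loss distributions)
loss_a = stats.lognorm.ppf(u[:, 0], s=0.5, scale=100.0)
loss_b = stats.gamma.ppf(u[:, 1], a=2.0, scale=40.0)

# dependence survives the marginal transforms (Spearman's rho is copula-based)
rho_s, p = stats.spearmanr(loss_a, loss_b)
print("Spearman rho of the joint sample:", rho_s)
```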
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
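A minimal sketch of the first method, thresholding a maximal weighted spanning forest, using networkx on a toy weighted relation; the vertex set, weights and threshold values are hypothetical, and the parametric-linear-program method is not shown.

```python
import networkx as nx

# Hypothetical weighted relation on V = {a, b, c, d, e}; weights in [0, 1]
edges = [("a", "b", 0.9), ("b", "c", 0.4), ("c", "d", 0.8),
         ("d", "e", 0.3), ("a", "c", 0.5), ("b", "e", 0.2)]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# 1) maximal weighted spanning forest (here a tree, since G is connected)
msf = nx.maximum_spanning_tree(G)

# 2) apply a threshold: keep only forest edges at or above the chosen level,
#    and read the partition of V off the resulting connected components
def partition_at(threshold):
    kept = [(u, v) for u, v, w in msf.edges(data="weight") if w >= threshold]
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from(kept)
    return [sorted(c) for c in nx.connected_components(H)]

for t in (0.85, 0.45):
    print(f"threshold {t}: {partition_at(t)}")
```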
Ti:sapphire - A theoretical assessment for its spectroscopy
NASA Astrophysics Data System (ADS)
Da Silva, A.; Boschetto, D.; Rax, J. M.; Chériaux, G.
2017-03-01
This article presents a theoretical computation of stimulated emission cross-sections from the known oscillator strength for a broad class of materials (dielectric crystals hosting transition-metal impurity atoms). We apply the present approach to Ti:sapphire and check it by computing emission cross-section curves for both π and σ polarizations. We also establish a relationship between oscillator strength and radiative lifetime. Such an approach will allow future parametric studies of Ti:sapphire spectroscopic properties.
NASA Astrophysics Data System (ADS)
Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.
A systematic improvement of parametric quantum methods (PQM) is performed by considering: (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and a comparison with MOPAC/2007 (PM6) and MINDO/SR was performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC-2007 for a selected trial set of molecules.
A parametric analysis of visual approaches for helicopters
NASA Technical Reports Server (NTRS)
Moen, G. C.; Dicarlo, D. J.; Yenni, K. R.
1976-01-01
A flight investigation was conducted to determine the characteristic shapes of the altitude, ground speed, and deceleration profiles of visual approaches for helicopters. Two hundred thirty-six visual approaches were flown from nine sets of initial conditions with four types of helicopters. Mathematical relationships were developed that describe the characteristic visual deceleration profiles. These mathematical relationships were expanded to develop equations which define the corresponding nominal ground speed, pitch attitude, pitch rate, and pitch acceleration profiles. The results are applicable to improving helicopter handling qualities in terminal-area operations.
Parametrization in models of subcritical glass fracture: Activation offset and concerted activation
NASA Astrophysics Data System (ADS)
Rodrigues, Bruno Poletto; Hühn, Carolin; Erlebach, Andreas; Mey, Dorothea; Sierka, Marek; Wondraczek, Lothar
2017-08-01
There are two established but fundamentally different empirical approaches to parametrizing the rate of subcritical fracture in brittle materials. While both rely on a thermally activated reaction of bond rupture, the difference lies in how the externally applied stresses affect the local energy landscape. For inorganic glasses, the strain energy is typically taken as an offset on the activation barrier. In an alternative interpretation, the system's volumetric strain energy is added to its thermal energy. Such an interpretation is consistent with the democratic fiber bundle model. Here, we test this approach of concerted activation against macroscopic data on the activation energy of bond cleavage, and also against ab initio quantum chemical simulation of the energy barrier for cracking in silica. The fact that both models reproduce experimental observation to a remarkable degree highlights the importance of a holistic treatment on the way towards a non-empirical understanding.
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach which implements, in real time, a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by using the simplicity associated with GAs and exploiting their parallel characteristics. This will allow the implementation of higher-order filters, increasing the spectral resolution and opening a greater scope for using more complex methods.
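The abstract gives no implementation details, so the following is only a rough sketch of the general idea: fit autoregressive (AR) filter coefficients with a toy genetic algorithm and compute the corresponding parametric spectrum. The population size, mutation rate, AR order and test signal are arbitrary choices, not those of the paper.

```python
import numpy as np

def ar_prediction_error(a, x):
    """Mean squared one-step prediction error of AR coefficients a on signal x."""
    p = len(a)
    pred = np.array([np.dot(a, x[t - p:t][::-1]) for t in range(p, len(x))])
    return np.mean((x[p:] - pred) ** 2)

def ga_fit_ar(x, order=4, pop=40, gens=30, seed=0):
    """Toy genetic algorithm searching AR coefficients that minimize prediction error."""
    rng = np.random.default_rng(seed)
    population = rng.uniform(-1.0, 1.0, size=(pop, order))
    for _ in range(gens):
        fitness = np.array([ar_prediction_error(ind, x) for ind in population])
        ranked = population[np.argsort(fitness)]
        parents = ranked[: pop // 2]                               # truncation selection
        cut = rng.integers(1, order, size=pop // 2)
        children = np.array([np.r_[parents[i, :c], parents[-i - 1, c:]]
                             for i, c in enumerate(cut)])          # one-point crossover
        children += rng.normal(0.0, 0.05, size=children.shape)     # mutation
        population = np.vstack([parents, children])
    fitness = np.array([ar_prediction_error(ind, x) for ind in population])
    return population[np.argmin(fitness)]

def ar_spectrum(a, sigma2, freqs):
    """Power spectrum of the AR model x[t] = sum_k a[k] x[t-1-k] + e[t]."""
    z = np.exp(-2j * np.pi * freqs[:, None] * np.arange(1, len(a) + 1))
    return sigma2 / np.abs(1.0 - z @ a) ** 2

# Example on a synthetic two-tone signal
t = np.arange(512)
x = (np.sin(2 * np.pi * 0.10 * t) + 0.5 * np.sin(2 * np.pi * 0.23 * t)
     + 0.1 * np.random.default_rng(1).normal(size=t.size))
a_hat = ga_fit_ar(x, order=6)
freqs = np.linspace(0.0, 0.5, 256)
spectrum = ar_spectrum(a_hat, sigma2=ar_prediction_error(a_hat, x), freqs=freqs)
```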
Arisholm, Gunnar
2007-05-14
Group velocity mismatch (GVM) is a major concern in the design of optical parametric amplifiers (OPAs) and generators (OPGs) for pulses shorter than a few picoseconds. By simplifying the coupled propagation equations and exploiting their scaling properties, the number of free parameters for a collinear OPA is reduced to a level where the parameter space can be studied systematically by simulations. The resulting set of figures shows the combinations of material parameters and pulse lengths for which high performance can be achieved, and it can serve as a basis for a design.
Chen, Xiaozhong; He, Kunjin; Chen, Zhengming
2017-01-01
The present study proposes an integrated computer-aided approach combining femur surface modeling, fracture evidence recovery, plate creation, and plate modification in order to conduct a parametric investigation of the design of a custom plate for a specific patient. The study improves the design efficiency of patient-specific plates based on the patient's femur parameters and the fracture information. Furthermore, the present approach opens the way to exploration of plate modification and optimization. The three-dimensional (3D) surface model of a detailed femur and the corresponding fixation plate were represented with high-level feature parameters, and the shape of the specific plate was recursively modified in order to obtain the optimal plate for a specific patient. The proposed approach was tested and verified on a case study, and it could help orthopedic surgeons design and modify plates to fit the specific femur anatomy and fracture information.
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays fold change criteria are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
Parametric robust control and system identification: Unified approach
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1994-01-01
Despite significant advancement in the area of robust parametric control, the synthesis of such controllers remains a wide open problem. Thus, we attempt to give a solution to this important problem. Our approach captures the parametric uncertainty as an H-infinity unstructured uncertainty so that H-infinity synthesis techniques are applicable. Although these techniques cannot cope with the exact parametric uncertainty, they give a reasonable guideline for modeling an unstructured uncertainty that contains the parametric uncertainty. An additional loop-shaping technique is also introduced to relax the conservatism of the approach.
Zheng, Lai; Ismail, Karim
2017-05-01
Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced to a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to the conflict and crash data collected from 21 segments from three freeways located in Guangdong province, China. Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.
Liu, Chunping; Laporte, Audrey; Ferguson, Brian S
2008-09-01
In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
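As a hedged illustration of the simulation setup described (not the authors' code), one can generate Cobb-Douglas output with half-normal inefficiency and approximate the frontier with a high quantile fit; statsmodels' QuantReg is assumed available, and all parameter values below are arbitrary.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1.0, 10.0, size=(n, 2))                   # two production inputs
u = np.abs(rng.normal(0.0, 0.3, size=n))                   # half-normal technical inefficiency
v = rng.normal(0.0, 0.1, size=n)                            # symmetric noise
log_y = 1.0 + 0.4 * np.log(x[:, 0]) + 0.5 * np.log(x[:, 1]) + v - u   # Cobb-Douglas frontier minus inefficiency

X = sm.add_constant(np.log(x))
fit_q = sm.QuantReg(log_y, X).fit(q=0.95)        # quantile regression near the frontier
frontier = fit_q.predict(X)
efficiency = np.exp(log_y - frontier)            # crude efficiency score relative to the q=0.95 fit
```

Repeating such draws many times and comparing the estimated scores against the known inefficiency term exp(-u) is the essence of the Monte Carlo comparison the abstract describes.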
Superpixel Cut for Figure-Ground Image Segmentation
NASA Astrophysics Data System (ADS)
Yang, Michael Ying; Rosenhahn, Bodo
2016-06-01
Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as a Min-Cut problem, for which we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, requiring no high-level knowledge such as shape priors or scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches cover a large design space with few variables. Commonly used parametric methods are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their capabilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the free-form deformation method. Methods extended from two-dimensional parametric approaches have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, the actual design process requires a flexible choice among them to suit the subsequent optimization procedure.
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal, and neural networks should provide an excellent means of doing so. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight saving time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
Combined non-parametric and parametric approach for identification of time-variant systems
NASA Astrophysics Data System (ADS)
Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz
2018-03-01
Identification of systems, structures and machines with variable physical parameters is a challenging task, especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from a multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of the extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on an experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimal a priori information about the model.
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
Advanced imaging techniques in brain tumors
2009-01-01
Perfusion, permeability and magnetic resonance spectroscopy (MRS) are now widely used in the research and clinical settings. In the clinical setting, qualitative, semi-quantitative and quantitative approaches, ranging from review of color-coded maps to region-of-interest analysis and analysis of signal intensity curves, are being applied in practice. There are several pitfalls with all of these approaches. Some of these shortcomings are reviewed, such as the relatively low sensitivity of metabolite ratios from MRS, the effect of leakage on the appearance of color-coded maps from dynamic susceptibility contrast (DSC) magnetic resonance (MR) perfusion imaging, and what correction and normalization methods can be applied. Combining and applying these different imaging techniques in a multi-parametric algorithmic fashion in the clinical setting can be shown to increase diagnostic specificity and confidence. PMID:19965287
Franson Interference Generated by a Two-Level System
NASA Astrophysics Data System (ADS)
Peiris, M.; Konthasinghe, K.; Muller, A.
2017-01-01
We report a Franson interferometry experiment based on correlated photon pairs generated via frequency-filtered scattered light from a near-resonantly driven two-level semiconductor quantum dot. In contrast to spontaneous parametric down-conversion and four-wave mixing, this approach can produce single pairs of correlated photons. We have measured a Franson visibility as high as 66%, which goes beyond the classical limit of 50% and approaches the limit of violation of Bell's inequalities (70.7%).
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), together with disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) non-parametric models (examples are bootstrap/kernel-based methods such as the k-nearest neighbour (k-NN) method, the matched block bootstrap (MABB), and non-parametric disaggregation models), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws; and (iii) hybrid models, which blend parametric and non-parametric models advantageously to model the streamflows effectively. Despite these developments in the stochastic modeling of streamflows over the last four decades, accurate prediction of the storage and critical drought characteristics has remained a persistent challenge for the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated based on the accuracy of prediction of the water-use characteristics, which requires a large number of trial simulations and the inspection of many plots and tables; even so, accurate prediction of the storage and critical drought characteristics may not be ensured. In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, PAR(1), and the matched block bootstrap (MABB)) based on explicit objective functions minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by a search over a multi-dimensional parameter space involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components. This is achieved using an efficient evolutionary search-based optimization tool, the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps to reduce the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the Beaver River and the Weber River in the USA. For both rivers, the proposed GA-based hybrid model, with simultaneous exploration of the parametric and non-parametric components, yields a much better prediction of the storage capacity than MLE-based hybrid models, where the hybrid model selection is done in two stages and probably results in a sub-optimal model. This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales.
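The multi-objective selection idea can be sketched without the full streamflow machinery: given the two objectives named in the abstract (relative bias and relative RMSE of the simulated storage capacity) evaluated for many candidate parameter sets, keep the non-dominated set. The toy Pareto filter below stands in for NSGA-II and uses random numbers in place of real model output.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (both objectives to be minimized)."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            dominated = np.all(objectives >= objectives[i], axis=1) & \
                        np.any(objectives > objectives[i], axis=1)
            keep[dominated] = False          # points dominated by i are discarded
    return np.flatnonzero(keep)

# Hypothetical objective values for candidate hybrid-model parameter sets:
# column 0 = |relative bias| of simulated storage capacity,
# column 1 = relative RMSE of simulated storage capacity.
rng = np.random.default_rng(0)
objs = rng.uniform(0.0, 1.0, size=(200, 2))
front = pareto_front(objs)                   # indices of Pareto-optimal candidates
```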
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
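A minimal Monte Carlo sketch of the idea follows, assuming a one-parameter exponential liquidation schedule, an additive random-walk price and a simple temporary impact term; none of these modelling choices are taken from the paper, and the figures are illustrative only.

```python
import numpy as np

def simulate_costs(kappa, total_shares=1e5, T=10, n_paths=5000,
                   sigma=0.02, temp_impact=1e-6, seed=0):
    """Monte Carlo execution costs of an exponential (parametric) liquidation schedule."""
    rng = np.random.default_rng(seed)
    w = np.exp(-kappa * np.arange(T))
    trades = total_shares * w / w.sum()                   # shares sold per period
    price = 100.0 + sigma * 100.0 * rng.standard_normal((n_paths, T)).cumsum(axis=1)
    exec_price = price - temp_impact * trades             # temporary impact penalizes large trades
    cost = total_shares * 100.0 - (exec_price * trades).sum(axis=1)   # shortfall vs initial price
    return cost

def mean_cvar(cost, alpha=0.95):
    var = np.quantile(cost, alpha)
    return cost.mean(), cost[cost >= var].mean()          # expected cost and CVaR of the tail

# Grid search over the single strategy parameter, trading off mean cost and CVaR
for kappa in (0.0, 0.2, 0.5, 1.0):
    m, c = mean_cvar(simulate_costs(kappa))
    print(f"kappa={kappa:.1f}  mean cost={m:,.0f}  CVaR95={c:,.0f}")
```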
Parametric motion control of robotic arms: A biologically based approach using neural networks
NASA Technical Reports Server (NTRS)
Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.
1993-01-01
A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
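The scaling-and-shaping idea can be sketched as follows; the prototype profile, amplitudes and durations below are hypothetical, not the network outputs described in the abstract.

```python
import numpy as np

def scaled_torque(t, amplitude, duration, prototype=lambda s: np.sin(np.pi * s) ** 2):
    """Scale a prototypical torque-time profile in magnitude and duration.

    The prototype is defined on normalized time s in [0, 1]; outside the movement it is zero.
    """
    s = t / duration
    tau = np.where((s >= 0.0) & (s <= 1.0), prototype(np.clip(s, 0.0, 1.0)), 0.0)
    return amplitude * tau

# A two-joint point-to-point movement is then specified by a handful of parameters
t = np.linspace(0.0, 1.5, 300)
tau_shoulder = scaled_torque(t, amplitude=4.0, duration=1.2)
tau_elbow = scaled_torque(t, amplitude=1.5, duration=0.9)
```

In the approach described above, a neural network would map the initial joint configuration and desired end-effector position to such amplitude and timing parameters, one set per joint.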
Comparison of thawing and freezing dark energy parametrizations
NASA Astrophysics Data System (ADS)
Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.
2016-05-01
Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significant misleading features like crossing of the phantom divide line, incorrect w(z=0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance D_L(z) based on either thawing or freezing quintessence models and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important to either use both convex and concave parametrizations and select the one with the best χ², or use principal component analysis, thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
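The proposed family can be evaluated directly; a minimal Python sketch (parameter values are arbitrary, not fitted):

```python
import numpy as np

def w_generalized(z, w0, wa, n):
    """Proposed family w(z) = w0 + wa * (z / (1 + z))**n; n = 1 recovers CPL."""
    return w0 + wa * (z / (1.0 + z)) ** n

z = np.linspace(0.0, 3.0, 301)
w_cpl = w_generalized(z, w0=-1.0, wa=0.5, n=1)        # CPL (Chevallier-Polarski-Linder)
w_n5 = w_generalized(z, w0=-1.0, wa=0.5, n=5)         # larger n interpolates toward freezing-type behaviour, per the abstract
```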
Pant, Sanjay; Lombardi, Damiano
2015-10-01
A new approach for assessing parameter identifiability of dynamical systems in a Bayesian setting is presented. The concept of Shannon entropy is employed to measure the inherent uncertainty in the parameters. The expected reduction in this uncertainty is seen as the amount of information one expects to gain about the parameters due to the availability of noisy measurements of the dynamical system. Such expected information gain is interpreted in terms of the variance of a hypothetical measurement device that can measure the parameters directly, and is related to practical identifiability of the parameters. If the individual parameters are unidentifiable, correlation between parameter combinations is assessed through conditional mutual information to determine which sets of parameters can be identified together. The information theoretic quantities of entropy and information are evaluated numerically through a combination of Monte Carlo and k-nearest neighbour methods in a non-parametric fashion. Unlike many methods to evaluate identifiability proposed in the literature, the proposed approach takes the measurement-noise into account and is not restricted to any particular noise-structure. Whilst computationally intensive for large dynamical systems, it is easily parallelisable and is non-intrusive as it does not necessitate re-writing of the numerical solvers of the dynamical system. The application of such an approach is presented for a variety of dynamical systems--ranging from systems governed by ordinary differential equations to partial differential equations--and, where possible, validated against results previously published in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.
Parameter Estimation with Entangled Photons Produced by Parametric Down-Conversion
NASA Technical Reports Server (NTRS)
Cable, Hugo; Durkin, Gabriel A.
2010-01-01
We explore the advantages offered by twin light beams produced in parametric down-conversion for precision measurement. The symmetry of these bipartite quantum states, even under losses, suggests that monitoring correlations between the divergent beams permits a high-precision inference of any symmetry-breaking effect, e.g., fiber birefringence. We show that the quantity of entanglement is not the key feature for such an instrument. In a lossless setting, scaling of precision at the ultimate "Heisenberg" limit is possible with photon counting alone. Even as photon losses approach 100% the precision is shot-noise limited, and we identify the crossover point between quantum and classical precision as a function of detected flux. The predicted hypersensitivity is demonstrated with a Bayesian simulation.
Kurita, Takashi; Sueda, Keiichi; Tsubakimoto, Koji; Miyanaga, Noriaki
2010-07-05
We experimentally demonstrated coherent beam combining using optical parametric amplification with a nonlinear crystal pumped by a random-phased multiple-beam array of the second harmonic of a Nd:YAG laser at a 10-Hz repetition rate. In the proof-of-principle experiment, the phase jump between two pump beams was precisely controlled by a motorized actuator. For the demonstration of multiple-beam combining, a random phase plate was used to create random-phased beamlets as a pump pulse. Far-field patterns of the pump, the signal, and the idler indicated that spatially coherent signal beams were obtained in both cases. This approach allows scaling of the intensity of optical parametric chirped pulse amplification up to the exawatt level while maintaining diffraction-limited beam quality.
ERIC Educational Resources Information Center
McMillen, Daniel P.; Singell, Larry D., Jr.
2010-01-01
Prior work uses a parametric approach to study the distributional effects of school finance reform and finds evidence that reform yields greater equality of school expenditures by lowering spending in high-spending districts (leveling down) or increasing spending in low-spending districts (leveling up). We develop a kernel density…
An Integrated Approach to Damage Accommodation in Flight Control
NASA Technical Reports Server (NTRS)
Boskovic, Jovan D.; Knoebel, Nathan; Mehra, Raman K.; Gregory, Irene
2008-01-01
In this paper we present an integrated approach to in-flight damage accommodation in flight control. The approach is based on Multiple Models, Switching and Tuning (MMST), and consists of three steps. In the first step the main objective is to acquire a realistic aircraft damage model. Modeling of in-flight damage is a highly complex problem since there is a large number of issues that need to be addressed. One of the most important is the strong coupling between structural dynamics, aerodynamics, and flight control; due to this coupling, these effects cannot be studied separately. Once a realistic damage model is available, in the second step a large number of models corresponding to different damage cases are generated. One possibility is to generate many linear models and interpolate between them to cover a large portion of the flight envelope. Once these models have been generated, we implement a recently developed Model Set Reduction (MSR) technique. The technique is based on parameterizing damage in terms of uncertain parameters, and uses concepts from robust control theory to arrive at a small number of "centered" models such that the controllers corresponding to these models assure desired stability and robustness properties over a subset in the parametric space. By devising a suitable model placement strategy, the entire parametric set is covered with a relatively small number of models and controllers. The third step consists of designing a Multiple Models, Switching and Tuning (MMST) strategy for estimating the current operating regime (damage case) of the aircraft, and switching to the corresponding controller to achieve effective damage accommodation and the desired performance. In the paper we present a comprehensive approach to damage accommodation using Model Set Design, MMST, and Variable Structure compensation for coupling nonlinearities. The approach was evaluated on a model of F/A-18 aircraft dynamics under control effector damage, augmented by nonlinear cross-coupling terms and a structural dynamics model. The proposed approach achieved excellent performance under severe damage effects.
Location tests for biomarker studies: a comparison using simulations for the two-sample case.
Scheinhardt, M O; Ziegler, A
2013-01-01
Gene, protein, or metabolite expression levels are often non-normally distributed, heavy-tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte Carlo simulation studies, we aimed to compare the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy-tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy-tailed distributions.
Frequency domain optical parametric amplification
Schmidt, Bruno E.; Thiré, Nicolas; Boivin, Maxime; Laramée, Antoine; Poitras, François; Lebrun, Guy; Ozaki, Tsuneyuki; Ibrahim, Heide; Légaré, François
2014-01-01
Today’s ultrafast lasers operate at the physical limits of optical materials to reach extreme performances. Amplification of single-cycle laser pulses with their corresponding octave-spanning spectra still remains a formidable challenge since the universal dilemma of gain narrowing sets limits for both real level pumped amplifiers as well as parametric amplifiers. We demonstrate that employing parametric amplification in the frequency domain rather than in time domain opens up new design opportunities for ultrafast laser science, with the potential to generate single-cycle multi-terawatt pulses. Fundamental restrictions arising from phase mismatch and damage threshold of nonlinear laser crystals are not only circumvented but also exploited to produce a synergy between increased seed spectrum and increased pump energy. This concept was successfully demonstrated by generating carrier envelope phase stable, 1.43 mJ two-cycle pulses at 1.8 μm wavelength. PMID:24805968
Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A
2017-12-01
In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of cerebral arterial trees. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using the receiver operating characteristic curve. Geometric accuracy evaluation showed agreement between the constructed mesh and raw MRA data sets, with an area under the curve value of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Qian, Chunqi; Murphy-Boesch, Joseph; Dodd, Stephen; Koretsky, Alan
2012-09-01
A completely wireless detection coil with an integrated parametric amplifier has been constructed to provide local amplification and transmission of MR signals. The sample coil is one element of a parametric amplifier using a zero-bias diode that mixes the weak MR signal with a strong pump signal that is obtained from an inductively coupled external loop. The NMR sample coil develops current gain via reduction in the effective coil resistance. Higher gain can be obtained by adjusting the level of the pumping power closer to the oscillation threshold, but the gain is ultimately constrained by the bandwidth requirement of MRI experiments. A feasibility study here shows that on a NaCl/D₂O phantom, ²³Na signals with 20 dB of gain can be readily obtained with a concomitant bandwidth of 144 kHz. This gain is high enough that the integrated coil with parametric amplifier, which is coupled inductively to external loops, can provide sensitivity approaching that of direct wire connection. Copyright © 2012 Wiley Periodicals, Inc.
Parametric study of the swimming performance of a fish robot propelled by a flexible caudal fin.
Low, K H; Chong, C W
2010-12-01
In this paper, we aim to study the swimming performance of fish robots by using a statistical approach. A fish robot employing a carangiform swimming mode had been used as an experimental platform for the performance study. The experiments conducted aim to investigate the effect of various design parameters on the thrust capability of the fish robot with a flexible caudal fin. The controllable parameters associated with the fin include frequency, amplitude of oscillation, aspect ratio and the rigidity of the caudal fin. The significance of these parameters was determined in the first set of experiments by using a statistical approach. A more detailed parametric experimental study was then conducted with only those significant parameters. As a result, the parametric study could be completed with a reduced number of experiments and time spent. With the obtained experimental result, we were able to understand the relationship between various parameters and a possible adjustment of parameters to obtain a higher thrust. The proposed statistical method for experimentation provides an objective and thorough analysis of the effects of individual or combinations of parameters on the swimming performance. Such an efficient experimental design helps to optimize the process and determine factors that influence variability.
Yu, Wenbao; Park, Taesung
2014-01-01
It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method for obtaining a linear combination that maximizes the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalized regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and a prudent choice of the smoothing parameter. We apply the proposed AucPR to gene selection and classification using four real microarray data sets and synthetic data. Through numerical studies, AucPR is shown to perform better than penalized logistic regression and the non-parametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose AucPR, a powerful parametric and easily implementable linear classifier, for gene selection and disease prediction with high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
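AucPR's exact formulation is given in the paper; as a hedged sketch of the general pairwise idea, one can regress case-control feature differences on a constant positive target with an elastic-net penalty and use the coefficients as a linear score. The data sizes and penalty settings below are arbitrary, and scikit-learn is assumed available.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def pairwise_auc_fit(X_case, X_ctrl, alpha=0.1, l1_ratio=0.5):
    """Fit a linear score by elastic-net regression on case-control differences.

    Pairwise differences target +1, so the fitted coefficients favour scores that
    rank cases above controls (a surrogate for AUC maximization).
    """
    diffs = (X_case[:, None, :] - X_ctrl[None, :, :]).reshape(-1, X_case.shape[1])
    target = np.ones(diffs.shape[0])
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
    model.fit(diffs, target)
    return model.coef_

def empirical_auc(scores_case, scores_ctrl):
    grid = scores_case[:, None] > scores_ctrl[None, :]
    ties = scores_case[:, None] == scores_ctrl[None, :]
    return (grid + 0.5 * ties).mean()

rng = np.random.default_rng(0)
X_case = rng.normal(0.5, 1.0, size=(40, 200))   # 200 hypothetical genes, cases shifted upward
X_ctrl = rng.normal(0.0, 1.0, size=(40, 200))
beta = pairwise_auc_fit(X_case, X_ctrl)
auc = empirical_auc(X_case @ beta, X_ctrl @ beta)
```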
NASA Astrophysics Data System (ADS)
Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom
2018-05-01
Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly-damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly less parameters compared with alternative approaches.
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-07
A recent series of papers by Charles T. Perretti and collaborators has shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argument is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
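A minimal sketch of the virtual-data idea with the theta-logistic model follows; the parameter values are illustrative, not fitted, and the forecast band is simply the spread of simulated trajectories.

```python
import numpy as np

def simulate_theta_logistic(n0, r, K, theta, sigma, T, seed=0):
    """Simulate abundance under the theta-logistic model with lognormal process noise:
    N[t+1] = N[t] * exp(r * (1 - (N[t]/K)**theta) + eps),  eps ~ Normal(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    N = np.empty(T)
    N[0] = n0
    for t in range(T - 1):
        eps = rng.normal(0.0, sigma)
        N[t + 1] = N[t] * np.exp(r * (1.0 - (N[t] / K) ** theta) + eps)
    return N

# Generate "virtual data" from one hypothetical fitted parameter set to gauge forecast uncertainty
series = [simulate_theta_logistic(n0=50, r=0.5, K=100, theta=1.2, sigma=0.1, T=30, seed=s)
          for s in range(200)]
forecast_band = np.percentile(np.array(series)[:, -1], [5, 50, 95])   # spread of the final abundance
```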
Sensor fusion approaches for EMI and GPR-based subsurface threat identification
NASA Astrophysics Data System (ADS)
Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.
2011-06-01
Despite advances in both electromagnetic induction (EMI) and ground penetrating radar (GPR) sensing and related signal processing, neither sensor alone provides a perfect tool for detecting the myriad of possible buried objects that threaten the lives of Soldiers and civilians. However, while neither GPR nor EMI sensing alone can provide optimal detection across all target types, the two approaches are highly complementary. As a result, many landmine systems seek to make use of both sensing modalities simultaneously and fuse the results from both sensors to improve detection performance for targets with widely varying metal content and GPR responses. Despite this, little work has focused on large-scale comparisons of different approaches to sensor fusion and machine learning for combining data from these highly orthogonal phenomenologies. In this work we explore a wide array of pattern recognition techniques for algorithm development and sensor fusion. Results with the ARA Nemesis landmine detection system suggest that nonlinear and non-parametric classification algorithms provide significant performance benefits for single-sensor algorithm development, and that fusion of multiple algorithms can be performed satisfactorily using basic parametric approaches, such as logistic discriminant classification, for the targets under consideration in our data sets.
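A hedged sketch of the decision-level fusion step with a logistic discriminant, using synthetic detector confidences in place of real EMI/GPR algorithm outputs; scikit-learn is assumed available and all numbers are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-alarm confidence outputs of an EMI-based and a GPR-based detector
rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, size=n)                        # 1 = target, 0 = clutter
emi_conf = labels * 0.6 + rng.normal(0.3, 0.20, size=n)    # EMI favours high-metal targets
gpr_conf = labels * 0.4 + rng.normal(0.3, 0.25, size=n)    # GPR responds to dielectric contrast

X = np.column_stack([emi_conf, gpr_conf])
fusion = LogisticRegression().fit(X, labels)               # logistic discriminant fusion
fused_conf = fusion.predict_proba(X)[:, 1]                 # combined confidence per alarm
```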
Multi-parametric analysis of phagocyte antimicrobial responses using imaging flow cytometry.
Havixbeck, Jeffrey J; Wong, Michael E; More Bayona, Juan A; Barreda, Daniel R
2015-08-01
We feature a multi-parametric approach based on an imaging flow cytometry platform for examining phagocyte antimicrobial responses against the gram-negative bacterium Aeromonas veronii. This pathogen is known to induce strong inflammatory responses across a broad range of animal species, including humans. We examined the contribution of A. veronii to the induction of early phagocyte inflammatory processes in RAW 264.7 murine macrophages in vitro. We found that A. veronii, in both live and heat-killed forms, induced similar levels of macrophage activation based on NF-κB translocation. Although these macrophages maintained high levels of viability following heat-killed or live challenges with A. veronii, we identified inhibition of macrophage proliferation as early as 1 h post in vitro challenge. The characterization of phagocytic responses showed a time-dependent increase in phagocytosis upon A. veronii challenge, which was paired with a robust induction of intracellular respiratory burst responses. Interestingly, despite the overall increase in the production of reactive oxygen species (ROS) among RAW 264.7 macrophages, we found a significant reduction in the production of ROS among the macrophage subset that had bound A. veronii. Phagocytic uptake of the pathogen further decreased ROS production levels, even beyond those of unstimulated controls. Overall, this multi-parametric imaging flow cytometry-based approach allowed for segregation of unique phagocyte sub-populations and examination of their downstream antimicrobial responses, and should contribute to improved understanding of phagocyte responses against Aeromonas and other pathogens. Copyright © 2015 Elsevier B.V. All rights reserved.
Rough set approach for accident chains exploration.
Wong, Jinn-Tsai; Chung, Yi-Shih
2007-05-01
This paper presents a novel non-parametric methodology--rough set theory--for accident occurrence exploration. The rough set theory allows researchers to analyze accidents in multiple dimensions and to model accident occurrence as factor chains. Factor chains are composed of driver characteristics, trip characteristics, driver behavior and environment factors that imply typical accident occurrence. A real-world database (2003 Taiwan single auto-vehicle accidents) is used as an example to demonstrate the proposed approach. The results show that although most accident patterns are unique, some accident patterns are significant and worth noting. Student drivers who are young and less experienced exhibit a relatively high probability of being involved in off-road accidents on roads with a speed limit between 51 and 79 km/h under normal driving circumstances. Notably, for bump-into-facility accidents, wet surface is a distinctive environmental factor.
Impacts of Advanced Manufacturing Technology on Parametric Estimating
1989-12-01
been built (Blois, p. 65). As firms move up the levels of automation, there is a large capital investment to acquire robots, computer numerically... Affordable Acquisition Approach Study, Executive Summary, Air Force Systems Command, Andrews AFB, Maryland, February 9, 1983. Blois, K.J., "Manufacturing...
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that, for the 22 parameters perturbed in the cloud ensemble, the six with the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally, the GLM is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows a significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
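The Latin hypercube sampling step can be sketched with SciPy's qmc module (SciPy 1.7 or newer assumed); the parameter names and ranges below are hypothetical placeholders, not the CAM5 values used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical ranges for a few perturbed cloud/convection parameters
names = ["entrainment_rate", "autoconversion_size", "cape_timescale"]
lower = np.array([0.5e-3, 8.0e-6, 1800.0])
upper = np.array([2.0e-3, 14.0e-6, 14400.0])

sampler = qmc.LatinHypercube(d=len(names), seed=0)
unit_samples = sampler.random(n=1100)                 # one row per ensemble member
param_sets = qmc.scale(unit_samples, lower, upper)    # map [0, 1]^d to the physical ranges
```

Each row of param_sets would then define one perturbed-parameter model run, and a generalized linear model (GLM) fitted to the resulting precipitation metrics gives the explained-variance decomposition described above.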
NASA Astrophysics Data System (ADS)
Boito, D.; Dedonder, J.-P.; El-Bennich, B.; Escribano, R.; Kamiński, R.; Leśniak, L.; Loiseau, B.
2017-12-01
We introduce parametrizations of hadronic three-body B and D weak decay amplitudes that can be readily implemented in experimental analyses and are a sound alternative to the simplistic and widely used sum of Breit-Wigner type amplitudes, also known as the isobar model. These parametrizations can be particularly useful in the interpretation of CP asymmetries in the Dalitz plots. They are derived from previous calculations based on a quasi-two-body factorization approach in which two-body hadronic final-state interactions are fully taken into account in terms of unitary S- and P-wave ππ, πK, and KK̄ form factors. These form factors can be determined rigorously, fulfilling fundamental properties of quantum field-theory amplitudes such as analyticity and unitarity, and are in agreement with the low-energy behavior predicted by effective theories of QCD. They are derived from sets of coupled-channel equations using T-matrix elements constrained by experimental meson-meson phase shifts and inelasticities, chiral symmetry, and asymptotic QCD. We provide explicit amplitude expressions for the decays B±→π+π-π±, B→Kπ+π-, B±→K+K-K±, D+→π-π+π+, D+→K-π+π+, and D0→KS0π+π-, for which we have shown in previous studies that this approach is phenomenologically successful; in addition, we provide expressions for the D0→KS0K+K- decay. Other three-body hadronic channels can be parametrized likewise.
2017-03-23
... solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or ... methodFlag: designates the method by which the changed/new assignment problem instance is solved; methodFlag = 0: SMAWarmstart returns a matching ... of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified ...
Locally adaptive decision in detection of clustered microcalcifications in mammograms.
Sainz de Cea, María V; Nishikawa, Robert M; Yang, Yongyi
2018-02-15
In computer-aided detection or diagnosis of clustered microcalcifications (MCs) in mammograms, the performance often suffers from not only the presence of false positives (FPs) among the detected individual MCs but also large variability in detection accuracy among different cases. To address this issue, we investigate a locally adaptive decision scheme in MC detection by exploiting the noise characteristics in a lesion area. Instead of developing a new MC detector, we propose a decision scheme on how to best decide whether a detected object is an MC or not in the detector output. We formulate the individual MCs as statistical outliers compared to the many noisy detections in a lesion area so as to account for the local image characteristics. To identify the MCs, we first consider a parametric method for outlier detection, the Mahalanobis distance detector, which is based on a multi-dimensional Gaussian distribution on the noisy detections. We also consider a non-parametric method which is based on a stochastic neighbor graph model of the detected objects. We demonstrated the proposed decision approach with two existing MC detectors on a set of 188 full-field digital mammograms (95 cases). The results, evaluated using free-response receiver operating characteristic (FROC) analysis, showed a significant improvement in detection accuracy by the proposed outlier decision approach over traditional thresholding (the partial area under the FROC curve increased from 3.95 to 4.25, p-value < 10^-4). There was also a reduction in case-to-case variability in detected FPs at a given sensitivity level. The proposed adaptive decision approach could not only reduce the number of FPs in detected MCs but also improve case-to-case consistency in detection.
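The outlier formulation can be illustrated with a short, hedged sketch: a Gaussian model is fit to the features of the noisy detections in a lesion area, and candidate objects whose squared Mahalanobis distance falls in the chi-square upper tail are flagged. The feature values and the 99% cut-off below are illustrative assumptions, not the detectors or thresholds used in the paper.

```python
# Minimal sketch of a Mahalanobis-distance outlier rule on synthetic features.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
noise_feats = rng.normal(size=(200, 3))                   # features of noisy detections
candidates = np.vstack([rng.normal(size=(5, 3)),          # typical objects
                        rng.normal(loc=4.0, size=(3, 3))])  # outlier-like (MC-like) objects

mu = noise_feats.mean(axis=0)
cov = np.cov(noise_feats, rowvar=False)
cov_inv = np.linalg.inv(cov)

diff = candidates - mu
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)        # squared Mahalanobis distance

# Under the Gaussian noise model, d2 ~ chi-square with 3 degrees of freedom,
# so flag objects in the upper tail as statistical outliers (candidate MCs).
threshold = chi2.ppf(0.99, df=3)
is_outlier = d2 > threshold
print(np.round(d2, 1), is_outlier)
```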
A comparison of methods for estimating the random effects distribution of a linear mixed model.
Ghidey, Wendimagegn; Lesaffre, Emmanuel; Verbeke, Geert
2010-12-01
This article reviews various recently suggested approaches to estimate the random effects distribution in a linear mixed model, namely (1) the smoothing by roughening approach of Shen and Louis (1), (2) the semi-non-parametric approach of Zhang and Davidian (2), (3) the heterogeneity model of Verbeke and Lesaffre (3), and (4) the flexible approach of Ghidey et al. (4). These four approaches are compared via an extensive simulation study. We conclude that for the considered cases, the approach of Ghidey et al. (4) often has the smallest integrated mean squared error for estimating the random effects distribution. An analysis of a longitudinal dental data set illustrates the performance of the methods in a practical example.
Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach
Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy
2014-01-01
We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semilocal and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA. PMID:25520901
Yin, Jingjing; Nakas, Christos T; Tian, Lili; Reiser, Benjamin
2018-03-01
This article explores both existing and new methods for the construction of confidence intervals for differences of indices of diagnostic accuracy of competing pairs of biomarkers in three-class classification problems and fills the methodological gaps for both parametric and non-parametric approaches in the receiver operating characteristic surface framework. The most widely used such indices are the volume under the receiver operating characteristic surface and the generalized Youden index. We describe implementation of all methods and offer insight regarding the appropriateness of their use through a large simulation study with different distributional and sample size scenarios. Methods are illustrated using data from the Alzheimer's Disease Neuroimaging Initiative study, where assessment of cognitive function naturally results in a three-class classification setting.
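For readers unfamiliar with the volume under the ROC surface (VUS), the hedged sketch below shows the standard empirical estimator for three ordered classes: the proportion of cross-class triples that are correctly ordered. The marker values are simulated, and this is a generic illustration rather than the authors' implementation.

```python
# Minimal sketch of the empirical volume under the ROC surface (VUS) for a
# three-class problem: the fraction of (x, y, z) triples, one from each ordered
# class, with x < y < z. Data here are synthetic.
import numpy as np

def empirical_vus(x, y, z):
    """x, y, z: 1-D marker values from the three ordered classes."""
    x = np.asarray(x)[:, None, None]
    y = np.asarray(y)[None, :, None]
    z = np.asarray(z)[None, None, :]
    return np.mean((x < y) & (y < z))

rng = np.random.default_rng(1)
healthy  = rng.normal(0.0, 1.0, 50)
mci      = rng.normal(1.0, 1.0, 40)
diseased = rng.normal(2.0, 1.0, 30)
print(round(empirical_vus(healthy, mci, diseased), 3))  # well above the chance level of 1/6
```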
Pinching parameters for open (super) strings
NASA Astrophysics Data System (ADS)
Playle, Sam; Sciuto, Stefano
2018-02-01
We present an approach to the parametrization of (super) Schottky space obtained by sewing together three-punctured discs with strips. Different cubic ribbon graphs classify distinct sets of pinching parameters; we show how they are mapped onto each other. The parametrization is particularly well-suited to describing the region within (super) moduli space where open bosonic or Neveu-Schwarz string propagators become very long and thin, which dominates the IR behaviour of string theories. We show how worldsheet objects such as the Green's function converge to graph theoretic objects such as the Symanzik polynomials in the α′ → 0 limit, allowing us to see how string theory reproduces the sum over Feynman graphs. The (super) string measure takes on a simple and elegant form when expressed in terms of these parameters.
Evaluation of Second-Level Inference in fMRI Analysis
Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs
2016-01-01
We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take into account this variability. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with minimal cluster size yields the most stable results, followed by the familywise error rate correction. FDR correction yields the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578
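A minimal, hedged sketch of this second-level pipeline (a parametric test per voxel followed by FDR control) is given below; the subject contrast images are simulated, and no first-level variance weighting or cluster-extent step is included.

```python
# Minimal sketch of a second-level (over subjects) mass-univariate analysis:
# a one-sample t-test per voxel on subject contrast images, followed by
# Benjamini-Hochberg FDR correction. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 5000
contrasts = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
contrasts[:, :200] += 0.8                      # voxels with a true effect

t, p = stats.ttest_1samp(contrasts, popmean=0.0, axis=0)

def bh_fdr(pvals, q=0.05):
    """Return a boolean mask of rejections at FDR level q (Benjamini-Hochberg)."""
    m = pvals.size
    order = np.argsort(pvals)
    thresh = q * (np.arange(1, m + 1) / m)
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

significant = bh_fdr(p, q=0.05)
print("voxels passing FDR:", significant.sum())
```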
Combining large number of weak biomarkers based on AUC.
Yan, Li; Tian, Lili; Liu, Song
2015-12-20
Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
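As a hedged illustration of what "optimal linear combination" means here, the sketch below scores combinations by the empirical (Mann-Whitney) AUC and uses the classical Fisher/LDA weights, which are AUC-optimal only under equal-covariance normality; the weak markers are simulated, and this is not the proposed pairwise method.

```python
# Minimal sketch: linear combination of many weak markers scored by empirical AUC.
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    # Mann-Whitney form of the AUC (ties counted as 1/2).
    diff = scores_pos[:, None] - scores_neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(2)
p = 30                                         # many weak markers
mu = np.full(p, 0.15)                          # small per-marker effect
x_neg = rng.normal(0.0, 1.0, size=(300, p))
x_pos = rng.normal(mu, 1.0, size=(300, p))

# Fisher/LDA combination weights: pooled-covariance inverse times mean difference.
pooled = 0.5 * (np.cov(x_pos, rowvar=False) + np.cov(x_neg, rowvar=False))
w = np.linalg.solve(pooled, x_pos.mean(0) - x_neg.mean(0))

auc_single = max(empirical_auc(x_pos[:, j], x_neg[:, j]) for j in range(p))
auc_combo = empirical_auc(x_pos @ w, x_neg @ w)
print(f"best single-marker AUC: {auc_single:.3f}, combined AUC: {auc_combo:.3f}")
```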
NASA Astrophysics Data System (ADS)
Perry, Dan; Nakamoto, Mark; Verghese, Nishath; Hurat, Philippe; Rouse, Rich
2007-03-01
Model-based hotspot detection and silicon-aware parametric analysis help designers optimize their chips for yield, area and performance without the high cost of applying foundries' recommended design rules. This set of DFM/ recommended rules is primarily litho-driven, but cannot guarantee a manufacturable design without imposing overly restrictive design requirements. This rule-based methodology of making design decisions based on idealized polygons that no longer represent what is on silicon needs to be replaced. Using model-based simulation of the lithography, OPC, RET and etch effects, followed by electrical evaluation of the resulting shapes, leads to a more realistic and accurate analysis. This analysis can be used to evaluate intelligent design trade-offs and identify potential failures due to systematic manufacturing defects during the design phase. The successful DFM design methodology consists of three parts: 1. Achieve a more aggressive layout through limited usage of litho-related recommended design rules. A 10% to 15% area reduction is achieved by using more aggressive design rules. DFM/recommended design rules are used only if there is no impact on cell size. 2. Identify and fix hotspots using a model-based layout printability checker. Model-based litho and etch simulation are done at the cell level to identify hotspots. Violations of recommended rules may cause additional hotspots, which are then fixed. The resulting design is ready for step 3. 3. Improve timing accuracy with a process-aware parametric analysis tool for transistors and interconnect. Contours of diffusion, poly and metal layers are used for parametric analysis. In this paper, we show the results of this physical and electrical DFM methodology at Qualcomm. We describe how Qualcomm was able to develop more aggressive cell designs that yielded a 10% to 15% area reduction using this methodology. Model-based shape simulation was employed during library development to validate architecture choices and to optimize cell layout. At the physical verification stage, the shape simulator was run at full-chip level to identify and fix residual hotspots on interconnect layers, on poly or metal 1 due to interaction between adjacent cells, or on metal 1 due to interaction between routing (via and via cover) and cell geometry. To determine an appropriate electrical DFM solution, Qualcomm developed an experiment to examine various electrical effects. After reporting the silicon results of this experiment, which showed sizeable delay variations due to lithography-related systematic effects, we also explain how contours of diffusion, poly and metal can be used for silicon-aware parametric analysis of transistors and interconnect at the cell-, block- and chip-level.
Cloud GPU-based simulations for SQUAREMR.
Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H
2017-01-01
Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphics Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms. The developed cloud-based cluster and optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database, thus bringing SQUAREMR's applicability within time frames that would likely be acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Siegenthaler-Le Drian, C.; Spichtinger, P.; Lohmann, U.
2010-09-01
Marine stratocumulus-capped boundary layers exhibit a strong net cooling impact on the Earth-Atmosphere system. Moreover, they are highly persistent over subtropical oceans. Therefore, climate models need to represent them well in order to make reliable projections of future climate. One of the reasons for the absence of stratocumuli in the general circulation model ECHAM5-HAM (Roeckner et al., 2003; Stier et al., 2005) is the limited vertical resolution. In the current model version, no vertical sub-grid scale variability of clouds is taken into account, such that clouds occupy the full vertical layer. Around the inversion on top of the planetary boundary layer (PBL), conserved variables often have a steep gradient, which in a GCM may produce large discretization errors (Bretherton and Park, 2009). This inversion has a large diurnal cycle and varies with location around the globe, which is difficult to represent in a classical, coarse Eulerian approach. Furthermore, Lenderink and Holtslag (2000) and Lock (2001) showed that an inconsistent numerical representation between the entrainment parametrization and the other schemes, particularly the vertical advection, can lead to the occurrence of 'numerical entrainment'. The problem can be resolved by introducing a dynamical inversion as proposed by Grenier and Bretherton (2001) and Lock (2001). As these features can be seen in our version of ECHAM5-HAM, our implementation aims to reduce the numerical entrainment and to better represent stratocumuli in ECHAM5-HAM. To better resolve stratocumulus clouds, their inversion and the interaction between the turbulent diffusion and the vertical advection, the vertical grid is dynamically refined. The new grid is based on the reconstruction of the profiles of variables experiencing a sharp gradient (temperature, mixing ratio), applying the method presented in Grenier and Bretherton (2001). In typical stratocumulus regions, an additional grid level is thus associated with the PBL top. In case a cloud can form, a new level is associated with the lifting condensation level as well. The regular grid plus the two additional levels define the new dynamical grid, which varies geographically and temporally. The physical processes are computed on this new dynamical grid. Consequently, the sharp gradients and the interaction between the different processes can be better resolved. Some results of this new parametrization will be presented. In a single-column model set-up, the reconstruction method accurately finds the inversion at the PBL top for the EPIC stratocumulus case. Also, on a global scale, successful reconstructions, which are restricted to typical stratocumulus regions, occur with a high frequency. The impact of the new dynamical grid on clouds and the radiation balance will be presented in the talk. References [Bretherton and Park, 2009] Bretherton, C. S. and Park, S. (2009). A new moist turbulence parametrization in the community atmosphere model. J. Climate, 22:3422-3448. [Grenier and Bretherton, 2001] Grenier, H. and Bretherton, C. S. (2001). A moist parametrization for large-scale models and its application to subtropical cloud-topped marine boundary layers. Mon. Wea. Rev., 129:357-377. [Lenderink and Holtslag, 2000] Lenderink, G. and Holtslag, A. M. (2000). Evaluation of the kinetic energy approach for modeling turbulent fluxes in stratocumulus. Mon. Wea. Rev., 128:244-258. [Lock, 2001] Lock, A. P. (2001).
The numerical representation of entrainment in parametrizations of boundary layer turbulent mixing. Mon. Wea. Rev., 129:1148-1163. [Roeckner et al., 2003] Roeckner, E., Bäuml, G., Bonaventura, L. et al. (2003). The atmospheric general circulation model echam5, part I: Model description. Technical Report 349, Max-Planck-Institute for Meteorology, Hamburg,Germany. [Stier et al., 2005] Stier, P., Feichter, J., Kinne, S. et al. (2005). The aerosol-climate model ECHAM5-HAM. Atmos. Chem. Phys., 5:1125-1156.
Robust point matching via vector field consensus.
Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu
2014-04-01
In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
Efficiency Analysis of Public Universities in Thailand
ERIC Educational Resources Information Center
Kantabutra, Saranya; Tang, John C. S.
2010-01-01
This paper examines the performance of Thai public universities in terms of efficiency, using a non-parametric approach called data envelopment analysis. Two efficiency models, the teaching efficiency model and the research efficiency model, are developed and the analysis is conducted at the faculty level. Further statistical analyses are also…
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved with a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is chosen so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the robustness of the retrieval results to interference. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
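A hedged sketch of the damped LSQR inversion idea is shown below, with a generic smooth kernel standing in for the ADA forward model; the kernel, noise level and damping value are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of a non-parametric retrieval with LSQR: solve an
# ill-conditioned linear system K f = g for a discretized size distribution f,
# with Tikhonov-style damping supplied through lsqr's `damp` argument.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_wavelengths, n_bins = 12, 40
# Hypothetical smooth forward kernel mapping size bins to extinction signals.
K = np.exp(-0.5 * ((np.arange(n_wavelengths)[:, None] / 3.0
                    - np.arange(n_bins)[None, :] / 10.0) ** 2))

f_true = np.exp(-0.5 * ((np.arange(n_bins) - 15.0) / 4.0) ** 2)  # single-mode distribution
g = K @ f_true
g_noisy = g + 0.01 * g.std() * rng.normal(size=g.shape)          # random measurement noise

# Damped least squares: minimizes ||K f - g||^2 + damp^2 ||f||^2.
f_est = lsqr(K, g_noisy, damp=1e-2)[0]
print("relative error:", np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true))
```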
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM) which have made process-based simulation at high spatiotemporal resolution applicable on large spatial scales. Despite increasing information on the heterogeneous properties of a catchment, DHM is still subject to uncertainties inherently coming from model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, however, parametric uncertainty is often ignored, mainly due to practical limitations of the methodology for specifying modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of the observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate the parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated into DA. Lagged particle filtering is utilized to consider the response times and non-Gaussian characteristics of internal hydrologic processes. Hindcasting experiments are implemented to evaluate the impacts of the proposed DA method on streamflow predictions in multiple European river basins having different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach can be stable with limited ensemble members and viable for practical use.
Quintela-del-Río, Alejandro; Francisco-Fernández, Mario
2011-02-01
The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists in fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
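For reference, the classical parametric step the authors compare against can be sketched as follows; the block maxima are simulated rather than AURN ozone data, and scipy's genextreme parametrization (its shape parameter c is the negative of the usual GEV shape) is assumed.

```python
# Minimal sketch of classical extreme value analysis: fit a GEV distribution to
# block maxima and compute a T-block return level. Data are simulated.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_maxima = genextreme.rvs(c=-0.1, loc=80.0, scale=10.0, size=40, random_state=rng)

# Maximum-likelihood fit of the GEV.
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)

T = 50  # return period in blocks (e.g. years)
return_level = genextreme.ppf(1.0 - 1.0 / T, c_hat, loc=loc_hat, scale=scale_hat)
print(f"estimated {T}-block return level: {return_level:.1f}")
```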
Tuning a climate model using nudging to reanalysis.
NASA Astrophysics Data System (ADS)
Cheedela, S. K.; Mapes, B. E.
2014-12-01
Tuning an atmospheric general circulation model involves the daunting task of adjusting non-observable parameters to obtain a realistic mean climate. These parameters arise from the necessity to describe unresolved flow through parametrizations. Tuning a climate model is often done with a certain set of priorities, such as the global mean temperature and the net top-of-the-atmosphere radiation. These priorities are hard enough to reach, let alone reducing systematic biases in the models. The goal of the current study is to explore alternative ways to tune a climate model to reduce some systematic biases, which can be used in synergy with existing efforts. Nudging a climate model to a known state is a poor man's inverse of the tuning process described above. Our approach involves nudging the atmospheric model to state-of-the-art reanalysis fields, thereby providing a balanced state with respect to the global mean temperature and winds. The tendencies derived from nudging are the negative of the errors from the physical parametrizations, since the errors from the dynamical core would be small. Patterns of nudging are compared to the patterns of different physical parametrizations to decipher the causes of certain biases in relation to tuning parameters. This approach might also help in understanding certain compensating errors that arise from the tuning process. ECHAM6 is a comprehensive general circulation model, also used in the recent Coupled Model Intercomparison Project (CMIP5). The approach used to tune it and the effects of certain parameters on its mean climate are clearly reported; hence it serves as a benchmark for our approach. Our planned experiments include nudging the ECHAM6 atmospheric model to the European Centre reanalysis (ERA-Interim) and the reanalysis from the National Centers for Environmental Prediction (NCEP), and deciphering the choice of certain parameters that lead to systematic biases in its simulations. Of particular interest is reducing long-standing biases related to the simulation of the Asian summer monsoon.
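The nudging (Newtonian relaxation) idea underlying this approach can be written compactly; the symbols below are generic placeholders rather than ECHAM6 notation.

```latex
% Standard Newtonian-relaxation (nudging) form with generic symbols:
% X is a prognostic model field, X_ra the reanalysis target, and tau the
% relaxation time scale. The accumulated nudging tendency approximates the
% negative of the model's parametrization error.
\frac{\partial X}{\partial t}
  = \underbrace{D(X)}_{\text{dynamics}}
  + \underbrace{P(X)}_{\text{parametrizations}}
  + \underbrace{\frac{X_{\mathrm{ra}} - X}{\tau}}_{\text{nudging tendency}}
```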
NASA Astrophysics Data System (ADS)
Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François
2018-06-01
The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
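A hedged sketch of the regularized, non-negative full-deconvolution idea is given below: the convolution is written as a Toeplitz system, a second-difference penalty enforces smoothness, and positivity is imposed through non-negative least squares. The synthetic series and the single regularization weight are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal sketch: recover an impulse response h >= 0 from rain input r and
# aquifer output y = r * h by regularized non-negative least squares.
import numpy as np
from scipy.optimize import nnls
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n, m = 200, 60
rain = rng.gamma(shape=0.3, scale=5.0, size=n)              # synthetic rain series
t = np.arange(m)
h_true = t * np.exp(-t / 10.0); h_true /= h_true.sum()      # synthetic residence-time curve
level = np.convolve(rain, h_true)[:n] + 0.01 * rng.normal(size=n)

# Convolution written as an (n x m) Toeplitz matrix acting on h (causality built in).
A = toeplitz(rain, np.concatenate([rain[:1], np.zeros(m - 1)]))
# Second-difference operator enforcing smoothness of h.
D = np.diff(np.eye(m), n=2, axis=0)

lam = 5.0  # single regularization parameter balancing fit and smoothness
A_aug = np.vstack([A, lam * D])
y_aug = np.concatenate([level, np.zeros(D.shape[0])])
h_est, _ = nnls(A_aug, y_aug)                                # positivity enforced by NNLS
print("sum of estimated response:", h_est.sum())
```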
Evaluation of variable selection methods for random forests and omics data sets.
Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke
2017-10-16
Machine learning methods, and in particular random forests, are promising approaches for prediction based on high-dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann), as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
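The shadow-feature idea behind Boruta can be sketched as below (a single iteration on simulated data); the full Boruta procedure repeats this comparison with statistical testing, and the sample sizes and forest settings here are arbitrary assumptions.

```python
# Minimal sketch of the shadow-feature idea: permuted copies of all predictors
# are added, a random forest is fit, and a real feature is kept only if its
# importance exceeds the best shadow importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 300, 50
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

shadows = rng.permuted(X, axis=0)            # shuffle each column, destroying links to y
X_aug = np.hstack([X, shadows])

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_aug, y)
imp = rf.feature_importances_
real_imp, shadow_imp = imp[:p], imp[p:]

selected = np.where(real_imp > shadow_imp.max())[0]
print("tentatively relevant features:", selected)
```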
High-power parametric amplification of 11.8-fs laser pulses with carrier-envelope phase control.
Zinkstok, R Th; Witte, S; Hogervorst, W; Eikema, K S E
2005-01-01
Phase-stable parametric chirped-pulse amplification of ultrashort pulses from a carrier-envelope phase-stabilized mode-locked Ti:sapphire oscillator (11.0 fs) to 0.25 mJ/pulse at 1 kHz is demonstrated. Compression with a grating compressor and an LCD shaper yields near-Fourier-limited 11.8-fs pulses with an energy of 0.12 mJ. The amplifier is pumped by 532-nm pulses from a synchronized mode-locked laser, Nd:YAG amplifier system. This approach is shown to be promising for the next generation of ultrafast amplifiers aimed at producing terawatt-level phase-controlled few-cycle laser pulses.
Automated unsupervised multi-parametric classification of adipose tissue depots in skeletal muscle
Valentinitsch, Alexander; Karampinos, Dimitrios C.; Alizai, Hamza; Subburaj, Karupppasamy; Kumar, Deepak; Link, Thomas M.; Majumdar, Sharmila
2012-01-01
Purpose: To introduce and validate an automated unsupervised multi-parametric method for segmentation of the subcutaneous fat and muscle regions in order to determine subcutaneous adipose tissue (SAT) and intermuscular adipose tissue (IMAT) areas based on data from a quantitative chemical shift-based water-fat separation approach. Materials and Methods: Unsupervised standard k-means clustering was employed to define sets of similar features (k = 2) within the whole multi-modal image after the water-fat separation. The automated image processing chain was composed of three primary stages including tissue, muscle and bone region segmentation. The algorithm was applied to calf and thigh datasets to compute SAT and IMAT areas and was compared to a manual segmentation. Results: The IMAT area using the automatic segmentation had excellent agreement with the IMAT area using the manual segmentation for all the cases in the thigh (R2: 0.96) and for cases with up to moderate IMAT area in the calf (R2: 0.92). The group with the highest grade of muscle fat infiltration in the calf had the highest error in the inner SAT contour calculation. Conclusion: The proposed multi-parametric segmentation approach combined with quantitative water-fat imaging provides an accurate and reliable method for automated calculation of the SAT and IMAT areas, considerably reducing the total post-processing time. PMID:23097409
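A hedged sketch of the k = 2 clustering step on multi-parametric voxel features is given below; the two synthetic channels stand in for the water-fat separation outputs, and no tissue/muscle/bone staging is included.

```python
# Minimal sketch: k-means with k = 2 on multi-channel voxel features to separate
# fat-like from muscle-like tissue. Images are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w = 64, 64
fat_fraction = rng.uniform(0.0, 0.2, size=(h, w))
fat_fraction[:, :20] = rng.uniform(0.7, 0.95, size=(h, 20))   # subcutaneous-fat-like band
water_image = 1.0 - fat_fraction + 0.05 * rng.normal(size=(h, w))

# Stack the channels into one feature vector per voxel.
features = np.stack([fat_fraction, water_image], axis=-1).reshape(-1, 2)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
label_img = labels.reshape(h, w)

# Identify the "fat" cluster as the one with the higher mean fat fraction.
fat_cluster = int(fat_fraction.reshape(-1)[labels == 1].mean()
                  > fat_fraction.reshape(-1)[labels == 0].mean())
print("fat-cluster voxel count:", int((label_img == fat_cluster).sum()))
```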
Building and using a statistical 3D motion atlas for analyzing myocardial contraction in MRI
NASA Astrophysics Data System (ADS)
Rougon, Nicolas F.; Petitjean, Caroline; Preteux, Francoise J.
2004-05-01
We address the issue of modeling and quantifying myocardial contraction from 4D MR sequences, and present an unsupervised approach for building and using a statistical 3D motion atlas for the normal heart. This approach relies on a state-of-the-art variational non-rigid registration (NRR) technique using generalized information measures, which allows for robust intra-subject motion estimation and inter-subject anatomical alignment. The atlas is built from a collection of jointly acquired tagged and cine MR exams in short- and long-axis views. Subject-specific non-parametric motion estimates are first obtained by incremental NRR of tagged images onto the end-diastolic (ED) frame. Individual motion data are then transformed into the coordinate system of a reference subject using subject-to-reference mappings derived by NRR of cine ED images. Finally, principal component analysis of aligned motion data is performed for each cardiac phase, yielding a mean model and a set of eigenfields encoding kinematic variability. The latter define an organ-dedicated hierarchical motion basis which enables parametric motion measurement from arbitrary tagged MR exams. To this end, the atlas is transformed into subject coordinates by reference-to-subject NRR of ED cine frames. Atlas-based motion estimation is then achieved by parametric NRR of tagged images onto the ED frame, yielding a compact description of myocardial contraction during diastole.
Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.
2015-01-01
While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides an incredibly powerful, yet easy-to-use tool, removing impediments for putting theory into practice. PMID:26046009
Hyperbolic and semi-parametric models in finance
NASA Astrophysics Data System (ADS)
Bingham, N. H.; Kiesel, Rüdiger
2001-02-01
The benchmark Black-Scholes-Merton model of mathematical finance is parametric, based on the normal/Gaussian distribution. Its principal parametric competitor, the hyperbolic model of Barndorff-Nielsen, Eberlein and others, is briefly discussed. Our main theme is the use of semi-parametric models, incorporating the mean vector and covariance matrix as in the Markowitz approach, plus a non-parametric part, a scalar function incorporating features such as tail-decay. Implementation is also briefly discussed.
Volume-preserving normal forms of Hopf-zero singularity
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh
2013-10-01
A practical method is described for computing the unique generator of the algebra of first integrals associated with a large class of Hopf-zero singularities. The set of all volume-preserving classical normal forms of this singularity is introduced via a Lie algebra description. This is a maximal vector space of classical normal forms with first integral; this is why our approach works. Systems with a nonzero condition on their quadratic parts are considered. The algebra of all first integrals for any such system has a unique (modulo scalar multiplication) generator. The infinite level volume-preserving parametric normal forms of any nondegenerate perturbation within the Lie algebra of any such system are computed, where it can have rich dynamics. The associated unique generator of the algebra of first integrals is derived. The symmetry group of the infinite level normal forms is also discussed. Some necessary formulas are derived and applied to appropriately modified Rössler and generalized Kuramoto-Sivashinsky equations to demonstrate the applicability of our theoretical results. An approach (introduced by Iooss and Lombardi) is applied to find an optimal truncation for the first level normal forms of these examples with exponentially small remainders. The numerically suggested radius of convergence (for the first integral) associated with a hypernormalization step is discussed for the truncated first level normal forms of the examples. This is achieved by an efficient implementation of the results using Maple.
Automated Training of ReaxFF Reactive Force Fields for Energetics of Enzymatic Reactions.
Trnka, Tomáš; Tvaroška, Igor; Koča, Jaroslav
2018-01-09
Computational studies of the reaction mechanisms of various enzymes are nowadays based almost exclusively on hybrid QM/MM models. Unfortunately, the success of this approach strongly depends on the selection of the QM region, and computational cost is a crucial limiting factor. An interesting alternative is offered by empirical reactive molecular force fields, especially the ReaxFF potential developed by van Duin and co-workers. However, even though an initial parametrization of ReaxFF for biomolecules already exists, it does not provide the desired level of accuracy. We have conducted a thorough refitting of the ReaxFF force field to improve the description of reaction energetics. To minimize the human effort required, we propose a fully automated approach to generate an extensive training set comprised of thousands of different geometries and molecular fragments starting from a few model molecules. Electrostatic parameters were optimized with QM electrostatic potentials as the main target quantity, avoiding excessive dependence on the choice of reference atomic charges and improving robustness and transferability. The remaining force field parameters were optimized using the VD-CMA-ES variant of the CMA-ES optimization algorithm. This method is able to optimize hundreds of parameters simultaneously with unprecedented speed and reliability. The resulting force field was validated on a real enzymatic system, ppGalNAcT2 glycosyltransferase. The new force field offers excellent qualitative agreement with the reference QM/MM reaction energy profile, matches the relative energies of intermediate and product minima almost exactly, and reduces the overestimation of transition state energies by 27-48% compared with the previous parametrization.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper, a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions and linear constraints is introduced. The stochastic model is transformed into a deterministic multiple-objective nonlinear programming model by taking the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on the reference direction approach and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
NASA Astrophysics Data System (ADS)
Bakoban, Rana A.
2017-08-01
The coefficient of variation (CV) has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches for the estimation of the CV under type-II censored data from the extension exponential distribution (EED). Point and interval estimates of the CV are obtained using both maximum likelihood and parametric bootstrap techniques. A Bayesian approach using the MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.
Covariate analysis of bivariate survival data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Parametrization of DFTB3/3OB for Magnesium and Zinc for Chemical and Biological Applications
2015-01-01
We report the parametrization of the approximate density functional theory, DFTB3, for magnesium and zinc for chemical and biological applications. The parametrization strategy follows that established in previous work that parametrized several key main group elements (O, N, C, H, P, and S). This 3OB set of parameters can thus be used to study many chemical and biochemical systems. The parameters are benchmarked using both gas-phase and condensed-phase systems. The gas-phase results are compared to DFT (mostly B3LYP), ab initio (MP2 and G3B3), and PM6, as well as to a previous DFTB parametrization (MIO). The results indicate that DFTB3/3OB is particularly successful at predicting structures, including rather complex dinuclear metalloenzyme active sites, while being semiquantitative (with a typical mean absolute deviation (MAD) of ∼3–5 kcal/mol) for energetics. Single-point calculations with high-level quantum mechanics (QM) methods generally lead to very satisfying (a typical MAD of ∼1 kcal/mol) energetic properties. DFTB3/MM simulations for solution and two enzyme systems also lead to encouraging structural and energetic properties in comparison to available experimental data. The remaining limitations of DFTB3, such as the treatment of interaction between metal ions and highly charged/polarizable ligands, are also discussed. PMID:25178644
A Feature-based Approach to Big Data Analysis of Medical Images
Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.
2015-01-01
This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
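The indexing and density-estimation ideas can be sketched as follows with generic feature vectors; a k-d tree stands in for the paper's feature index, and the k-NN density formula used here is the textbook estimator rather than the authors' generative model.

```python
# Minimal sketch: put feature descriptors in a k-d tree for fast nearest-neighbor
# queries and form a simple k-NN density estimate from the k-th neighbor distance.
from math import gamma, pi
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
d, n = 8, 100_000
database = rng.normal(size=(n, d))                 # "indexed" feature descriptors
tree = cKDTree(database)

query = rng.normal(size=(5, d))
k = 20
dist, idx = tree.query(query, k=k)                 # k nearest neighbors per query point

# k-NN density estimate: f(x) ~ k / (n * V_d * r_k^d), with V_d the unit-ball volume.
unit_ball_volume = pi ** (d / 2) / gamma(d / 2 + 1)
r_k = dist[:, -1]
density = k / (n * unit_ball_volume * r_k ** d)
print(np.round(density, 6))
```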
Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald
2007-05-01
(R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
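For orientation, a reference-input Logan-type graphical analysis can be sketched as below; the time-activity curves are synthetic, the k2' term of the full reference Logan model is dropped for simplicity, and the choice t* = 20 min is an arbitrary assumption.

```python
# Minimal sketch of a Logan-type graphical analysis with a reference-region input:
# after time t*, the plot of integrated target activity over target activity versus
# integrated reference activity over target activity becomes roughly linear; its
# slope approximates the distribution volume ratio (DVR), with BP = DVR - 1.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.0, 60.0, 121)                              # minutes
c_ref = 10.0 * (np.exp(-0.08 * t) - np.exp(-0.8 * t))        # reference-region TAC
c_tgt = 14.0 * (np.exp(-0.05 * t) - np.exp(-0.6 * t))        # target-region TAC

int_ref = cumulative_trapezoid(c_ref, t, initial=0.0)
int_tgt = cumulative_trapezoid(c_tgt, t, initial=0.0)

t_star = 20.0
late = t >= t_star
x = int_ref[late] / c_tgt[late]
y = int_tgt[late] / c_tgt[late]

slope, intercept = np.polyfit(x, y, 1)                        # linear late-time fit
print(f"DVR ~ {slope:.2f}, BP ~ {slope - 1.0:.2f}")
```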
Spectral decompositions of multiple time series: a Bayesian non-parametric approach.
Macaro, Christian; Prado, Raquel
2014-01-01
We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated with the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
Marmarelis, Vasilis Z.; Berger, Theodore W.
2009-01-01
Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform presynaptic signals into postsynaptic signals. In order to use the two approaches synergistically, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of STP. PMID:18506609
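A Volterra model expresses the postsynaptic response as a sum of convolutions of the presynaptic input with kernels of increasing order. Below is a minimal sketch of a discrete second-order Volterra model driven by a toy spike train; the kernel shapes and memory length are illustrative assumptions, not the kernels estimated in the paper.

```python
import numpy as np

def volterra_output(x, k1, k2):
    """Discrete second-order Volterra model:
    y[n] = sum_m k1[m] x[n-m] + sum_{m1,m2} k2[m1,m2] x[n-m1] x[n-m2]."""
    M = len(k1)
    y = np.zeros_like(x, dtype=float)
    for n in range(M - 1, len(x)):
        past = x[n - M + 1:n + 1][::-1]          # x[n], x[n-1], ..., x[n-M+1]
        y[n] = k1 @ past + past @ k2 @ past
    return y

# Toy kernels: decaying first-order memory plus a weak second-order facilitation term
M = 16
k1 = np.exp(-np.arange(M) / 4.0)
k2 = 0.05 * np.outer(k1, k1)
spikes = (np.random.default_rng(2).random(500) < 0.1).astype(float)  # presynaptic train
psp = volterra_output(spikes, k1, k2)            # simulated postsynaptic signal
```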
Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Grassmann, Mariel; Ceulemans, Eva
2017-06-01
Change point detection in multivariate time series is a complex task since, next to the mean, the correlation structure of the monitored variables may also alter when change occurs. DeCon was recently developed to detect such changes in mean and/or correlation by combining a moving window approach and robust PCA. However, in the literature, several other methods have been proposed that employ other non-parametric tools: E-divisive, Multirank, and KCP. Since these methods use different statistical approaches, two issues need to be tackled. First, applied researchers may find it hard to appraise the differences between the methods. Second, a direct comparison of the relative performance of all these methods for capturing change points signaling correlation changes is still lacking. Therefore, we present the basic principles behind DeCon, E-divisive, Multirank, and KCP and the corresponding algorithms, to make them more accessible to readers. We further compared their performance through extensive simulations using the settings of Bulteel et al. (Biological Psychology, 98 (1), 29-42, 2014) implying changes in mean and in correlation structure and those of Matteson and James (Journal of the American Statistical Association, 109 (505), 334-345, 2014) implying different numbers of (noise) variables. KCP emerged as the best method in almost all settings. However, in case of more than two noise variables, only DeCon performed adequately in detecting correlation changes.
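To make the moving-window idea concrete, the sketch below scores each time point by how much the correlation matrix differs between the windows immediately before and after it. This is a deliberately simplified illustration of window-based correlation change detection, not an implementation of DeCon, E-divisive, Multirank, or KCP.

```python
import numpy as np

def corr_change_score(X, win=50):
    """Slide a window over a multivariate series and score each step by the
    Frobenius distance between correlation matrices of adjacent windows."""
    T, _ = X.shape
    scores = np.full(T, np.nan)
    for t in range(win, T - win):
        left = np.corrcoef(X[t - win:t].T)
        right = np.corrcoef(X[t:t + win].T)
        scores[t] = np.linalg.norm(right - left, ord="fro")
    return scores

rng = np.random.default_rng(3)
a = rng.normal(size=(300, 2))                                        # uncorrelated segment
b = rng.multivariate_normal([0, 0], [[1, .8], [.8, 1]], size=300)    # correlated segment
scores = corr_change_score(np.vstack([a, b]))
print(int(np.nanargmax(scores)))   # the peak should land near the true change at t = 300
```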
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
Evolution in totally constrained models: Schrödinger vs. Heisenberg pictures
NASA Astrophysics Data System (ADS)
Olmedo, Javier
2016-06-01
We study the relation between two evolution pictures that are currently considered for totally constrained theories. Both descriptions are based on Rovelli’s evolving constants approach, where one identifies a (possibly local) degree of freedom of the system as an internal time. This method is well understood classically in several situations. The purpose of this paper is to further analyze this approach at the quantum level. Concretely, we will compare the (Schrödinger-like) picture where the physical states evolve in time with the (Heisenberg-like) picture in which one defines parametrized observables (or evolving constants of the motion). We will show that in the particular situations considered in this paper (the parametrized relativistic particle and a spatially flat homogeneous and isotropic spacetime coupled to a massless scalar field) both descriptions are equivalent. We will finally comment on possible issues and on the genericness of the equivalence between both pictures.
NASA Astrophysics Data System (ADS)
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
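The core of such recursive identification is an exponentially weighted recursive least squares update, in which a forgetting factor discounts old samples so that drifting parameters can be tracked online. The sketch below shows the standard update equations on a toy time-varying AR(2) system; the kernelization, TARMA structure, and sliding-window bookkeeping of the paper are not reproduced, and all signal parameters are illustrative.

```python
import numpy as np

def ewrls(phi_stream, y_stream, n_params, lam=0.98, delta=100.0):
    """Exponentially weighted recursive least squares (forgetting factor lam):
    updates the parameter estimate one sample at a time."""
    theta = np.zeros(n_params)
    P = delta * np.eye(n_params)
    history = []
    for phi, y in zip(phi_stream, y_stream):
        K = P @ phi / (lam + phi @ P @ phi)        # gain vector
        theta = theta + K * (y - phi @ theta)      # prediction-error correction
        P = (P - np.outer(K, phi) @ P) / lam       # discounted covariance update
        history.append(theta.copy())
    return np.array(history)

# Toy slowly time-varying AR(2) system with a drifting first coefficient
rng = np.random.default_rng(4)
T, y = 1000, np.zeros(1002)
a1 = np.linspace(1.2, 0.8, T)
for t in range(T):
    y[t + 2] = a1[t] * y[t + 1] - 0.5 * y[t] + 0.1 * rng.normal()
regs = [np.array([y[t + 1], y[t]]) for t in range(T)]
est = ewrls(regs, y[2:], n_params=2)               # tracks [a1(t), -0.5] over time
```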
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
NASA Astrophysics Data System (ADS)
Sartori, G.; Valente, G.
2003-02-01
Functions which are equivariant or invariant under the transformations of a compact linear group G acting in a Euclidean space ℝ^n can profitably be studied as functions defined in the orbit space of the group. The orbit space is the union of a finite set of strata, which are semialgebraic manifolds formed by the G-orbits with the same orbit-type. In this paper, we provide a simple recipe to obtain rational parametrizations of the strata. Our results can be easily exploited, in many physical contexts where the study of equivariant or invariant functions is important, for instance in the determination of patterns of spontaneous symmetry breaking, in the analysis of phase spaces and structural phase transitions (Landau theory), in equivariant bifurcation theory, in crystal field theory and in most areas where use is made of symmetry-adapted functions. A physically significant example of utilization of the recipe is given, related to spontaneous polarization in chiral biaxial liquid crystals, where the advantages with respect to previous heuristic approaches are shown.
Bayesian Local Contamination Models for Multivariate Outliers
Page, Garritt L.; Dunson, David B.
2013-01-01
In studies where data are generated from multiple locations or sources it is common for there to exist observations that are quite unlike the majority. Motivated by the application of establishing a reference value in an inter-laboratory setting when outlying labs are present, we propose a local contamination model that is able to accommodate unusual multivariate realizations in a flexible way. The proposed method models the process level of a hierarchical model using a mixture with a parametric component and a possibly nonparametric contamination. Much of the flexibility in the methodology is achieved by allowing varying random subsets of the elements in the lab-specific mean vectors to be allocated to the contamination component. Computational methods are developed and the methodology is compared to three other possible approaches using a simulation study. We apply the proposed method to a NIST/NOAA sponsored inter-laboratory study which motivated the methodological development. PMID:24363465
Problems of low-parameter equations of state
NASA Astrophysics Data System (ADS)
Petrik, G. G.
2017-11-01
The paper focuses on a system approach to problems of low-parametric equations of state (EOS). It is a continuation of investigations in the field of substantiated prognosis of properties on two levels, molecular and thermodynamic. Two sets of low-parameter EOS have been considered, based on two very simple molecular-level models. The first one consists of EOS of van der Waals type (a modification of the van der Waals EOS proposed for spheres). The main problem of these EOS is a weak connection with the micro-level, which raises many uncertainties. The second group of EOS has been derived by the author independently of the ideas of van der Waals, based on the model of interacting point centers (IPC). All the parameters of these EOS have a physical meaning and are associated with the manifestation of attractive and repulsive forces. The relationship between them is found to be the control parameter of the thermodynamic level. In this case, the IPC EOS reduces to a one-parameter family. It is shown that many EOS of vdW type can be included in the framework of the point-center model. Simultaneously, all their parameters acquire a physical meaning.
Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F
2014-07-01
The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols, and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied on a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.
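For contrast with the standard approach mentioned above, the snippet below computes the conventional proportional-area emphysema index: the fraction of lung voxels whose attenuation falls below a fixed threshold, commonly taken as -950 HU. The toy volume and mask are illustrative; this is the baseline the paper improves upon, not the hidden Markov measure field model itself.

```python
import numpy as np

def emphysema_index(hu_volume, lung_mask, threshold=-950):
    """Proportional-area estimate: fraction of lung voxels below a fixed
    attenuation threshold (the widely used low-attenuation-area measure)."""
    lung = hu_volume[lung_mask]
    return float(np.mean(lung < threshold))

rng = np.random.default_rng(5)
vol = rng.normal(-860, 50, size=(64, 64, 64))   # toy parenchymal HU values
mask = np.ones(vol.shape, dtype=bool)           # stand-in for a real lung segmentation
print(f"LAA%: {100 * emphysema_index(vol, mask):.1f}")
```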
Automatic firearm class identification from cartridge cases
NASA Astrophysics Data System (ADS)
Kamalakannan, Sridharan; Mann, Christopher J.; Bingham, Philip R.; Karnowski, Thomas P.; Gleason, Shaun S.
2011-03-01
We present a machine vision system for automatic identification of the class of firearms by extracting and analyzing two significant properties from spent cartridge cases, namely the Firing Pin Impression (FPI) and the Firing Pin Aperture Outline (FPAO). Within the framework of the proposed machine vision system, a white light interferometer is employed to image the head of the spent cartridge cases. As a first step of the algorithmic procedure, the Primer Surface Area (PSA) is detected using a circular Hough transform. Once the PSA is detected, a customized statistical region-based parametric active contour model is initialized around the center of the PSA and evolved to segment the FPI. Subsequently, the scaled version of the segmented FPI is used to initialize a customized Mumford-Shah based level set model in order to segment the FPAO. Once the shapes of FPI and FPAO are extracted, a shape-based level set method is used in order to compare these extracted shapes to an annotated dataset of FPIs and FPAOs from varied firearm types. A total of 74 cartridge case images non-uniformly distributed over five different firearms are processed using the aforementioned scheme and the promising nature of the results (95% classification accuracy) demonstrate the efficacy of the proposed approach.
Biowaste home composting: experimental process monitoring and quality control.
Tatàno, Fabio; Pagliaro, Giacomo; Di Giovanni, Paolo; Floriani, Enrico; Mangani, Filippo
2015-04-01
Because home composting is a prevention option for managing biowaste at the local level, the objective of the present study was to contribute to knowledge of the process evolution and compost quality that can be expected and obtained, respectively, with this decentralized option. In this study, organized as the research portion of a provincial project on home composting in the territory of Pesaro-Urbino (Central Italy), four experimental composters were first initiated and temporally monitored. Second, two small sub-sets of selected provincial composters (directly operated by households involved in the project) underwent quality control on their compost products at two different temporal steps. The monitored experimental composters showed overall decreasing profiles versus composting time for moisture, organic carbon, and C/N, as well as overall increasing profiles for electrical conductivity and total nitrogen, which represented qualitative indications of progress in the process. Comparative evaluations of the monitored experimental composters also suggested some interactions in home composting, i.e., high C/N ratios limiting organic matter decomposition rates and final humification levels; high moisture contents restricting the internal temperature regime; nearly horizontal phosphorus and potassium evolutions contributing to limit the rates of increase in electrical conductivity; and prolonged biowaste additions contributing to limit the rate of decrease in moisture. The measures of parametric data variability in the two sub-sets of controlled provincial composters showed decreased variability in moisture, organic carbon, and C/N from the seventh to fifteenth month of home composting, as well as increased variability in electrical conductivity, total nitrogen, and humification rate, which could be considered compatible with the respective nature of decreasing and increasing parameters during composting. The modeled parametric kinetics in the monitored experimental composters, along with the evaluation of the parametric central tendencies in the sub-sets of controlled provincial composters, all indicate that 12-15 months is a suitable duration for the appropriate development of home composting in final and simultaneous compliance with typical reference limits. Copyright © 2014 Elsevier Ltd. All rights reserved.
Intensity and temporal noise characteristics in femtosecond optical parametric amplifiers.
Chen, Wei; Fan, Jintao; Ge, Aichen; Song, Huanyu; Song, Youjian; Liu, Bowen; Chai, Lu; Wang, Chingyue; Hu, Minglie
2017-12-11
We characterize the relative intensity noise (RIN) and relative timing jitter (RTJ) between the signal and pump pulses of optical parametric amplifiers (OPAs) seeded by three different seed sources. Compared to a white-light continuum (WLC) seeded and an optical parametric generator (OPG) seeded OPA, the narrowband CW-seeded OPA exhibits the lowest root-mean-square (RMS) RIN and RTJ of 0.79% and 0.32 fs, respectively, integrated from 1 kHz to the Nyquist frequency of 1.25 MHz. An improved numerical model based on a forward Maxwell equation (FME) is built to investigate how the noise of the pump and seed transfers to the intensity and temporal fluctuations of the resulting OPAs. Both the experimental and numerical studies indicate that the low noise level of the narrowband CW-seeded OPA is attributed to the elimination of the RIN and RTJ coupled from the noise of the seed source, which is one of the important contributions to RIN and timing jitter in the other two OPAs. The approach of achieving a lower noise level from this CW-seeded OPA by driving it close to saturation is also discussed with the same numerical model.
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric mapping approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
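To illustrate why robust regression matters at a single voxel, the sketch below contrasts ordinary least squares with Huber M-estimation (iteratively reweighted least squares) when a few subjects are contaminated by mis-registration. It uses statsmodels' RLM as a stand-in; the actual robust biological parametric mapping pipeline and its inference machinery are not reproduced, and the simulated effect sizes are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)                     # e.g. a structural metric at one voxel
y = 0.5 * x + rng.normal(scale=0.3, size=n)
y[:5] += 8.0                               # a few grossly mis-registered outlier subjects

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # iteratively reweighted LS
print(ols.params, rlm.params)              # the robust slope stays near the true 0.5
```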
Finding Rational Parametric Curves of Relative Degree One or Two
ERIC Educational Resources Information Center
Boyles, Dave
2010-01-01
A plane algebraic curve, the complete set of solutions to a polynomial equation: f(x, y) = 0, can in many cases be drawn using parametric equations: x = x(t), y = y(t). Using algebra, attempting to parametrize by means of rational functions of t, one discovers quickly that it is not the degree of f but the "relative degree," that describes how…
NASA Astrophysics Data System (ADS)
Echeverria, Alex; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos
2016-10-01
Context. The best precision that can be achieved to estimate the location of a stellar-like object is a topic of permanent interest in the astrometric community. Aims: We analyze bounds for the best position estimation of a stellar-like object on a CCD detector array in a Bayesian setting where the position is unknown, but where we have access to a prior distribution. In contrast to a parametric setting where we estimate a parameter from observations, the Bayesian approach estimates a random object (i.e., the position is a random variable) from observations that are statistically dependent on the position. Methods: We characterize the Bayesian Cramér-Rao (CR) bound that limits the minimum mean square error (MMSE) of the best estimator of the position of a point source on a linear CCD-like detector, as a function of the properties of the detector, the source, and the background. Results: We quantify and analyze the increase in astrometric performance from the use of a prior distribution of the object position, which is not available in the classical parametric setting. This gain is shown to be significant for various observational regimes, in particular in the case of faint objects or when the observations are taken under poor conditions. Furthermore, we present numerical evidence that the MMSE estimator of this problem tightly achieves the Bayesian CR bound. This is a remarkable result, demonstrating that all the performance gains presented in our analysis can be achieved with the MMSE estimator. Conclusions: The Bayesian CR bound can be used as a benchmark indicator of the expected maximum positional precision of a set of astrometric measurements in which prior information can be incorporated. This bound can be achieved through the conditional mean estimator, in contrast to the parametric case where no unbiased estimator precisely reaches the CR bound.
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach for estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is discussed by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the parameters s (intrinsic growth rate of predators) and m (prey reserve). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true noise-free data. It is found that the system dynamics with the true set of parameter values is similar to that with the estimated parameter values. Numerical simulations are presented to substantiate the analytical findings.
Bim Automation: Advanced Modeling Generative Process for Complex Structures
NASA Astrophysics Data System (ADS)
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational B-Splines (NURBS) and multiple levels of detail (Mixed and Reverse LoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of the modern structure for the courtyard of West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
An interactive local flattening operator to support digital investigations on artwork surfaces.
Pietroni, Nico; Massimiliano, Corsini; Cignoni, Paolo; Scopigno, Roberto
2011-12-01
Analyzing either high-frequency shape detail or any other 2D field (scalar or vector) embedded over a 3D geometry is a complex task, since detaching the detail from the overall shape can be tricky. An alternative approach is to move to 2D space, reducing shape reasoning to easier image-processing techniques. In this paper we propose a novel framework for the analysis of 2D information distributed over 3D geometry, based on a locally smooth parametrization technique that allows us to treat local 3D data in terms of image content. The proposed approach has been implemented as a sketch-based system that allows the user to design, with a few gestures, a set of (possibly overlapping) parametrizations of rectangular portions of the surface. We demonstrate that, due to the locality of the parametrization, the distortion is under an acceptable threshold, while discontinuities can be avoided since the parametrized geometry is always homeomorphic to a disk. We show the effectiveness of the proposed technique in solving specific Cultural Heritage (CH) tasks: the analysis of chisel marks over the surface of an unfinished sculpture and the local comparison of multiple photographs mapped over the surface of an artwork. For this very difficult task, we believe that our framework and the corresponding tool are the first steps toward a computer-based shape reasoning system, able to support CH scholars with a medium they are more used to. © 2011 IEEE
Software for Managing Parametric Studies
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian
2003-01-01
The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
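ILab itself is written in PERL; purely as an illustration of the kind of script generation described above, the Python sketch below expands a small parameter matrix into per-run directories, input files, and shell scripts. The parameter names, file names, and the 'solver' binary are hypothetical placeholders, not ILab's actual outputs.

```python
from itertools import product
from pathlib import Path

# Hypothetical sweep: each parameter combination gets its own directory and run script.
params = {"mach": [0.6, 0.7, 0.8], "alpha": [0, 2, 4]}

for mach, alpha in product(params["mach"], params["alpha"]):
    run_dir = Path(f"run_m{mach}_a{alpha}")
    run_dir.mkdir(exist_ok=True)
    (run_dir / "input.cfg").write_text(f"mach = {mach}\nalpha = {alpha}\n")
    (run_dir / "run.sh").write_text(
        "#!/bin/sh\n"
        f"cd {run_dir.name}\n"                    # scripts are launched from the parent dir
        "solver input.cfg > solver.log 2>&1\n"    # 'solver' is a placeholder binary
    )
```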
Herzberg, Ibi; Jasinska, Anna; García, Jenny; Jawaheer, Damini; Service, Susan; Kremeyer, Barbara; Duque, Constanza; Parra, María V; Vega, Jorge; Ortiz, Daniel; Carvajal, Luis; Polanco, Guadalupe; Restrepo, Gabriel J; López, Carlos; Palacio, Carlos; Levinson, Matthew; Aldana, Ileana; Mathews, Carol; Davanzo, Pablo; Molina, Julio; Fournier, Eduardo; Bejarano, Julio; Ramírez, Magui; Ortiz, Carmen Araya; Araya, Xinia; Sabatti, Chiara; Reus, Victor; Macaya, Gabriel; Bedoya, Gabriel; Ospina, Jorge; Freimer, Nelson; Ruiz-Linares, Andrés
2006-11-01
We performed a whole genome microsatellite marker scan in six multiplex families with bipolar (BP) mood disorder ascertained in Antioquia, a historically isolated population from North West Colombia. These families were characterized clinically using the approach employed in independent ongoing studies of BP in the closely related population of the Central Valley of Costa Rica. The most consistent linkage results from parametric and non-parametric analyses of the Colombian scan involved markers on 5q31-33, a region implicated by the previous studies of BP in Costa Rica. Because of these concordant results, a follow-up study with additional markers was undertaken in an expanded set of Colombian and Costa Rican families; this provided a genome-wide significant evidence of linkage of BPI to a candidate region of approximately 10 cM in 5q31-33 (maximum non-parametric linkage score=4.395, P<0.00004). Interestingly, this region has been implicated in several previous genetic studies of schizophrenia and psychosis, including disease association with variants of the enthoprotin and gamma-aminobutyric acid receptor genes.
Estimating hazard ratios in cohort data with missing disease information due to death.
Binder, Nadine; Herrnböck, Anne-Sophie; Schumacher, Martin
2017-03-01
In clinical and epidemiological studies information on the primary outcome of interest, that is, the disease status, is usually collected at a limited number of follow-up visits. The disease status can often only be retrieved retrospectively in individuals who are alive at follow-up, but will be missing for those who died before. Right-censoring the death cases at the last visit (ad-hoc analysis) yields biased hazard ratio estimates of a potential risk factor, and the bias can be substantial and occur in either direction. In this work, we investigate three different approaches that use the same likelihood contributions derived from an illness-death multistate model in order to more adequately estimate the hazard ratio by including the death cases into the analysis: a parametric approach, a penalized likelihood approach, and an imputation-based approach. We investigate to which extent these approaches allow for an unbiased regression analysis by evaluating their performance in simulation studies and on a real data example. In doing so, we use the full cohort with complete illness-death data as reference and artificially induce missing information due to death by setting discrete follow-up visits. Compared to an ad-hoc analysis, all considered approaches provide less biased or even unbiased results, depending on the situation studied. In the real data example, the parametric approach is seen to be too restrictive, whereas the imputation-based approach could almost reconstruct the original event history information. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nair, Ajay K; Sasidharan, Arun; John, John P; Mehrotra, Seema; Kutty, Bindu M
2016-01-01
The present study describes the development of a neurocognitive paradigm: "Assessing Neurocognition via Gamified Experimental Logic" (ANGEL), for performing the parametric evaluation of multiple neurocognitive functions simultaneously. ANGEL employs an audiovisual sensorimotor design for the acquisition of multiple event-related potentials (ERPs): the C1, P50, MMN, N1, N170, P2, N2pc, LRP, P300, and ERN. The ANGEL paradigm allows assessment of 10 neurocognitive variables over the course of three "game" levels of increasing complexity, ranging from simple passive observation to complex discrimination and response in the presence of multiple distractors. The paradigm allows assessment of several levels of rapid decision making: speeded-up response vs. response inhibition; responses to easy vs. difficult tasks; responses based on gestalt perception of clear vs. ambiguous stimuli; and finally, responses with set shifting during challenging tasks. The paradigm has been tested using 18 healthy participants of both sexes, and the possibilities of varied data analyses are presented in this paper. The ANGEL approach provides an ecologically valid assessment (as compared to existing tools) that quickly yields a very rich dataset and helps to assess multiple ERPs that can be studied extensively to assess cognitive functions in health and disease conditions.
One-dimensional statistical parametric mapping in Python.
Pataky, Todd C
2012-01-01
Statistical parametric mapping (SPM) is a topological methodology for detecting field changes in smooth n-dimensional continua. Many classes of biomechanical data are smooth and contained within discrete bounds and as such are well suited to SPM analyses. The current paper accompanies release of 'SPM1D', a free and open-source Python package for conducting SPM analyses on a set of registered 1D curves. Three example applications are presented: (i) kinematics, (ii) ground reaction forces and (iii) contact pressure distribution in probabilistic finite element modelling. In addition to offering a high-level interface to a variety of common statistical tests like t tests, regression and ANOVA, SPM1D also emphasises fundamental concepts of SPM theory through stand-alone example scripts. Source code and documentation are available at: www.tpataky.net/spm1d/.
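As a taste of what an SPM-style analysis of registered 1D curves involves, the sketch below computes a pointwise two-sample t statistic across the nodes of two sets of curves using plain NumPy. The spm1d package then performs inference on such a statistic field using random field theory; the curve shapes and sample sizes here are illustrative assumptions rather than spm1d code.

```python
import numpy as np

def pointwise_t(curves_a, curves_b):
    """Two-sample t statistic at every node of registered 1D curves
    (arrays of shape trials x nodes)."""
    ma, mb = curves_a.mean(0), curves_b.mean(0)
    va, vb = curves_a.var(0, ddof=1), curves_b.var(0, ddof=1)
    na, nb = len(curves_a), len(curves_b)
    return (ma - mb) / np.sqrt(va / na + vb / nb)

rng = np.random.default_rng(7)
nodes = np.linspace(0, 1, 101)
A = rng.normal(size=(10, 101)) + np.sin(2 * np.pi * nodes)        # e.g. joint angle curves
B = rng.normal(size=(10, 101)) + 0.7 * np.sin(2 * np.pi * nodes)
t_field = pointwise_t(A, B)          # the 1D statistic field on which SPM inference acts
```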
Parametric Shape Optimization of Lens-Focused Piezoelectric Ultrasound Transducers.
Thomas, Gilles P L; Chapelon, Jean-Yves; Bera, Jean-Christophe; Lafon, Cyril
2018-05-01
Focused transducers composed of a flat piezoelectric ceramic coupled with an acoustic lens present an economical alternative to curved piezoelectric ceramics and are already in use in a variety of fields. Using a displacement/pressure (u/p) mixed finite element formulation combined with parametric level-set functions to implicitly define the boundaries between the materials and the fluid-structure interface, a method to optimize the shape of acoustic lenses made of either one or multiple materials is presented. With that method, two 400 kHz focused transducers using acoustic lenses were designed and built with different rapid prototyping methods, one of them made with a combination of two materials, and experimental measurements of the pressure field around the focal point are in good agreement with the presented model.
Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.
Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I
2018-06-26
The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data in the fields of medical and environmental sciences. This work presents a case study in which data of thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the MLE is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
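For left-censored residue data, the MLE treats each non-detect as contributing the probability of falling below the detection limit rather than a substituted value. The sketch below fits a lognormal by maximum likelihood to toy residue data with a single limit of detection; the distributional choice, detection limit, and simulated concentrations are assumptions for illustration, not the paper's thiamethoxam data.

```python
import numpy as np
from scipy import stats, optimize

def censored_lognormal_mle(detects, lod, n_censored):
    """MLE for lognormal data left-censored at one limit of detection:
    detected values contribute the density, non-detects the CDF at the LOD."""
    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll = stats.norm.logpdf(np.log(detects), mu, sigma).sum()
        ll += n_censored * stats.norm.logcdf(np.log(lod), mu, sigma)
        return -ll
    res = optimize.minimize(nll, x0=[np.log(detects).mean(), 0.0])
    mu, sigma = res.x[0], np.exp(res.x[1])
    return np.exp(mu + sigma**2 / 2)          # estimated mean on the original scale

rng = np.random.default_rng(8)
conc = rng.lognormal(mean=-3.0, sigma=1.0, size=500)   # toy residue concentrations (ppm)
lod = 0.02
print(censored_lognormal_mle(conc[conc >= lod], lod, int((conc < lod).sum())))
```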
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors choose a treatment on the basis of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture oriented to clinical application. The whole system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of bone, parametric modeling of the fracture face, parametric modeling of the fixation screws and fixation positions, and input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting and batch processing operation. The post-processing module included extraction and display of batch processing operation results, image generation for batch processing operation, optimization program operation and optimization result display. The system implemented the complete workflow, from input of fracture parameters to output of the optimal fixation plan according to a specific patient's real fracture parameters and the optimization rules, which demonstrated the effectiveness of the system. Meanwhile, the system had a friendly interface and simple operation, and its functionality could be improved quickly by modifying individual modules.
Spatial-temporal event detection in climate parameter imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenna, Sean Andrew; Gutierrez, Karen A.
Previously developed techniques that comprise statistical parametric mapping, with applications focused on human brain imaging, are examined and tested here for new applications in anomaly detection within remotely sensed imagery. Two approaches to analysis are developed: online, regression-based anomaly detection and conditional differences. These approaches are applied to two example spatial-temporal data sets: data simulated with a Gaussian field deformation approach and weekly NDVI images derived from global satellite coverage. Results indicate that anomalies can be identified in spatial-temporal data with the regression-based approach. Additionally, La Niña and El Niño climatic conditions are used as different stimuli applied to the earth, and this comparison shows that El Niño conditions lead to significant decreases in NDVI in both the Amazon Basin and in Southern India.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
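The non-parametric bootstrap approach described above refits the time-to-event distribution on resampled patients, so that the fitted parameters (and their correlation) vary across probabilistic sensitivity analysis draws. The sketch below illustrates this for a Weibull fit to simulated event times; the distribution choice, sample size, and number of bootstrap replicates are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
patient_times = rng.weibull(1.5, size=200) * 12.0        # toy time-to-event data (months)

def fit_weibull(times):
    shape, _, scale = stats.weibull_min.fit(times, floc=0)
    return shape, scale

# Non-parametric bootstrap: refit the distribution on resampled patients so that
# parameter uncertainty (and its correlation) propagates into the patient-level PSA.
boot = np.array([fit_weibull(rng.choice(patient_times, size=len(patient_times)))
                 for _ in range(500)])
print(boot.mean(axis=0))            # average shape/scale across bootstrap replicates
print(np.cov(boot.T))               # correlated parameter draws to feed into the PSA
```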
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, there is not a built-in method for defining the high-lift components in OpenVSP in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper seeks instead to utilize OpenVSP's existing capabilities, and establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model for the Energy-Efficient Transport (EET) AR12 wind-tunnel model has been created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.
Quantum annealing with parametrically driven nonlinear oscillators
NASA Astrophysics Data System (ADS)
Puri, Shruti
While progress has been made towards building Ising machines to solve hard combinatorial optimization problems, quantum speedups have so far been elusive. Furthermore, protecting annealers against decoherence and achieving long-range connectivity remain important outstanding challenges. With the hope of overcoming these challenges, I introduce a new paradigm for quantum annealing that relies on continuous variable states. Unlike the more conventional approach based on two-level systems, in this approach, quantum information is encoded in two coherent states that are stabilized by parametrically driving a nonlinear resonator. I will show that a fully connected Ising problem can be mapped onto a network of such resonators, and outline an annealing protocol based on adiabatic quantum computing. During the protocol, the resonators in the network evolve from vacuum to coherent states representing the ground state configuration of the encoded problem. In short, the system evolves between two classical states following non-classical dynamics. As will be supported by numerical results, this new annealing paradigm leads to superior noise resilience. Finally, I will discuss a realistic circuit QED realization of an all-to-all connected network of parametrically driven nonlinear resonators. The continuous variable nature of the states in the large Hilbert space of the resonator provides new opportunities for exploring quantum phase transitions and non-stoquastic dynamics during the annealing schedule.
Bower, Hannah; Andersson, Therese M-L; Crowther, Michael J; Dickman, Paul W; Lambe, Mats; Lambert, Paul C
2018-04-01
Expected or reference mortality rates are commonly used in the calculation of measures such as relative survival in population-based cancer survival studies and standardized mortality ratios. These expected rates are usually presented according to age, sex, and calendar year. In certain situations, stratification of expected rates by other factors is required to avoid potential bias if interest lies in quantifying measures according to such factors as, for example, socioeconomic status. If data are not available on a population level, information from a control population could be used to adjust expected rates. We present two approaches for adjusting expected mortality rates using information from a control population: a Poisson generalized linear model and a flexible parametric survival model. We used a control group from BCBaSe, a register-based, matched breast cancer cohort in Sweden with diagnoses between 1992 and 2012, to illustrate the two methods using socioeconomic status as a risk factor of interest. Results showed that the Poisson and flexible parametric survival approaches estimate similar adjusted mortality rates according to socioeconomic status. Additional uncertainty involved in the methods used to estimate stratified, expected mortality rates described in this study can be accounted for using a parametric bootstrap, but this might make little difference if using a large control population.
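A common way to implement the Poisson generalized linear model approach is to model death counts with a log person-time offset, so that fitted rate ratios (for example by socioeconomic status) can be used to adjust the expected rates. The sketch below does this with statsmodels on a small toy table; the covariate structure, rates, and person-years are simulated assumptions, not the BCBaSe data.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy control-population table: deaths and person-years by age band, sex and SES.
rng = np.random.default_rng(10)
rows = list(itertools.product([60, 70, 80], ["F", "M"], ["low", "high"]))
df = pd.DataFrame(rows, columns=["age", "sex", "ses"])
df["pyrs"] = rng.uniform(5e3, 2e4, len(df))
true_rate = 0.002 * np.exp(0.08 * (df.age - 60)) * np.where(df.ses == "low", 1.3, 1.0)
df["deaths"] = rng.poisson(true_rate * df.pyrs)

# Poisson GLM with a log person-years offset: coefficients are log rate ratios.
fit = smf.glm("deaths ~ C(age) + sex + ses", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df.pyrs)).fit()
print(np.exp(fit.params))            # e.g. the SES adjustment factor for expected rates
```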
Deep learning for studies of galaxy morphology
NASA Astrophysics Data System (ADS)
Tuccillo, D.; Huertas-Company, M.; Decencière, E.; Velasco-Forero, S.
2017-06-01
Establishing accurate morphological measurements of galaxies in a reasonable amount of time for future big-data surveys such as EUCLID, the Large Synoptic Survey Telescope or the Wide Field Infrared Survey Telescope is a challenge. Because of its high level of abstraction with little human intervention, deep learning appears to be a promising approach. Deep learning is a rapidly growing discipline that models high-level patterns in data as complex multilayered networks. In this work we test the ability of deep convolutional networks to provide parametric properties of Hubble Space Telescope-like galaxies (half-light radius, Sérsic index, total flux, etc.). We simulate a set of galaxies, including the point spread function and realistic noise from the CANDELS survey, and try to recover the main galaxy parameters using deep learning. We compare the results with those obtained with the commonly used profile-fitting software GALFIT, showing that our method obtains results at least as good as those of GALFIT but, once trained, is roughly five hundred times faster.
A Nonparametric Approach to Estimate Classification Accuracy and Consistency
ERIC Educational Resources Information Center
Lathrop, Quinn N.; Cheng, Ying
2014-01-01
When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…
Minimization of Basis Risk in Parametric Earthquake Cat Bonds
NASA Astrophysics Data System (ADS)
Franco, G.
2009-12-01
A catastrophe -cat- bond is an instrument used by insurance and reinsurance companies, by governments or by groups of nations to cede catastrophic risk to the financial markets, which are capable of supplying cover for highly destructive events, surpassing the typical capacity of traditional reinsurance contracts. Parametric cat bonds, a specific type of cat bond, use trigger mechanisms or indices that depend on physical event parameters published by respected third parties in order to determine whether part or all of the bond principal is to be paid for a certain event. First-generation cat bonds, or cat-in-a-box bonds, display a trigger mechanism that consists of a set of geographic zones in which certain conditions need to be met by an earthquake’s magnitude and depth in order to trigger payment of the bond principal. Second-generation cat bonds use an index formulation that typically consists of a sum of products of a set of weights by a polynomial function of the ground motion variables reported by a geographically distributed seismic network. These instruments are especially appealing to developing countries with incipient insurance industries wishing to cede catastrophic losses to the financial markets, because the payment trigger mechanism is transparent and does not involve the parties ceding or accepting the risk, significantly reducing moral hazard. In order to be successful in the market, however, parametric cat bonds have typically been required to specify relatively simple trigger conditions. The consequence of such simplifications is an increase in basis risk. This risk represents the possibility that the trigger mechanism fails to accurately capture the actual losses of a catastrophic event, namely that it does not trigger for a highly destructive event or, vice versa, that a payment of the bond principal is caused by an event that produced insignificant losses. The first case disfavors the sponsor, who was seeking cover for its losses, while the second disfavors the investor, who loses part of the investment without a reasonable cause. A streamlined and fairly automated methodology has been developed to design parametric triggers that minimize the basis risk while still maintaining their level of relative simplicity. Basis risk is minimized in both first- and second-generation parametric cat bonds through an optimization procedure that aims to find the most appropriate magnitude thresholds, geographic zones, and weight index values. Sensitivity analyses to different design assumptions show that first-generation cat bonds are typically affected by a large negative basis risk, namely the risk that the bond will not trigger for events within the risk level transferred, unless a sufficiently small geographic resolution is selected to define the trigger zones. Second-generation cat bonds in contrast display a bias towards negative or positive basis risk depending on the degree of the polynomial used as well as on other design parameters. Two examples are presented: the construction of a first-generation parametric trigger mechanism for Costa Rica and the design of a second-generation parametric index for Japan.
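To make the second-generation index formulation concrete, here is a toy sketch of an index computed as a weighted sum of a polynomial function of the ground motion reported at a few stations, mapped to a payout between attachment and exhaustion points. All weights, coefficients, and thresholds are hypothetical illustrations, not values from any actual bond.

```python
import numpy as np

def parametric_index(pga, weights, coeffs=(0.0, 1.0, 2.0)):
    """index = sum_i w_i * (c0 + c1 * pga_i + c2 * pga_i**2), with pga_i the
    peak ground acceleration reported at station i."""
    poly = coeffs[0] + coeffs[1] * pga + coeffs[2] * pga**2
    return float(weights @ poly)

def payout(index, attachment=1.0, exhaustion=3.0):
    """Fraction of principal paid out, linear between attachment and exhaustion."""
    return float(np.clip((index - attachment) / (exhaustion - attachment), 0.0, 1.0))

weights = np.array([0.5, 0.3, 0.2])    # weights that an optimization would tune for basis risk
pga = np.array([0.35, 0.20, 0.05])     # hypothetical station readings, in g
print(payout(parametric_index(pga, weights)))
```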
NASA Astrophysics Data System (ADS)
Kazmi, K. R.; Khan, F. A.
2008-01-01
In this paper, using the proximal-point mapping technique of P-η-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving P-η-accretive mappings in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as extensions of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasivariational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].
The Dundee Ready Education Environment Measure (DREEM): a review of its adoption and use.
Miles, Susan; Swift, Louise; Leinster, Sam J
2012-01-01
The Dundee Ready Education Environment Measure (DREEM) was published in 1997 as a tool to evaluate educational environments of medical schools and other health training settings and a recent review concluded that it was the most suitable such instrument. This study aimed to review the settings and purposes to which the DREEM has been applied and the approaches used to analyse and report it, with a view to guiding future users towards appropriate methodology. A systematic literature review was conducted using the Web of Knowledge databases of all articles reporting DREEM data between 1997 and 4 January 2011. The review found 40 publications, using data from 20 countries. DREEM is used in evaluation for diagnostic purposes, comparison between different groups and comparison with ideal/expected scores. A variety of non-parametric and parametric statistical methods have been applied, but their use is inconsistent. DREEM has been used internationally for different purposes and is regarded as a useful tool by users. However, reporting and analysis differs between publications. This lack of uniformity makes comparison between institutions difficult. Most users of DREEM are not statisticians and there is a need for informed guidelines on its reporting and statistical analysis.
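As an illustration of the analysis inconsistency the review points to, the sketch below contrasts a parametric and a non-parametric two-group comparison of DREEM total scores in Python; the cohorts and score distributions are simulated, not taken from any of the reviewed studies.

```python
# Illustrative comparison of a parametric and a non-parametric test on
# simulated DREEM total scores (0-200 scale) for two hypothetical cohorts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
school_a = rng.normal(118, 22, size=150).clip(0, 200)   # simulated totals
school_b = rng.normal(126, 20, size=140).clip(0, 200)

t, p_t = stats.ttest_ind(school_a, school_b, equal_var=False)  # parametric (Welch)
u, p_u = stats.mannwhitneyu(school_a, school_b)                # non-parametric
print(f"Welch t-test:   t = {t:.2f}, p = {p_t:.4f}")
print(f"Mann-Whitney U: U = {u:.0f}, p = {p_u:.4f}")
```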
Combining Search Engines for Comparative Proteomics
Tabb, David
2012-01-01
Many proteomics laboratories have found spectral counting to be an ideal way to recognize biomarkers that differentiate cohorts of samples. This approach assumes that proteins that differ in quantity between samples will generate different numbers of identifiable tandem mass spectra. Increasingly, researchers are employing multiple search engines to maximize the identifications generated from data collections. This talk evaluates four strategies to combine information from multiple search engines in comparative proteomics. The “Count Sum” model pools the spectra across search engines. The “Vote Counting” model combines the judgments from each search engine by protein. Two other models employ parametric and non-parametric analyses of protein-specific p-values from different search engines. We evaluated the four strategies in two different data sets. The ABRF iPRG 2009 study generated five LC-MS/MS analyses of “red” E. coli and five analyses of “yellow” E. coli. NCI CPTAC Study 6 generated five concentrations of Sigma UPS1 spiked into a yeast background. All data were identified with X!Tandem, Sequest, MyriMatch, and TagRecon. For both sample types, “Vote Counting” appeared to manage the diverse identification sets most effectively, yielding heightened discrimination as more search engines were added.
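A minimal sketch of the "Vote Counting" idea, assuming each engine simply contributes a set of proteins it calls differential; the engine names reflect the abstract, but the protein accessions and the majority threshold are invented.

```python
# Minimal sketch of "Vote Counting": each search engine casts one vote per
# protein it deems differential between cohorts; proteins with enough votes
# form the consensus list.  Protein calls are invented for illustration.
from collections import Counter

calls = {
    "X!Tandem":  {"P12345", "P67890", "Q11111"},
    "Sequest":   {"P12345", "Q11111"},
    "MyriMatch": {"P12345", "Q22222", "Q11111"},
    "TagRecon":  {"P12345", "Q22222"},
}

votes = Counter(protein for proteins in calls.values() for protein in proteins)
min_votes = 3   # require agreement from a majority of the four engines
consensus = sorted(p for p, v in votes.items() if v >= min_votes)
print("votes:", dict(votes))
print("consensus differential proteins:", consensus)
```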
Can color-coded parametric maps improve dynamic enhancement pattern analysis in MR mammography?
Baltzer, P A; Dietzel, M; Vag, T; Beger, S; Freiberg, C; Herzog, A B; Gajda, M; Camara, O; Kaiser, W A
2010-03-01
Post-contrast enhancement characteristics (PEC) are a major criterion for differential diagnosis in MR mammography (MRM). Manual placement of regions of interest (ROIs) to obtain time/signal intensity curves (TSIC) is the standard approach to assess dynamic enhancement data. Computers can automatically calculate the TSIC in every lesion voxel and combine this data to form one color-coded parametric map (CCPM). Thus, the TSIC of the whole lesion can be assessed. This investigation was conducted to compare the diagnostic accuracy (DA) of CCPM with TSIC for the assessment of PEC. 329 consecutive patients with 469 histologically verified lesions were examined. MRM was performed according to a standard protocol (1.5 T, 0.1 mmol/kg bw Gd-DTPA). ROIs were drawn manually within each lesion to calculate the TSIC. CCPMs were created in all patients using dedicated software (CAD Sciences). Both methods were rated by 2 observers in consensus on an ordinal scale. Receiver operating characteristics (ROC) analysis was used to compare both methods. The area under the curve (AUC) was significantly (p=0.026) higher for CCPM (0.829) than for TSIC (0.749). The sensitivity was 88.5% (CCPM) vs. 82.8% (TSIC), whereas equal specificity levels were found (CCPM: 63.7%, TSIC: 63.0%). The color-coded parametric maps (CCPMs) showed a significantly higher DA compared to TSIC; in particular, the sensitivity could be increased. Therefore, the CCPM method is a feasible approach to assessing dynamic data in MRM and condenses several imaging series into one parametric map. © Georg Thieme Verlag KG Stuttgart · New York.
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with the parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reduction in standard deviation by 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges
2013-10-01
Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with the parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reduction in standard deviation by 12%-29% and 32%-70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
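The core numerical ingredient is a preconditioned conjugate gradient iteration with a diagonal preconditioner. The sketch below shows generic PCG with a Jacobi-style diagonal preconditioner on a small symmetric positive-definite system; it does not reproduce the paper's Poisson log-likelihood cost or its parameter-to-sensitivity preconditioner.

```python
# Generic preconditioned conjugate gradient with a diagonal preconditioner,
# shown on a small symmetric positive-definite test system.
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r           # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, it + 1

rng = np.random.default_rng(1)
Q = rng.normal(size=(50, 50))
A = Q @ Q.T + 50 * np.eye(50)          # SPD test matrix
b = rng.normal(size=50)
x, iters = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
```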
A Semi-Parametric Bayesian Mixture Modeling Approach for the Analysis of Judge Mediated Data
ERIC Educational Resources Information Center
Muckle, Timothy Joseph
2010-01-01
Existing methods for the analysis of ordinal-level data arising from judge ratings, such as the Multi-Facet Rasch model (MFRM, or the so-called Facets model) have been widely used in assessment in order to render fair examinee ability estimates in situations where the judges vary in their behavior or severity. However, this model makes certain…
Examination of influential observations in penalized spline regression
NASA Astrophysics Data System (ADS)
Türkan, Semra
2013-10-01
In parametric or nonparametric regression models, the results of regression analysis are affected by some anomalous observations in the data set. Thus, detection of these observations is one of the major steps in regression analysis. These observations are precisely detected by well-known influence measures. Pena's statistic is one of them. In this study, Pena's approach is formulated for penalized spline regression in terms of ordinary residuals and leverages. Real and artificial data are used to illustrate the effectiveness of Pena's statistic relative to Cook's distance in detecting influential observations. The results of the study clearly reveal that the proposed measure is superior to Cook's distance in detecting these observations in large data sets.
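For reference, Cook's distance, the baseline influence measure against which Pena's statistic is compared, can be computed from ordinary residuals and leverages as in the sketch below; the data are simulated and a single gross outlier is planted.

```python
# Cook's distance for ordinary least squares, computed from residuals and
# leverages; simulated data with one planted influential observation.
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(scale=0.5, size=n)
y[10] += 8.0                                   # planted influential point

H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix
h = np.diag(H)                                 # leverages
e = y - H @ y                                  # ordinary residuals
s2 = e @ e / (n - p)
cooks_d = (e**2 / (p * s2)) * h / (1 - h)**2

print("largest Cook's distance at observation", int(np.argmax(cooks_d)),
      "=", round(float(cooks_d.max()), 3))
```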
Borri, Marco; Schmidt, Maria A; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M; Partridge, Mike; Bhide, Shreerang A; Nutting, Christopher M; Harrington, Kevin J; Newbold, Katie L; Leach, Martin O
2015-01-01
To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted imaging MRI data from a cohort of squamous cell carcinoma of the head and neck patients. Cumulative distributions of voxels, containing pre and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes.
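A minimal sketch of the clustering step under simplifying assumptions: synthetic voxel-wise feature vectors (stand-ins for the DCE and diffusion parameters) are standardized and partitioned with k-means for several values of k, with a silhouette score used as a simple cluster-validation index; the paper's actual pipeline (principal component analysis plus its chosen validation criterion) is not reproduced.

```python
# Toy voxel-wise clustering of multi-parametric features with k-means and a
# simple cluster-validation score; all data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Synthetic voxels: two feature columns (e.g. a perfusion-like and an
# ADC-like value), drawn from three latent groups.
voxels = np.vstack([rng.normal([0.1, 1.5], 0.05, (300, 2)),
                    rng.normal([0.3, 1.0], 0.05, (300, 2)),
                    rng.normal([0.2, 0.6], 0.05, (300, 2))])
Xs = StandardScaler().fit_transform(voxels)

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
    print(f"k = {k}: silhouette = {silhouette_score(Xs, labels):.3f}")
```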
Fitting C² Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures
Bayer, Jason D.
2014-01-01
We present a technique to fit C² continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C² continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C² continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point should be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach where a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner to ensure a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. Finally, from this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C² continuous. PMID:24782911
Waveform inversion for orthorhombic anisotropy with P waves: feasibility and resolution
NASA Astrophysics Data System (ADS)
Kazei, Vladimir; Alkhalifah, Tariq
2018-05-01
Various parametrizations have been suggested to simplify inversions of first arrivals, or P waves, in orthorhombic anisotropic media, but the number and type of retrievable parameters have not been decisively determined. We show that only six parameters can be retrieved from the dynamic linearized inversion of P waves. These parameters are different from the six parameters needed to describe the kinematics of P waves. Reflection-based radiation patterns from the P-P scattered waves are remapped into the spectral domain to allow for our resolution analysis based on the effective angle of illumination concept. Singular value decomposition of the spectral sensitivities from various azimuths, offset coverage scenarios and data bandwidths allows us to quantify the resolution of different parametrizations, taking into account the signal-to-noise ratio in a given experiment. According to our singular value analysis, when the primary goal of inversion is determining the velocity of the P waves, gradually adding anisotropy of lower orders (isotropic, vertically transversally isotropic and orthorhombic) in hierarchical parametrization is the best choice. Hierarchical parametrization reduces the trade-off between the parameters and makes gradual introduction of lower anisotropy orders straightforward. When all the anisotropic parameters affecting P-wave propagation need to be retrieved simultaneously, the classic parametrization of orthorhombic medium with elastic stiffness matrix coefficients and density is a better choice for inversion. We provide estimates of the number and set of parameters that can be retrieved from surface seismic data in different acquisition scenarios. To set up an inversion process, the singular values determine the number of parameters that can be inverted and the resolution matrices from the parametrizations can be used to ascertain the set of parameters that can be resolved.
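The resolution argument can be mimicked on toy data: build a sensitivity (Jacobian) matrix of the data with respect to the model parameters, take its singular value decomposition, and count the singular values that exceed a noise-dependent threshold. In the sketch below the matrix is random and purely illustrative, not a seismic sensitivity kernel.

```python
# Counting resolvable parameter combinations from the SVD of an illustrative
# sensitivity matrix, given an assumed signal-to-noise ratio.
import numpy as np

rng = np.random.default_rng(4)
n_data, n_params = 200, 9                 # e.g. 9 medium parameters
J = rng.normal(size=(n_data, n_params))
J[:, 6:] *= 1e-3                          # pretend some parameters are weakly sensed

U, s, Vt = np.linalg.svd(J, full_matrices=False)
snr = 100.0
resolvable = int(np.sum(s / s[0] > 1.0 / snr))
print("relative singular values:", np.round(s / s[0], 4))
print(f"parameter combinations resolvable at SNR {snr:.0f}: {resolvable}")
print("best-resolved combination (row of V^T):", np.round(Vt[0], 2))
```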
Prevalence Incidence Mixture Models
The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and for stratified sampling (both superpopulation and finite-population target populations are supported). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.
ERIC Educational Resources Information Center
Sobh, Tarek M.; Tibrewal, Abhilasha
2006-01-01
Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…
Parametric symmetries in exactly solvable real and PT symmetric complex potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadav, Rajesh Kumar, E-mail: rajeshastrophysics@gmail.com; Khare, Avinash, E-mail: khare@physics.unipune.ac.in; Bagchi, Bijan, E-mail: bbagchi123@gmail.com
In this paper, we discuss the parametric symmetries in different exactly solvable systems characterized by real or complex PT symmetric potentials. We focus our attention on the conventional potentials such as the generalized Pöschl-Teller (GPT), Scarf-I, and PT symmetric Scarf-II which are invariant under certain parametric transformations. The resulting set of potentials is shown to yield a completely different behavior of the bound state solutions. Further, the supersymmetric partner potentials acquire different forms under such parametric transformations leading to new sets of exactly solvable real and PT symmetric complex potentials. These potentials are also observed to be shape invariant (SI) in nature. We subsequently take up a study of the newly discovered rationally extended SI potentials, corresponding to the above mentioned conventional potentials, whose bound state solutions are associated with the exceptional orthogonal polynomials (EOPs). We discuss the transformations of the corresponding Casimir operator employing the properties of the so(2, 1) algebra.
Model risk for European-style stock index options.
Gençay, Ramazan; Gibson, Rajna
2007-01-01
In empirical modeling, there have been two strands for pricing in the options literature, namely the parametric and nonparametric models. Often, the support for the nonparametric methods is based on a benchmark such as the Black-Scholes (BS) model with constant volatility. In this paper, we study the stochastic volatility (SV) and stochastic volatility random jump (SVJ) models as parametric benchmarks against feedforward neural network (FNN) models, a class of neural network models. Our choice of FNN models is due to their well-studied universal approximation properties of an unknown function and its partial derivatives. Since the partial derivatives of an option pricing formula are risk pricing tools, an accurate estimation of the unknown option pricing function is essential for pricing and hedging. Our findings indicate that FNN models offer themselves as robust option pricing tools, outperforming their sophisticated parametric counterparts in predictive settings. There are two routes to explain the superiority of FNN models over the parametric models in forecast settings: the nonnormality of return distributions and adaptive learning.
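A toy version of this benchmarking exercise, assuming Black-Scholes as the parametric reference and a small feedforward network as the nonparametric learner; the strike, rate, volatility and network size are arbitrary choices for illustration.

```python
# Fit a small feedforward network to call prices generated by the
# Black-Scholes formula (the parametric benchmark).
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(5)
moneyness = rng.uniform(0.8, 1.2, 5000)          # S/K
maturity = rng.uniform(0.05, 1.0, 5000)          # years
X = np.column_stack([moneyness, maturity])
y = bs_call(moneyness, 1.0, maturity, r=0.03, sigma=0.2)   # prices with K = 1

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X[:4000], y[:4000])
err = np.abs(net.predict(X[4000:]) - y[4000:])
print(f"mean absolute pricing error on held-out points: {err.mean():.5f}")
```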
Definition of NASTRAN sets by use of parametric geometry
NASA Technical Reports Server (NTRS)
Baughn, Terry V.; Tiv, Mehran
1989-01-01
Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics, which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow-on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may not lend itself to easily identifying a group of grid points desired for data recovery or application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point IDs from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.
Multivariate decoding of brain images using ordinal regression.
Doyle, O M; Ashburner, J; Zelaya, F O; Williams, S C R; Mehta, M A; Marquand, A F
2013-11-01
Neuroimaging data are increasingly being used to predict potential outcomes or groupings, such as clinical severity, drug dose response, and transitional illness states. In these examples, the variable (target) we want to predict is ordinal in nature. Conventional classification schemes assume that the targets are nominal and hence ignore their ranked nature, whereas parametric and/or non-parametric regression models enforce a metric notion of distance between classes. Here, we propose a novel, alternative multivariate approach that overcomes these limitations - whole brain probabilistic ordinal regression using a Gaussian process framework. We applied this technique to two data sets of pharmacological neuroimaging data from healthy volunteers. The first study was designed to investigate the effect of ketamine on brain activity and its subsequent modulation with two compounds - lamotrigine and risperidone. The second study investigates the effect of scopolamine on cerebral blood flow and its modulation using donepezil. We compared ordinal regression to multi-class classification schemes and metric regression. Considering the modulation of ketamine with lamotrigine, we found that ordinal regression significantly outperformed multi-class classification and metric regression in terms of accuracy and mean absolute error. However, for risperidone ordinal regression significantly outperformed metric regression but performed similarly to multi-class classification both in terms of accuracy and mean absolute error. For the scopolamine data set, ordinal regression was found to outperform both multi-class and metric regression techniques considering the regional cerebral blood flow in the anterior cingulate cortex. Ordinal regression was thus the only method that performed well in all cases. Our results indicate the potential of an ordinal regression approach for neuroimaging data while providing a fully probabilistic framework with elegant approaches for model selection. Copyright © 2013. Published by Elsevier Inc.
Mediation analysis with time varying exposures and mediators
VanderWeele, Tyler J.; Tchetgen Tchetgen, Eric J.
2016-01-01
Summary In this paper we consider causal mediation analysis when exposures and mediators vary over time. We give non-parametric identification results, discuss parametric implementation, and also provide a weighting approach to direct and indirect effects based on combining the results of two marginal structural models. We also discuss how our results give rise to a causal interpretation of the effect estimates produced from longitudinal structural equation models. When there are time-varying confounders affected by prior exposure and mediator, natural direct and indirect effects are not identified. However, we define a randomized interventional analogue of natural direct and indirect effects that are identified in this setting. The formula that identifies these effects we refer to as the “mediational g-formula.” When there is no mediation, the mediational g-formula reduces to Robins’ regular g-formula for longitudinal data. When there are no time-varying confounders affected by prior exposure and mediator values, then the mediational g-formula reduces to a longitudinal version of Pearl’s mediation formula. However, the mediational g-formula itself can accommodate both mediation and time-varying confounders and constitutes a general approach to mediation analysis with time-varying exposures and mediators. PMID:28824285
Parametric dictionary learning for modeling EAP and ODF in diffusion MRI.
Merlet, Sylvain; Caruyer, Emmanuel; Deriche, Rachid
2012-01-01
In this work, we propose an original and efficient approach to exploit the ability of Compressed Sensing (CS) to recover diffusion MRI (dMRI) signals from a limited number of samples while efficiently recovering important diffusion features such as the ensemble average propagator (EAP) and the orientation distribution function (ODF). Some attempts to sparsely represent the diffusion signal have already been performed. However, and contrary to what has been presented in CS dMRI, in this work we propose and advocate the use of a well-adapted learned dictionary and show that it leads to a sparser signal estimation as well as to an efficient reconstruction of very important diffusion features. We first propose to learn and design a sparse and parametric dictionary from a set of training diffusion data. Then, we propose a framework to analytically estimate in closed form two important diffusion features: the EAP and the ODF. Various experiments on synthetic, phantom and human brain data have been carried out and promising results with a reduced number of atoms have been obtained on diffusion signal reconstruction, thus illustrating the added value of our method over state-of-the-art SHORE and SPF based approaches.
Mediation analysis with time varying exposures and mediators.
VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J
2017-06-01
In this paper we consider causal mediation analysis when exposures and mediators vary over time. We give non-parametric identification results, discuss parametric implementation, and also provide a weighting approach to direct and indirect effects based on combining the results of two marginal structural models. We also discuss how our results give rise to a causal interpretation of the effect estimates produced from longitudinal structural equation models. When there are time-varying confounders affected by prior exposure and mediator, natural direct and indirect effects are not identified. However, we define a randomized interventional analogue of natural direct and indirect effects that are identified in this setting. The formula that identifies these effects we refer to as the "mediational g-formula." When there is no mediation, the mediational g-formula reduces to Robins' regular g-formula for longitudinal data. When there are no time-varying confounders affected by prior exposure and mediator values, then the mediational g-formula reduces to a longitudinal version of Pearl's mediation formula. However, the mediational g-formula itself can accommodate both mediation and time-varying confounders and constitutes a general approach to mediation analysis with time-varying exposures and mediators.
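In the point-exposure special case with no time-varying confounding (where the mediational g-formula reduces to Pearl's mediation formula) and linear mediator and outcome models, the natural direct and indirect effects collapse to the familiar difference and product of coefficients. The sketch below simulates such data and recovers both effects; all coefficients are invented.

```python
# Point-exposure mediation with linear models: NDE and NIE from fitted
# mediator and outcome regressions on simulated data.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
C = rng.normal(size=n)                                     # baseline confounder
A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * C)), size=n)    # exposure
M = 1.0 + 0.8 * A + 0.3 * C + rng.normal(size=n)           # mediator model
Y = 2.0 + 0.5 * A + 1.2 * M + 0.4 * C + rng.normal(size=n) # outcome model

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
bM = ols(np.column_stack([ones, A, C]), M)     # [intercept, A, C]
bY = ols(np.column_stack([ones, A, M, C]), Y)  # [intercept, A, M, C]

nde = bY[1]            # natural direct effect under no exposure-mediator interaction
nie = bY[2] * bM[1]    # natural indirect effect (A -> M -> Y)
print(f"NDE ~ {nde:.3f} (true 0.5), NIE ~ {nie:.3f} (true 0.96), total ~ {nde + nie:.3f}")
```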
SHIPS: Spectral Hierarchical Clustering for the Inference of Population Structure in Genetic Studies
Bouaziz, Matthieu; Paccard, Caroline; Guedj, Mickael; Ambroise, Christophe
2012-01-01
Inferring the structure of populations has many applications for genetic research. In addition to providing information for evolutionary studies, it can be used to account for the bias induced by population stratification in association studies. To this end, many algorithms have been proposed to cluster individuals into genetically homogeneous sub-populations. The parametric algorithms, such as Structure, are very popular but their underlying complexity and their high computational cost led to the development of faster parametric alternatives such as Admixture. Alternatives to these methods are the non-parametric approaches. Among this category, AWclust has proven efficient but fails to properly identify population structure for complex datasets. We present in this article a new clustering algorithm called Spectral Hierarchical clustering for the Inference of Population Structure (SHIPS), based on a divisive hierarchical clustering strategy, allowing a progressive investigation of population structure. This method takes genetic data as input to cluster individuals into homogeneous sub-populations and, with the use of the gap statistic, estimates the optimal number of such sub-populations. SHIPS was applied to a set of simulated discrete and admixed datasets and to real SNP datasets from the HapMap and Pan-Asian SNP consortia. The programs Structure, Admixture, AWclust and PCAclust were also investigated in a comparison study. SHIPS and the parametric approach Structure were the most accurate when applied to simulated datasets, both in terms of individual assignments and estimation of the correct number of clusters. The analysis of the results on the real datasets highlighted that the clusterings of SHIPS were the most consistent with the population labels or those produced by the Admixture program. The performances of SHIPS when applied to SNP data, along with its relatively low computational cost and its ease of use, make this method a promising solution to infer fine-scale genetic patterns. PMID:23077494
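For illustration only, the sketch below clusters a synthetic genotype matrix (individuals by SNPs coded 0/1/2) with standard agglomerative hierarchical clustering from SciPy; SHIPS itself uses a spectral, divisive strategy combined with the gap statistic, which is not reproduced here.

```python
# Hierarchical clustering of a synthetic genotype matrix into sub-populations.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
def pop(freqs, n):          # genotypes drawn from population-specific allele freqs
    return rng.binomial(2, freqs, size=(n, freqs.size))

n_snps = 500
geno = np.vstack([pop(rng.uniform(0.1, 0.5, n_snps), 60),
                  pop(rng.uniform(0.5, 0.9, n_snps), 60)])

Z = linkage(geno, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])   # expect roughly 60 / 60
```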
A level-set method for two-phase flows with moving contact line and insoluble surfactant
NASA Astrophysics Data System (ADS)
Xu, Jian-Jun; Ren, Weiqing
2014-04-01
A level-set method for two-phase flows with moving contact line and insoluble surfactant is presented. The mathematical model consists of the Navier-Stokes equation for the flow field, a convection-diffusion equation for the surfactant concentration, together with the Navier boundary condition and a condition for the dynamic contact angle derived by Ren et al. (2010) [37]. The numerical method is based on the level-set continuum surface force method for two-phase flows with surfactant developed by Xu et al. (2012) [54] with some cautious treatment for the boundary conditions. The numerical method consists of three components: a flow solver for the velocity field, a solver for the surfactant concentration, and a solver for the level-set function. In the flow solver, the surface force is dealt with using the continuum surface force model. The unbalanced Young stress at the moving contact line is incorporated into the Navier boundary condition. A convergence study of the numerical method and a parametric study are presented. The influence of surfactant on the dynamics of the moving contact line is illustrated using examples. The capability of the level-set method to handle complex geometries is demonstrated by simulating a pendant drop detaching from a wall under gravity.
Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.
2014-01-01
Cervical spinal injuries are a significant concern in all trauma injuries. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data for low-loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry, and soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading conditions in multiple loading directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, along with the ability to statistically validate the overall model performance, provide innovative and important steps toward describing the differences in vertebral morphology, spinal curvature, and variation in material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will lead to understanding and mitigating the risks of injury in the spine and other musculoskeletal structures. PMID:25506051
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
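The index-selection step at the heart of (M)DEIM can be sketched in a few lines: given a POD basis of snapshots, interpolation points are chosen greedily where the residual of interpolating each new mode at the already selected points is largest. The snapshot family below is synthetic; MDEIM applies the same procedure to vectorized system matrices.

```python
# Greedy DEIM interpolation-point selection for a POD basis of synthetic
# parametrized snapshots.
import numpy as np

def deim_indices(U):
    """DEIM point selection for a basis U (n x m, columns = POD modes)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        c = np.linalg.solve(U[idx, :l], U[idx, l])        # interpolate new mode
        r = U[:, l] - U[:, :l] @ c                        # interpolation residual
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Synthetic parametrized snapshots f(x; mu) and their POD basis.
x = np.linspace(0.0, 1.0, 400)
mus = np.linspace(1.0, 10.0, 50)
snapshots = np.array([np.sin(mu * x) * np.exp(-mu * x) for mu in mus]).T
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :6]

idx = deim_indices(basis)
print("DEIM interpolation points (grid indices):", idx)
```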
Kramer, Gerbrand Maria; Frings, Virginie; Heijtel, Dennis; Smit, E F; Hoekstra, Otto S; Boellaard, Ronald
2017-06-01
The objective of this study was to validate several parametric methods for quantification of 3'-deoxy-3'-18F-fluorothymidine (18F-FLT) PET in advanced-stage non-small cell lung carcinoma (NSCLC) patients with an activating epidermal growth factor receptor mutation who were treated with gefitinib or erlotinib. Furthermore, we evaluated the impact of noise on accuracy and precision of the parametric analyses of dynamic 18F-FLT PET/CT to assess the robustness of these methods. Methods: Ten NSCLC patients underwent dynamic 18F-FLT PET/CT at baseline and 7 and 28 d after the start of treatment. Parametric images were generated using plasma input Logan graphic analysis and 2 basis functions-based methods: a 2-tissue-compartment basis function model (BFM) and spectral analysis (SA). Whole-tumor-averaged parametric pharmacokinetic parameters were compared with those obtained by nonlinear regression of the tumor time-activity curve using a reversible 2-tissue-compartment model with blood volume fraction. In addition, 2 statistically equivalent datasets were generated by countwise splitting the original list-mode data, each containing 50% of the total counts. Both new datasets were reconstructed, and parametric pharmacokinetic parameters were compared between the 2 replicates and the original data. Results: After the settings of each parametric method were optimized, distribution volumes (VT) obtained with Logan graphic analysis, BFM, and SA all correlated well with those derived using nonlinear regression at baseline and during therapy (R2 ≥ 0.94; intraclass correlation coefficient > 0.97). SA-based VT images were most robust to increased noise on a voxel level (repeatability coefficient, 16% vs. >26%). Yet BFM generated the most accurate K1 values (R2 = 0.94; intraclass correlation coefficient, 0.96). Parametric K1 data showed a larger variability in general; however, no differences were found in robustness between methods (repeatability coefficient, 80%-84%). Conclusion: Both BFM and SA can generate quantitatively accurate parametric 18F-FLT VT images in NSCLC patients before and during therapy. SA was more robust to noise, yet BFM provided more accurate parametric K1 data. We therefore recommend BFM as the preferred parametric method for analysis of dynamic 18F-FLT PET/CT studies; however, SA can also be used. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
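Of the methods compared, plasma-input Logan graphic analysis is the simplest to sketch: after an equilibration time t*, a plot of the normalized integrated tissue activity against the normalized integrated plasma activity becomes linear with slope VT. The example below uses a synthetic one-tissue-compartment curve with invented constants, not patient data.

```python
# Logan graphical analysis on a synthetic one-tissue compartment curve
# (K1 = 0.1 /min, k2 = 0.05 /min, so true VT = K1/k2 = 2).
import numpy as np
from scipy.integrate import cumulative_trapezoid, odeint

t = np.linspace(0.0, 90.0, 901)                               # minutes
Cp = 10.0 * t * np.exp(-t / 2.0) + 0.5 * np.exp(-t / 40.0)    # toy plasma input
K1, k2 = 0.1, 0.05

def dCt(Ct, ti):
    return K1 * np.interp(ti, t, Cp) - k2 * Ct
Ct = odeint(dCt, 0.0, t).ravel()

mask = t > 30.0                                               # linear segment t > t*
x = cumulative_trapezoid(Cp, t, initial=0.0)[mask] / Ct[mask]
y = cumulative_trapezoid(Ct, t, initial=0.0)[mask] / Ct[mask]
slope, intercept = np.polyfit(x, y, 1)
print(f"Logan VT estimate = {slope:.3f} (true K1/k2 = {K1 / k2:.1f})")
```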
Bim and Gis: when Parametric Modeling Meets Geospatial Data
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Banfi, F.
2017-12-01
Geospatial data have a crucial role in several projects related to infrastructures and land management. GIS software is able to perform advanced geospatial analyses, but it lacks several instruments and tools for parametric modelling typically available in BIM. At the same time, BIM software designed for buildings has limited tools to handle geospatial data. As things stand at the moment, BIM and GIS could appear as complementary solutions, although research work is currently under way to ensure a better level of interoperability, especially at the scale of the building. On the other hand, the transition from the local (building) scale to the infrastructure scale (where geospatial data cannot be neglected) has already demonstrated that parametric modelling integrated with geoinformation is a powerful tool to simplify and speed up some phases of the design workflow. This paper reviews such mixed approaches with both simulated and real examples, demonstrating that integration is already a reality at specific scales, which are not dominated by "pure" GIS or BIM. The paper will also demonstrate that some traditional operations carried out with GIS software are also available in parametric modelling software for BIM, such as transformation between reference systems, DEM generation, feature extraction, and geospatial queries. A real case study is illustrated and discussed to show the advantage of a combined use of both technologies. BIM and GIS integration can generate greater usage of geospatial data in the AECOO (Architecture, Engineering, Construction, Owner and Operator) industry, as well as new solutions for parametric modelling with additional geoinformation.
NASA Astrophysics Data System (ADS)
Fernández-Llamazares, Álvaro; Belmonte, Jordina; Delgado, Rosario; De Linares, Concepción
2014-04-01
Airborne pollen records are a suitable indicator for the study of climate change. The present work focuses on the role of annual pollen indices for the detection of bioclimatic trends through the analysis of the aerobiological spectra of 11 taxa of great biogeographical relevance in Catalonia over an 18-year period (1994-2011), by means of different parametric and non-parametric statistical methods. Among others, two non-parametric rank-based statistical tests were performed for detecting monotonic trends in time series data of the selected airborne pollen types, and we have observed that they have similar power in detecting trends. Except for those cases in which the pollen data can be well-modeled by a normal distribution, it is better to apply non-parametric statistical methods to aerobiological studies. Our results provide a reliable representation of the pollen trends in the region and suggest that greater pollen quantities are being liberated to the atmosphere in recent years, especially by Mediterranean taxa such as Pinus, Total Quercus and Evergreen Quercus, although the trends may differ geographically. Longer aerobiological monitoring periods are required to corroborate these results and survey the increasing levels of certain pollen types that could exert an impact on public health.
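One of the rank-based trend tests referred to above can be written in a few lines; the sketch implements a basic Mann-Kendall test (normal approximation, no tie correction) on an invented 18-year series of annual pollen indices.

```python
# Minimal Mann-Kendall monotonic trend test on a synthetic annual series.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, 2 * (1 - norm.cdf(abs(z)))     # S statistic, two-sided p-value

rng = np.random.default_rng(8)
years = np.arange(1994, 2012)
pollen_index = 2000 + 60 * (years - years[0]) + rng.normal(0, 400, years.size)
s, p = mann_kendall(pollen_index)
print(f"Mann-Kendall S = {s:.0f}, p = {p:.4f}  ({'increasing' if s > 0 else 'no'} trend)")
```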
Nonlinear Adjustment with or without Constraints, Applicable to Geodetic Models
1989-03-01
corrections are neglected, resulting in the familiar (linearized) observation equations. In matrix notation, the latter are expressed by V = AX + L, where A is the design matrix, X = Xa - X0 is the column-vector of parametric corrections, V = La - Lb is the column-vector of residuals, and L = L0 - Lb is the...
Evaluation of Two Energy Balance Closure Parametrizations
NASA Astrophysics Data System (ADS)
Eder, Fabian; De Roo, Frederik; Kohnert, Katrin; Desjardins, Raymond L.; Schmid, Hans Peter; Mauder, Matthias
2014-05-01
A general lack of energy balance closure indicates that tower-based eddy-covariance (EC) measurements underestimate turbulent heat fluxes, which calls for robust correction schemes. Two parametrization approaches that can be found in the literature were tested using data from the Canadian Twin Otter research aircraft and from tower-based measurements of the German Terrestrial Environmental Observatories (TERENO) programme. Our analysis shows that the approach of Huang et al. (Boundary-Layer Meteorol 127:273-292, 2008), based on large-eddy simulation, is not applicable to typical near-surface flux measurements because it was developed for heights above the surface layer and over homogeneous terrain. The biggest shortcoming of this parametrization is that the grid resolution of the model was too coarse so that the surface layer, where EC measurements are usually made, is not properly resolved. The empirical approach of Panin and Bernhofer (Izvestiya Atmos Oceanic Phys 44:701-716, 2008) considers landscape-level roughness heterogeneities that induce secondary circulations and at least gives a qualitative estimate of the energy balance closure. However, it does not consider any feature of landscape-scale heterogeneity other than surface roughness, such as surface temperature, surface moisture or topography. The failures of both approaches might indicate that the influence of mesoscale structures is not a sufficient explanation for the energy balance closure problem. However, our analysis of different wind-direction sectors shows that the upwind landscape-scale heterogeneity indeed influences the energy balance closure determined from tower flux data. We also analyzed the aircraft measurements with respect to the partitioning of the "missing energy" between sensible and latent heat fluxes and we could confirm the assumption of scalar similarity only for Bowen ratios 1.
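The partitioning question raised at the end of the abstract is often handled by distributing the residual of the energy balance between the sensible and latent heat fluxes in proportion to the measured Bowen ratio. The sketch below shows that correction on invented half-hourly flux values; it illustrates the bookkeeping only, not the paper's findings.

```python
# Bowen-ratio partitioning of the energy balance residual between sensible (H)
# and latent (LE) heat flux; all values are invented (W m-2).
Rn, G = 480.0, 60.0        # net radiation and ground heat flux
H, LE = 150.0, 180.0       # eddy-covariance fluxes (underestimated)

residual = (Rn - G) - (H + LE)
bowen = H / LE
H_corr = H + residual * bowen / (1.0 + bowen)
LE_corr = LE + residual / (1.0 + bowen)

print(f"residual = {residual:.0f} W m-2, Bowen ratio = {bowen:.2f}")
print(f"corrected H = {H_corr:.0f}, corrected LE = {LE_corr:.0f}, "
      f"closure = {(H_corr + LE_corr) / (Rn - G):.2f}")
```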
Increasing Flexibility in Energy Code Compliance: Performance Packages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Rosenberg, Michael I.
Energy codes and standards have provided significant increases in building efficiency over the last 38 years, since the first national energy code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. As the code matures, the prescriptive path becomes more complicated, and also more restrictive. It is likely that an approach that considers the building as an integrated system will be necessary to achieve the next real gains in building efficiency. Performance code paths are increasing in popularity; however, there remains a significant design team overhead in following the performance path, especially for smaller buildings. This paper focuses on development of one alternative format, prescriptive packages. A method to develop building-specific prescriptive packages is reviewed, based on multiple runs of prototypical building models that are used in a parametric decision analysis to determine a set of packages with equivalent energy performance. The approach is designed to be cost-effective and flexible for the design team while achieving a desired level of energy efficiency performance. A demonstration of the approach based on mid-sized office buildings with two HVAC system types is shown, along with a discussion of potential applicability in the energy code process.
Power flow analysis of two coupled plates with arbitrary characteristics
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
In the last progress report (Feb. 1988) some results were presented for a parametric analysis of the vibrational power flow between two coupled plate structures using the mobility power flow approach. The results reported then were for changes in the structural parameters of the two plates, but with the two plates identical in their structural characteristics. Herein, this limitation is removed. The vibrational power input and output are evaluated for different values of the structural damping loss factor for the source and receiver plates. In performing this parametric analysis, the source plate characteristics are kept constant. The purpose of this parametric analysis is to determine the most critical parameters that influence the flow of vibrational power from the source plate to the receiver plate. In the case of the structural damping parametric analysis, the influence of changes in the source plate damping is also investigated. The results obtained from the mobility power flow approach are compared to results obtained using a statistical energy analysis (SEA) approach. The significance of the power flow results is discussed, together with a comparison between the SEA results and the mobility power flow results. Furthermore, the benefits derived from using the mobility power flow approach are examined.
Robust stability of fractional order polynomials with complicated uncertainty structure
Şenol, Bilal; Pekař, Libor
2017-01-01
The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173
Constructing a simple parametric model of shoulder from medical images
NASA Astrophysics Data System (ADS)
Atmani, H.; Fofi, D.; Merienne, F.; Trouilloud, P.
2006-02-01
The modelling of the shoulder joint is an important step in setting up a Computer-Aided Surgery System for shoulder prosthesis placement. Our approach mainly concerns the bone structures of the scapulo-humeral joint. Our goal is to develop a tool that allows the surgeon to extract morphological data from medical images in order to interpret the biomechanical behaviour of a prosthesised shoulder for preoperative and peroperative virtual surgery. To provide a light and easy-handling representation of the shoulder, a geometrical model composed of quadrics, planes and other simple forms is proposed.
Validation of a Parametric Approach for 3d Fortification Modelling: Application to Scale Models
NASA Astrophysics Data System (ADS)
Jacquot, K.; Chevrier, C.; Halin, G.
2013-02-01
The parametric modelling approach applied to cultural heritage virtual representation is a field of research explored for years, since it can address many limitations of digitising tools. For example, essential historical sources for fortification virtual reconstructions like plans-reliefs have several shortcomings when they are scanned. To overcome those problems, knowledge-based modelling can be used: knowledge models based on the analysis of the theoretical literature of a specific domain, such as bastioned fortification treatises, can be the cornerstone of the creation of a parametric library of fortification components. Implemented in Grasshopper, these components are manually adjusted to the data available (i.e. 3D surveys of plans-reliefs or scanned maps). Most of the fortification area is now modelled and the question of accuracy assessment is raised. A specific method is used to evaluate the accuracy of the parametric components. The results of the assessment process will allow us to validate the parametric approach. The automation of the adjustment process can finally be planned. The virtual model of fortification is part of a larger project aimed at valorising and diffusing a very unique cultural heritage item: the collection of plans-reliefs. As such, knowledge models are precious assets when automation and semantic enhancements will be considered.
1987-03-01
would be transcribed as L = AX - V, where L, X, and V are the vectors of constant terms, parametric corrections, and residuals, respectively. The ... tensor. Just as du' represents the parametric corrections in tensor notation, the necessary associated metric tensor a' corresponds to the variance ... observations, residuals, and parametric corrections to X0 (an initial set of parameters), respectively. The vector L is formed as ... where
Nonrelativistic approaches derived from point-coupling relativistic models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lourenco, O.; Dutra, M.; Delfino, A.
2010-03-15
We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A=B=0 (Walecka model) leads to the same energy density functional of the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order ρ³, leads to parametrizations for which σ = 1.
Parametric nanomechanical amplification at very high frequency.
Karabalin, R B; Feng, X L; Roukes, M L
2009-09-01
Parametric resonance and amplification are important in both fundamental physics and technological applications. Here we report very high frequency (VHF) parametric resonators and mechanical-domain amplifiers based on nanoelectromechanical systems (NEMS). Compound mechanical nanostructures patterned by multilayer, top-down nanofabrication are read out by a novel scheme that parametrically modulates longitudinal stress in doubly clamped beam NEMS resonators. Parametric pumping and signal amplification are demonstrated for VHF resonators up to approximately 130 MHz and provide useful enhancement of both resonance signal amplitude and quality factor. We find that Joule heating and reduced thermal conductance in these nanostructures ultimately impose an upper limit to device performance. We develop a theoretical model to account for both the parametric response and nonequilibrium thermal transport in these composite nanostructures. The results closely conform to our experimental observations, elucidate the frequency and threshold-voltage scaling in parametric VHF NEMS resonators and sensors, and establish the ultimate sensitivity limits of this approach.
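The underlying mechanism can be illustrated with a damped Mathieu-type oscillator whose stiffness is pumped at twice the natural frequency: below a damping-set threshold the motion decays, above it the amplitude grows. The parameter values in the sketch are arbitrary and not tied to the NEMS devices described.

```python
# Parametric pumping of a damped oscillator at twice its natural frequency:
# sub- and super-threshold pump depths give decay versus growth.
import numpy as np
from scipy.integrate import solve_ivp

w0, Q = 2 * np.pi * 1.0, 200.0          # natural frequency (rad/s) and quality factor
gamma = w0 / Q

def pumped(t, y, eps):
    x, v = y
    return [v, -gamma * v - w0**2 * (1 + eps * np.cos(2 * w0 * t)) * x]

t_eval = np.linspace(0, 60, 6000)
for eps in (0.005, 0.05):                # below / above the threshold ~ 2/Q = 0.01
    sol = solve_ivp(pumped, (0, 60), [1e-3, 0.0], args=(eps,),
                    t_eval=t_eval, max_step=1e-2)
    print(f"pump depth {eps}: max |x| over 60 s = {np.abs(sol.y[0]).max():.2e}")
```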
The degenerate parametric oscillator and Ince's equation
NASA Astrophysics Data System (ADS)
Cordero-Soto, Ricardo; Suslov, Sergei K.
2011-01-01
We construct Green's function for the quantum degenerate parametric oscillator in the coordinate representation in terms of standard solutions of Ince's equation in a framework of a general approach to variable quadratic Hamiltonians. Exact time-dependent wavefunctions and their connections with dynamical invariants and SU(1, 1) group are also discussed. An extension to the degenerate parametric oscillator with time-dependent amplitude and phase is also mentioned.
Multigrid Approach to Incompressible Viscous Cavity Flows
NASA Technical Reports Server (NTRS)
Wood, William A.
1996-01-01
Two-dimensional incompressible viscous driven-cavity flows are computed for Reynolds numbers in the range 100-20,000 using a loosely coupled, implicit, second-order central-difference scheme. Mesh sequencing and three-level V-cycle multigrid error smoothing are incorporated into the symmetric Gauss-Seidel time-integration algorithm. Parametric studies of the numerical parameters are performed, achieving reductions in solution times by more than 60 percent with the full multigrid approach. Details of the circulation patterns are investigated in cavities of 2-to-1, 1-to-1, and 1-to-2 depth to width ratios.
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired from radio-astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called “soft support constraint” that favors the object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
Robust Control Design via Linear Programming
NASA Technical Reports Server (NTRS)
Keel, L. H.; Bhattacharyya, S. P.
1998-01-01
This paper deals with the problem of synthesizing or designing a feedback controller of fixed dynamic order. The closed loop specifications considered here are given in terms of a target performance vector representing a desired set of closed loop transfer functions connecting various signals. In general these point targets are unattainable with a fixed order controller. By enlarging the target from a fixed point set to an interval set the solvability conditions with a fixed order controller are relaxed and a solution is more easily enabled. Results from the parametric robust control literature can be used to design the interval target family so that the performance deterioration is acceptable, even when plant uncertainty is present. It is shown that it is possible to devise a computationally simple linear programming approach that attempts to meet the desired closed loop specifications.
Adapting Active Shape Models for 3D segmentation of tubular structures in medical images.
de Bruijne, Marleen; van Ginneken, Bram; Viergever, Max A; Niessen, Wiro J
2003-07-01
Active Shape Models (ASM) have proven to be an effective approach for image segmentation. In some applications, however, the linear model of gray level appearance around a contour that is used in ASM is not sufficient for accurate boundary localization. Furthermore, the statistical shape model may be too restricted if the training set is limited. This paper describes modifications to both the shape and the appearance model of the original ASM formulation. Shape model flexibility is increased, for tubular objects, by modeling the axis deformation independent of the cross-sectional deformation, and by adding supplementary cylindrical deformation modes. Furthermore, a novel appearance modeling scheme that effectively deals with a highly varying background is developed. In contrast with the conventional ASM approach, the new appearance model is trained on both boundary and non-boundary points, and the probability that a given point belongs to the boundary is estimated non-parametrically. The methods are evaluated on the complex task of segmenting thrombus in abdominal aortic aneurysms (AAA). Shape approximation errors were successfully reduced using the two shape model extensions. Segmentation using the new appearance model significantly outperformed the original ASM scheme; average volume errors are 5.1% and 45% respectively.
Thin-Film Photovoltaic Solar Array Parametric Assessment
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Kerslake, Thomas W.; Hepp, Aloysius F.; Jacobs, Mark K.; Ponnusamy, Deva
2000-01-01
This paper summarizes a study whose objective was to develop a model and parametrically determine the circumstances for which lightweight thin-film photovoltaic solar arrays would be more beneficial, in terms of mass and cost, than arrays using high-efficiency crystalline solar cells. Previous studies considering arrays with near-term thin-film technology for Earth orbiting applications are briefly reviewed. The present study uses a parametric approach that evaluated the performance of lightweight thin-film arrays with cell efficiencies ranging from 5 to 20 percent. The model developed for this study is described in some detail. Similar mass and cost trends for each array option were found across eight missions of various power levels in locations ranging from Venus to Jupiter. The results for one specific mission, a main belt asteroid tour, indicate that only moderate thin-film cell efficiency (approx. 12 percent) is necessary to match the mass of arrays using crystalline cells with much greater efficiency (35 percent multi-junction GaAs based and 20 percent thin-silicon). Regarding cost, a 12 percent efficient thin-film array is projected to cost about half as much as a 4-junction GaAs array. While efficiency improvements beyond 12 percent did not significantly further improve the mass and cost benefits for thin-film arrays, higher efficiency will be needed to mitigate the spacecraft-level impacts associated with large deployed array areas. A low-temperature approach to depositing thin-film cells on lightweight, flexible plastic substrates is briefly described. The paper concludes with the observation that with the characteristics assumed for this study, ultra-lightweight arrays using efficient, thin-film cells on flexible substrates may become a leading alternative for a wide variety of space missions.
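A first-order sizing comparison of the kind explored parametrically in the study might look like the following sketch; the efficiencies, areal densities, packing factor, and cost factors are illustrative placeholders, not values from the paper.

```python
def array_mass_and_cost(power_w, cell_eff, areal_density_kg_m2, cost_per_w):
    """Crude array sizing: area from required power and cell efficiency,
    mass from an assumed blanket areal density, cost from an assumed $/W."""
    solar_const_w_m2 = 1367.0      # approximate insolation at 1 AU
    packing_factor = 0.9           # assumed fraction of blanket area that is active cell
    area_m2 = power_w / (solar_const_w_m2 * cell_eff * packing_factor)
    return area_m2 * areal_density_kg_m2, power_w * cost_per_w

thin_film = array_mass_and_cost(5000.0, 0.12, 0.7, 150.0)       # hypothetical thin-film values
multi_junction = array_mass_and_cost(5000.0, 0.35, 2.5, 300.0)  # hypothetical crystalline values
print("thin film (kg, $):", thin_film)
print("multi-junction (kg, $):", multi_junction)
```

Even with placeholder numbers, the structure shows why a much lower-efficiency thin-film array can still win on mass and cost whenever its areal density and $/W are sufficiently low, at the price of a larger deployed area.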
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duesbery, M.S.
1993-02-26
This program aims at improving current methods of lifetime assessment by building in the characteristics of the micro-mechanisms known to be responsible for damage and failure. The broad approach entails the integration and, where necessary, augmentation of the micro-scale research results currently available in the literature into a macro-scale model with predictive capability. In more detail, the program will develop a set of hierarchically structured models at different length scales, from atomic to macroscopic, at each level taking as parametric input the results of the model at the next smaller scale. In this way the known microscopic properties can be transported by systematic procedures to the unknown macro-scale region. It may not be possible to eliminate empiricism completely, because some of the quantities involved cannot yet be estimated to the required degree of precision. In this case the aim will be at least to eliminate functional empiricism.
A graph grammar approach to artificial life.
Kniemeyer, Ole; Buck-Sorlin, Gerhard H; Kurth, Winfried
2004-01-01
We present the high-level language of relational growth grammars (RGGs) as a formalism designed for the specification of ALife models. RGGs can be seen as an extension of the well-known parametric Lindenmayer systems and contain rule-based, procedural, and object-oriented features. They are defined as rewriting systems operating on graphs with the edges coming from a set of user-defined relations, whereas the nodes can be associated with objects. We demonstrate their ability to represent genes, regulatory networks of metabolites, and morphologically structured organisms, as well as developmental aspects of these entities, in a common formal framework. Mutation, crossing over, selection, and the dynamics of a network of gene regulation can all be represented with simple graph rewriting rules. This is demonstrated in some detail on the classical example of Dawkins' biomorphs and the ABC model of flower morphogenesis: other applications are briefly sketched. An interactive program was implemented, enabling the execution of the formalism and the visualization of the results.
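For readers unfamiliar with parametric Lindenmayer systems, the toy rewriter below shows the string-based special case that relational growth grammars generalize to graphs; the rule set and symbols are invented for illustration and do not come from the RGG language itself.

```python
# Tiny parametric L-system interpreter: a word is a list of (symbol, params)
# pairs, and each rule maps the parameters of one symbol to a list of successors.
def rewrite(word, rules, steps):
    for _ in range(steps):
        new_word = []
        for symbol, params in word:
            produce = rules.get(symbol, lambda p: [(symbol, p)])  # identity if no rule
            new_word.extend(produce(params))
        word = new_word
    return word

# Example rule: an apex A(age) produces an internode F(length) and two apices.
rules = {"A": lambda p: [("F", (p[0] + 1,)), ("A", (p[0] + 1,)), ("A", (0,))]}
print(rewrite([("A", (0,))], rules, steps=3))
```

RGGs replace the linear word by a graph with typed edges and attach objects to nodes, which is what lets the same rewriting idea express gene regulation and morphology together.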
Gender Wage Disparities among the Highly Educated.
Black, Dan A; Haviland, Amelia; Sanders, Seth G; Taylor, Lowell J
2008-01-01
In the U.S. college-educated women earn approximately 30 percent less than their non-Hispanic white male counterparts. We conduct an empirical examination of this wage disparity for four groups of women-non-Hispanic white, black, Hispanic, and Asian-using the National Survey of College Graduates, a large data set that provides unusually detailed information on higher-level education. Nonparametric matching analysis indicates that among men and women who speak English at home, between 44 and 73 percent of the gender wage gaps are accounted for by such pre-market factors as highest degree and major. When we restrict attention further to women who have "high labor force attachment" (i.e., work experience that is similar to male comparables) we account for 54 to 99 percent of gender wage gaps. Our nonparametric approach differs from familiar regression-based decompositions, so for the sake of comparison we conduct parametric analyses as well. Inferences drawn from these latter decompositions can be quite misleading.
Performance characterization of a low power hydrazine arcjet
NASA Technical Reports Server (NTRS)
Knowles, S. C.; Smith, W. W.; Curran, F. M.; Haag, T. W.
1987-01-01
Hydrazine arcjets offer substantial performance advantages over alternatives in geosynchronous satellite stationkeeping applications; development work has addressed startup, materials compatibility, lifetime, and power conditioning unit design issues. Devices in the 1000-3000 W output range have been characterized for several different electrode configurations. Constrictor length and diameter, electrode gap setting, and vortex strength have been parametrically studied in order to ascertain the influence of each on specific impulse and efficiency; specific impulse levels greater than 700 sec have been achieved.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
Method of the active contour for segmentation of bone systems on bitmap images
NASA Astrophysics Data System (ADS)
Vu, Hai Anh; Safonov, Roman A.; Kolesnikova, Anna S.; Kirillova, Irina V.; Kossovich, Leonid U.
2018-02-01
An approach is developed within the framework of active contour methods that allows the contour of an object in an image to be extracted during segmentation. The approach is faster than the parametric active contour method while matching its accuracy. The proposed approach therefore allows object contours to be delineated in the image with high accuracy and more quickly than the parametric active contour method.
A unified framework for weighted parametric multiple test procedures.
Xi, Dong; Glimm, Ekkehard; Maurer, Willi; Bretz, Frank
2017-09-01
We describe a general framework for weighted parametric multiple test procedures based on the closure principle. We utilize general weighting strategies that can reflect complex study objectives and include many procedures in the literature as special cases. The proposed weighted parametric tests bridge the gap between rejection rules using either adjusted significance levels or adjusted p-values. This connection is made by allowing intersection hypotheses of the underlying closed test procedure to be tested at a level smaller than α. This may also be necessary to take certain study situations into account. For such cases we introduce a subclass of exact α-level parametric tests that satisfy the consonance property. When the correlation is known only for certain subsets of the test statistics, a new procedure is proposed to fully utilize this knowledge within each subset. We illustrate the proposed weighted parametric tests using a clinical trial example and conduct a simulation study to investigate its operating characteristics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Empirical intrinsic geometry for nonlinear modeling and time series filtering.
Talmon, Ronen; Coifman, Ronald R
2013-07-30
In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.
NASA Technical Reports Server (NTRS)
Gerberich, Matthew W.; Oleson, Steven R.
2013-01-01
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate the overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model against past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data of completed NASA missions.
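A minimal sketch of such a top-down estimating relationship is shown below: a log-linear regression of cost on the basic mission characteristics named in the abstract. The data are random placeholders, not COMPASS records, and the functional form is an assumption made for illustration.

```python
import numpy as np

# Fit log(cost) against simple mission characteristics by ordinary least squares.
rng = np.random.default_rng(1)
n = 40
X = np.column_stack([
    np.ones(n),                        # intercept
    np.log(rng.uniform(0.5, 30, n)),   # target distance [AU]
    np.log(rng.uniform(50, 2000, n)),  # payload mass [kg]
    rng.uniform(1, 10, n),             # mission duration [yr]
    rng.uniform(1, 15, n),             # delta-v requirement [km/s]
])
log_cost = X @ np.array([4.0, 0.3, 0.5, 0.05, 0.08]) + rng.normal(0, 0.2, n)

coef, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
print("fitted coefficients:", coef)
```

Categorical factors such as mission, propulsion, and power type would enter the same regression as indicator columns; the point of the sketch is only the shape of the top-down model, not its calibration.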
Modeling the Earth's magnetospheric magnetic field confined within a realistic magnetopause
NASA Technical Reports Server (NTRS)
Tsyganenko, N. A.
1995-01-01
Empirical data-based models of the magnetospheric magnetic field have been widely used during recent years. However, the existing models (Tsyganenko, 1987, 1989a) have three serious deficiencies: (1) an unstable de facto magnetopause, (2) a crude parametrization by the K(sub p) index, and (3) inaccuracies in the equatorial magnetotail B(sub z) values. This paper describes a new approach to the problem; the essential new features are (1) a realistic shape and size of the magnetopause, based on fits to a large number of observed crossings (allowing a parametrization by the solar wind pressure), (2) fully controlled shielding of the magnetic field produced by all magnetospheric current systems, (3) new flexible representations for the tail and ring currents, and (4) a new directional criterion for fitting the model field to spacecraft data, providing improved accuracy for field line mapping. Results are presented from initial efforts to create models assembled from these modules and calibrated against spacecraft data sets.
Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.
Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien
2017-01-01
Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However, it suffers from subject motion, which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [11C]raclopride data using the Zubal brain phantom and real clinical [18F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.
Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor
2015-01-01
In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach consists of an improvement of a bi-level multi-classifier system known as stacking generalization by means of an integration of an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757
Extracting the QCD ΛMS¯ parameter in Drell-Yan process using Collins-Soper-Sterman approach
NASA Astrophysics Data System (ADS)
Taghavi, R.; Mirjalili, A.
2017-03-01
In this work, we directly fit the QCD dimensional transmutation parameter, ΛMS¯, to experimental data of Drell-Yan (DY) observables. For this purpose, we first obtain the evolution of transverse momentum dependent parton distribution functions (TMDPDFs) up to the next-to-next-to-leading logarithm (NNLL) approximation based on the Collins-Soper-Sterman (CSS) formalism. As expected, the TMDPDFs extend to larger values of transverse momentum as the energy scale and the order of approximation increase. We then calculate the cross-section related to the TMDPDFs in the DY process. As a consequence of a global fit to five sets of experimental data at different low center-of-mass energies and one set at high center-of-mass energy, using the CETQ06 parametrizations as our boundary condition, we obtain ΛMS¯ = 221 ± 7(stat) ± 54(theory) MeV, corresponding to the renormalized coupling constant αs(Mz2) = 0.117 ± 0.001(stat) ± 0.004(theory), which is within the acceptable range for this quantity. The goodness of fit, χ2/d.o.f = 1.34, shows that the results for the DY cross-section are in good agreement with the different experimental sets, comprising E288, E605 and R209 at low center-of-mass energies and D0, CDF data at high center-of-mass energy. Repeating the calculations with the HERAPDF parametrizations yields fitted parameter values very close to those obtained with the CETQ06 PDF set. This indicates that the obtained results are stable under variations of the boundary conditions.
Borri, Marco; Schmidt, Maria A.; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M.; Partridge, Mike; Bhide, Shreerang A.; Nutting, Christopher M.; Harrington, Kevin J.; Newbold, Katie L.; Leach, Martin O.
2015-01-01
Purpose To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. Material and Methods The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted imaging MRI data from a cohort of squamous cell carcinoma of the head and neck patients. Cumulative distributions of voxels, containing pre and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. Results The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. Conclusion The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes. PMID:26398888
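A compact sketch of this kind of workflow, using k-means with PCA and a validation score to choose k, is shown below; the feature names, the silhouette criterion, and the synthetic voxel data are illustrative stand-ins rather than the exact pipeline used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Synthetic per-voxel multi-parametric features (stand-ins for e.g. Ktrans, ve, ADC).
rng = np.random.default_rng(0)
voxels = np.vstack([rng.normal(loc, 0.3, size=(300, 3))
                    for loc in ([0, 0, 0], [1.5, 0.5, -1], [-1, 2, 1])])

features = StandardScaler().fit_transform(voxels)
features = PCA(n_components=2).fit_transform(features)   # inspect data composition

# Cluster validation: score k = 2, 3, 4 and keep the best partition.
scores = {k: silhouette_score(features,
                              KMeans(n_clusters=k, n_init=10,
                                     random_state=0).fit_predict(features))
          for k in (2, 3, 4)}
best_k = max(scores, key=scores.get)
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(features)
print("validation scores:", scores, "-> chosen k =", best_k)
```

In the longitudinal setting described above, the per-cluster voxel counts would then be compared before and after treatment to identify sub-regions that respond differently.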
Non-parametric transient classification using adaptive wavelets
NASA Astrophysics Data System (ADS)
Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.
2015-11-01
Classifying transients based on multiband light curves is a challenging but crucial problem in the era of GAIA and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement, while a major advantage of the BAGIDIS wavelets is that they are translation invariant. Hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier against the Supernova Photometric Classification Challenge to correctly classify supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our `model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.
NASA Technical Reports Server (NTRS)
Salazar, George A. (Inventor)
1993-01-01
This invention relates to a reconfigurable fuzzy cell comprising a digitally controlled programmable-gain operational amplifier, an analog-to-digital converter, an electrically erasable PROM, an 8-bit counter and comparator, and supporting logic configured to achieve, in real-time fuzzy systems, high-throughput grade-of-membership or membership-value conversion of multi-input sensor data. The invention provides a flexible multiplexing-capable configuration, implemented entirely in hardware, for effectuating S-, Z-, and PI-membership functions or combinations thereof, based upon fuzzy logic level-set theory. A membership value table storing 'knowledge data' for each of the S-, Z-, and PI-functions is contained within a nonvolatile memory for storing bits of membership and parametric information in a plurality of address spaces. Based upon parametric and control signals, analog sensor data is digitized and converted into grade-of-membership data. In situ learn and recognition modes of operation are also provided.
Results of the JIMO Follow-on Destinations Parametric Studies
NASA Technical Reports Server (NTRS)
Noca, Muriel A.; Hack, Kurt J.
2005-01-01
NASA's proposed Jupiter Icy Moon Orbiter (JIMO) mission, currently in conceptual development, is to be the first of a series of highly capable Nuclear Electric Propulsion (NEP) science driven missions. To understand the implications of a multi-mission capability requirement on the JIMO vehicle and mission, the NASA Prometheus Program initiated a set of parametric high-level studies to be followed by a series of more in-depth studies. The JIMO potential follow-on destinations identified include a Saturn system tour, a Neptune system tour, a Kuiper Belt Objects rendezvous, an Interstellar Precursor mission, a Multiple Asteroid Sample Return and a Comet Sample Return. This paper shows that the baseline JIMO reactor and design envelope can satisfy five out of six of the follow-on destinations. Flight time to these destinations can be reduced significantly by increasing the launch energy and/or by inserting gravity assists into the heliocentric phase.
Do Students Expect Compensation for Wage Risk?
ERIC Educational Resources Information Center
Schweri, Juerg; Hartog, Joop; Wolter, Stefan C.
2011-01-01
We use a unique data set about the wage distribution that Swiss students expect for themselves ex ante, deriving parametric and non-parametric measures to capture expected wage risk. These wage risk measures are unfettered by heterogeneity which handicapped the use of actual market wage dispersion as risk measure in earlier studies. Students in…
Model Adaptation in Parametric Space for POD-Galerkin Models
NASA Astrophysics Data System (ADS)
Gao, Haotian; Wei, Mingjun
2017-11-01
The development of low-order POD-Galerkin models is largely motivated by the expectation that a model developed with a set of parameters at their native values can predict the dynamic behavior of the same system under different parametric values, in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g. a steady state, the mean value of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For a simple case of the flow passing a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For a more complicated case of the flow passing an oscillatory cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries, then more information (e.g. more POD modes) is required to predict the flow under different oscillatory frequencies. Supported by ARL.
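For concreteness, the sketch below builds a POD basis from snapshots of a toy linear system and Galerkin-projects its operator; the operator and snapshot generation are placeholders standing in for flow data such as the cylinder wake, and the example says nothing about the adaptation strategy itself.

```python
import numpy as np

# Minimal POD-Galerkin sketch for a linear system x' = A x.
rng = np.random.default_rng(0)
n, r = 200, 5
A = -np.diag(rng.uniform(0.1, 2.0, n)) + 0.01 * rng.normal(size=(n, n))

# Snapshot matrix: states at several time instants (simple explicit Euler march).
x = rng.normal(size=n)
snapshots = []
for _ in range(100):
    x = x + 0.01 * (A @ x)
    snapshots.append(x.copy())
S = np.array(snapshots).T                      # n x m snapshot matrix

U, s, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :r]                                 # POD basis (first r modes)
A_r = Phi.T @ A @ Phi                          # Galerkin-projected reduced operator
print("reduced operator shape:", A_r.shape)
```

Adapting such a model to a new parameter value amounts to changing A (and hence the snapshots) while trying to keep Phi, which is exactly where the extra information mentioned above (additional modes or mean states) is needed.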
NASA Technical Reports Server (NTRS)
Gersh-Range, Jessica A.; Arnold, William R.; Peck, Mason A.; Stahl, H. Philip
2011-01-01
Since future astrophysics missions require space telescopes with apertures of at least 10 meters, there is a need for on-orbit assembly methods that decouple the size of the primary mirror from the choice of launch vehicle. One option is to connect the segments edgewise using mechanisms analogous to damped springs. To evaluate the feasibility of this approach, a parametric ANSYS model that calculates the mode shapes, natural frequencies, and disturbance response of such a mirror, as well as of the equivalent monolithic mirror, has been developed. This model constructs a mirror using rings of hexagonal segments that are either connected continuously along the edges (to form a monolith) or at discrete locations corresponding to the mechanism locations (to form a segmented mirror). As an example, this paper presents the case of a mirror whose segments are connected edgewise by mechanisms analogous to a set of four collocated single-degree-of-freedom damped springs. The results of a set of parameter studies suggest that such mechanisms can be used to create a 15-m segmented mirror that behaves similarly to a monolith, although fully predicting the segmented mirror performance would require incorporating measured mechanism properties into the model. Keywords: segmented mirror, edgewise connectivity, space telescope
Modeling of second order space charge driven coherent sum and difference instabilities
NASA Astrophysics Data System (ADS)
Yuan, Yao-Shuo; Boine-Frankenheim, Oliver; Hofmann, Ingo
2017-10-01
Second order coherent oscillation modes in intense particle beams play an important role for beam stability in linear or circular accelerators. In addition to the well-known second order even envelope modes and their instability, coupled even envelope modes and odd (skew) modes have recently been shown in [Phys. Plasmas 23, 090705 (2016), 10.1063/1.4963851] to lead to parametric instabilities in periodic focusing lattices with sufficiently different tunes. While this work was partly using the usual envelope equations, partly also particle-in-cell (PIC) simulation, we revisit these modes here and show that the complete set of second order even and odd mode phenomena can be obtained in a unifying approach by using a single set of linearized rms moment equations based on "Chernin's equations." This has the advantage that accurate information on growth rates can be obtained and gathered in a "tune diagram." In periodic focusing we retrieve the parametric sum instabilities of coupled even and of odd modes. The stop bands obtained from these equations are compared with results from PIC simulations for waterbag beams and found to show very good agreement. The "tilting instability" obtained in constant focusing confirms the equivalence of this method with the linearized Vlasov-Poisson system evaluated in second order.
Accounting for misclassification error in retrospective smoking data.
Kenkel, Donald S; Lillard, Dean R; Mathios, Alan D
2004-10-01
Recent waves of major longitudinal surveys in the US and other countries include retrospective questions about the timing of smoking initiation and cessation, creating a potentially important but under-utilized source of information on smoking behavior over the life course. In this paper, we explore the extent of, consequences of, and possible solutions to misclassification errors in models of smoking participation that use data generated from retrospective reports. In our empirical work, we exploit the fact that the National Longitudinal Survey of Youth 1979 provides both contemporaneous and retrospective information about smoking status in certain years. We compare the results from four sets of models of smoking participation. The first set of results is from baseline probit models of smoking participation estimated from contemporaneously reported information. The second set of results is from models that are identical except that the dependent variable is based on retrospective information. The last two sets of results are from models that take a parametric approach to account for a simple form of misclassification error. Our preliminary results suggest that accounting for misclassification error is important. However, the adjusted maximum likelihood estimation approach to account for misclassification does not always perform as expected. Copyright 2004 John Wiley & Sons, Ltd.
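One standard way to fold a simple misclassification mechanism into a probit likelihood is sketched below; this is a generic textbook-style adjustment applied to simulated data, not necessarily the authors' exact estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data: true smoking status follows a probit model, but the reported
# indicator is misclassified with false-positive rate a0 and false-negative rate a1.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta, a0, a1 = np.array([-0.2, 0.8]), 0.05, 0.10
y_true = rng.random(n) < norm.cdf(X @ true_beta)
y_obs = np.where(y_true, rng.random(n) > a1, rng.random(n) < a0).astype(float)

def neg_loglik(theta):
    beta, a0_, a1_ = theta[:2], theta[2], theta[3]
    p = norm.cdf(X @ beta)
    p_obs = (1 - a1_) * p + a0_ * (1 - p)        # P(report smoking = 1)
    p_obs = np.clip(p_obs, 1e-9, 1 - 1e-9)
    return -np.sum(y_obs * np.log(p_obs) + (1 - y_obs) * np.log(1 - p_obs))

res = minimize(neg_loglik, x0=[0.0, 0.5, 0.02, 0.02],
               bounds=[(None, None), (None, None), (0.0, 0.3), (0.0, 0.3)],
               method="L-BFGS-B")
print("estimates (beta0, beta1, a0, a1):", res.x)
```

As the abstract cautions, jointly estimating the misclassification rates and the regression coefficients can be weakly identified, so such adjusted estimators do not always behave as expected.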
A methodology to enable rapid evaluation of aviation environmental impacts and aircraft technologies
NASA Astrophysics Data System (ADS)
Becker, Keith Frederick
Commercial aviation has become an integral part of modern society, enabling unprecedented rapid business, cultural, and personal connectivity on a global scale. In the decades following World War II, passenger travel through commercial aviation quickly grew at a rate of roughly 8% per year globally. The FAA's most recent Terminal Area Forecast predicts growth to continue at a rate of 2.5% domestically, and the market outlooks produced by Airbus and Boeing generally predict growth to continue at a rate of 5% per year globally over the next several decades, which translates into a need for up to 30,000 new aircraft produced by 2025. With such large numbers of new aircraft potentially entering service, any negative consequences of commercial aviation must undergo examination and mitigation by governing bodies so that growth may still be achieved. Options to grow while simultaneously reducing environmental impact include evolution of the commercial fleet through changes in operations, aircraft mix, and technology adoption. Methods to rapidly evaluate fleet environmental metrics are needed to enable decision makers to quickly compare the impact of different scenarios and weigh multiple policy options. As the fleet evolves, interdependencies may emerge in the form of tradeoffs between improvements in different environmental metrics as new technologies are brought into service. In order to include the impacts of these interdependencies on fleet evolution, physics-based modeling is required at the appropriate level of fidelity. Evaluation of environmental metrics in a physics-based manner can be done at the individual aircraft level, but will then not capture aggregate fleet metrics. In contrast, evaluation of environmental metrics at the fleet level is already being done for aircraft in the commercial fleet, but current tools and approaches require enhancement because they capture technology implementation through post-processing, which does not capture physical interdependencies that may arise at the aircraft level. The goal of the work conducted here was to develop a methodology for surrogate fleet approaches that leverage the capability of physics-based aircraft models, together with connectivity to fleet-level analysis tools, to enable rapid evaluation of fuel burn and emissions metrics. Instead of requiring development of an individual physics-based model for each vehicle in the fleet, the surrogate fleet approaches seek to reduce the number of such models needed while still accurately capturing performance of the fleet. By reducing the number of models, both development time and execution time to generate fleet-level results may also be reduced. The initial steps leading to surrogate fleet formulation were a characterization of the commercial fleet into groups based on capability, followed by the selection of a reference vehicle model and a reference set of operations for each group. Next, three potential surrogate fleet approaches were formulated. These approaches include the parametric correction factor approach, in which the results of a reference vehicle model are corrected to match the aggregate results of each group; the average replacement approach, in which a new vehicle model is developed to generate aggregate results of each group; and the best-in-class replacement approach, in which results for a reference vehicle are simply substituted for the entire group.
Once candidate surrogate fleet approaches were developed, they were each applied to and evaluated over the set of reference operations. Then each approach was evaluated for its ability to model variations in operations. Finally, the ability of each surrogate fleet approach to capture implementation of different technology suites, along with corresponding interdependencies between fuel burn and emissions, was evaluated using the concept of a virtual fleet to simulate the technology response of multiple aircraft families. The results of experimentation led to a down-selection to the approach best able to characterize the performance of the commercial fleet rapidly and accurately, judged against the acceptability of current fleet evaluation methods. The parametric correction factor and average replacement approaches were shown to be successful in capturing reference fleet results as well as fleet performance with variations in operations. The best-in-class replacement approach was shown to be unacceptable as a model for the larger fleet in each of the scenarios tested. Finally, the average replacement approach was the only one that was successful in capturing the impact of technologies on a larger fleet. These results are meaningful because they show that it is possible to calculate the fuel burn and emissions of a larger fleet with a reduced number of physics-based models within acceptable bounds of accuracy. At the same time, the physics-based modeling also provides the ability to evaluate the impact of technologies on fleet-level fuel burn and emissions metrics. The value of such a capability is that multiple future fleet scenarios involving changes in both aircraft operations and technology levels may now be rapidly evaluated to inform policy makers of the implications of such changes for fleet-level metrics.
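The parametric correction factor idea can be illustrated with a very small sketch: scale a reference vehicle's per-mission results so that they reproduce a group's known aggregate, then reuse the factor for changed operations. All numbers below are placeholders, not results from the study.

```python
# Reference vehicle fuel burn per mission type (tonnes) and missions flown by the group.
reference_fuel_per_mission = {"short": 4.0, "medium": 9.0, "long": 21.0}
missions_flown = {"short": 1200, "medium": 800, "long": 300}
group_aggregate_fuel = 18500.0        # known aggregate for the group (tonnes)

uncorrected = sum(reference_fuel_per_mission[m] * n for m, n in missions_flown.items())
correction_factor = group_aggregate_fuel / uncorrected

def group_fuel(operations):
    """Estimate group fuel burn for a new set of operations using the fixed factor."""
    return correction_factor * sum(reference_fuel_per_mission[m] * n
                                   for m, n in operations.items())

print("correction factor:", round(correction_factor, 3))
print("fuel for changed operations:", group_fuel({"short": 1000, "medium": 900, "long": 350}))
```

Because the factor is fixed after calibration, this approach tracks operational changes well but, as the results above indicate, it cannot capture technology-induced changes in the reference vehicle's own physics, which is where the average replacement approach is needed.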
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
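As a rough contrast to the analytical treatment above, the sketch below checks a hard constraint over a hyper-rectangular uncertainty set by numerically searching for a worst-case parameter combination; the constraint function is invented for illustration, and unlike the paper's methods this optimization-based check is not analytically verifiable.

```python
import numpy as np
from scipy.optimize import minimize

def g(p):
    # Hypothetical hard constraint: must satisfy g(p) <= 0 for every parameter realization.
    return p[0] ** 2 + 0.5 * p[0] * p[1] - 1.2

# Independently bounded uncertain parameters (a hyper-rectangle).
bounds = [(-1.0, 1.0), (-0.5, 0.5)]

# Search for a worst-case combination (maximize g) from several starting points.
starts = (np.array([-1.0, -0.5]), np.array([1.0, 0.5]), np.zeros(2))
candidates = [minimize(lambda p: -g(p), x0, bounds=bounds).x for x0 in starts]
worst = max(candidates, key=g)
print("worst-case g:", g(worst), "-> hard constraint satisfied:", g(worst) <= 0.0)
```

The transformation-of-variables and infinity-norm formulations described in the paper aim at exactly this kind of worst-case question, but without relying on sampling or local optimization.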
Parent formulation at the Lagrangian level
NASA Astrophysics Data System (ADS)
Grigoriev, Maxim
2011-07-01
The recently proposed first-order parent formalism at the level of equations of motion is specialized to the case of Lagrangian systems. It is shown that for diffeomorphism-invariant theories the parent formulation takes the form of an AKSZ-type sigma model. The proposed formulation can be also seen as a Lagrangian version of the BV-BRST extension of the Vasiliev unfolded approach. We also discuss its possible interpretation as a multidimensional generalization of the Hamiltonian BFV-BRST formalism. The general construction is illustrated by examples of (parametrized) mechanics, relativistic particle, Yang-Mills theory, and gravity.
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
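The extrapolation step can be illustrated with a small curve-fitting sketch: fit a saturating curve to richness estimates obtained at several library sizes and read off its asymptote. The saturating functional form and the data points below are illustrative assumptions, not the authors' exact procedure or values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Richness estimates (e.g. from an ML-based estimator) computed on sub-libraries
# of increasing size; both arrays are made-up placeholders.
library_sizes = np.array([1000, 2000, 4000, 8000, 13001], dtype=float)
richness_estimates = np.array([5200, 7900, 10800, 13400, 15009], dtype=float)

def saturating(n, s_true, k):
    # Michaelis-Menten-like accumulation: approaches s_true as n grows.
    return s_true * n / (k + n)

(s_true, k), _ = curve_fit(saturating, library_sizes, richness_estimates,
                           p0=[20000.0, 5000.0])
print("asymptotic (sample-size-unbiased) richness estimate:", round(s_true))
```

The key point is that the asymptote, rather than the estimate at the largest available library, is taken as the sample-size-unbiased richness value.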
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation.
Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman
2013-10-21
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical (18)F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.
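The Patlak ordinary least squares estimator referenced above can be sketched for a single voxel as a simple linear regression; the time-activity curves below are synthetic placeholders rather than measured data.

```python
import numpy as np

# Patlak graphical analysis: for an irreversible tracer, after equilibration,
# C_t(t)/C_p(t) ~= Ki * (integral of C_p up to t)/C_p(t) + V.
t = np.linspace(1, 60, 24)                      # frame mid-times in minutes
Cp = 10.0 * np.exp(-0.05 * t) + 1.0             # plasma input function (arbitrary units)
Ki_true, V_true = 0.03, 0.4
int_Cp = np.cumsum(Cp) * (t[1] - t[0])          # crude running integral of the input
Ct = Ki_true * int_Cp + V_true * Cp             # synthetic tissue curve for one voxel

x = int_Cp / Cp                                 # Patlak abscissa
y = Ct / Cp                                     # Patlak ordinate
A = np.column_stack([x, np.ones_like(x)])
(Ki_hat, V_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated Ki, V:", Ki_hat, V_hat)
```

The hybrid regression proposed in the paper replaces this plain OLS fit with a correlation-weighted estimator at each voxel, trading a small amount of bias for a better CNR/MSE balance; the sketch above shows only the OLS baseline.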
Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics
Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven
2011-01-01
Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
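A univariate building block of this analysis, a per-biomarker Wilcoxon signed rank test followed by a multiplicity correction, might be sketched as follows; the data are simulated placeholders for 22 subjects and 31 biomarkers, and Benjamini-Hochberg FDR control is used here as one of the options discussed.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

# Paired summaries per subject and biomarker (e.g. baseline vs induced gingivitis).
rng = np.random.default_rng(0)
n_subjects, n_biomarkers = 22, 31
baseline = rng.normal(0, 1, size=(n_subjects, n_biomarkers))
induced = baseline + rng.normal(0.3, 1, size=(n_subjects, n_biomarkers))

p_values = np.array([wilcoxon(induced[:, j], baseline[:, j]).pvalue
                     for j in range(n_biomarkers)])
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print("biomarkers flagged as changed:", np.flatnonzero(reject))
```

The multivariate version described in the paper instead applies a multivariate Wilcoxon signed rank test to a vector of four area-under-the-curve summaries per biomarker, which the simple per-column loop above does not capture.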
Hybrid-Wing-Body Vehicle Composite Fuselage Analysis and Case Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2014-01-01
Recent progress in the structural analysis of a Hybrid Wing-Body (HWB) fuselage concept is presented with the objective of structural weight reduction under a set of critical design loads. This pressurized efficient HWB fuselage design is presently being investigated by the NASA Environmentally Responsible Aviation (ERA) project in collaboration with the Boeing Company, Huntington Beach. The Pultruded Rod-Stiffened Efficient Unitized Structure (PRSEUS) composite concept, developed at the Boeing Company, is approximately modeled for an analytical study and finite element analysis. Stiffened plate linear theories are employed for a parametric case study. Maximum deflection and stress levels are obtained with appropriate assumptions for a set of feasible stiffened panel configurations. An analytical parametric case study is presented to examine the effects of discrete stiffener spacing and skin thickness on structural weight, deflection and stress. A finite-element model (FEM) of an integrated fuselage section with bulkhead is developed for an independent assessment. Stress analysis and scenario based case studies are conducted for design improvement. The specific weight of the improved fuselage concept is computed from the FEM and compared to previous studies, in order to assess the relative weight/strength advantages of this advanced composite airframe technology.
NASA Astrophysics Data System (ADS)
Tresser, Shachar; Dolev, Amit; Bucher, Izhak
2018-02-01
High-speed machinery is often designed to pass several "critical speeds", where vibration levels can be very high. To reduce vibrations, rotors usually undergo a mass balancing process, where the machine is rotated at its full speed range, during which the dynamic response near critical speeds can be measured. High sensitivity, which is required for a successful balancing process, is achieved near the critical speeds, where a single deflection mode shape becomes dominant, and is excited by the projection of the imbalance on it. The requirement to rotate the machine at high speeds is an obstacle in many cases, where it is impossible to perform measurements at high speeds, due to harsh conditions such as high temperatures and inaccessibility (e.g., jet engines). This paper proposes a novel balancing method of flexible rotors, which does not require the machine to be rotated at high speeds. With this method, the rotor is spun at low speeds, while subjecting it to a set of externally controlled forces. The external forces comprise a set of tuned, response dependent, parametric excitations, and nonlinear stiffness terms. The parametric excitation can isolate any desired mode, while keeping the response directly linked to the imbalance. A software controlled nonlinear stiffness term limits the response, hence preventing the rotor to become unstable. These forces warrant sufficient sensitivity required to detect the projection of the imbalance on any desired mode without rotating the machine at high speeds. Analytical, numerical and experimental results are shown to validate and demonstrate the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liangzhe Zhang; Anthony D. Rollett; Timothy Bartel
2012-02-01
A calibrated Monte Carlo (cMC) approach, which quantifies grain boundary kinetics within a generic setting, is presented. The influence of misorientation is captured by adding a scaling coefficient in the spin flipping probability equation, while the contribution of different driving forces is weighted using a partition function. The calibration process relies on the established parametric links between Monte Carlo (MC) and sharp-interface models. The cMC algorithm quantifies microstructural evolution under complex thermomechanical environments and remedies some of the difficulties associated with conventional MC models. After validation, the cMC approach is applied to quantify the texture development of polycrystalline materials with influences of misorientation and inhomogeneous bulk energy across grain boundaries. The results are in good agreement with theory and experiments.
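The scaled flip-probability idea can be sketched as a Metropolis-style rule multiplied by a misorientation-dependent mobility factor; the mobility law and parameters below are illustrative assumptions, not the calibrated coefficients of the cMC model.

```python
import numpy as np

def flip_probability(delta_E, misorientation_deg, kT=0.5, theta_max=15.0):
    """Probability of accepting a Potts spin flip at a grain boundary site.

    A Metropolis acceptance term (driving-force contribution) is scaled by a
    misorientation-dependent mobility factor (low-angle boundaries move slowly).
    The linear mobility ramp is a stand-in for a calibrated law."""
    mobility = min(misorientation_deg / theta_max, 1.0)    # scaling coefficient
    metropolis = 1.0 if delta_E <= 0 else np.exp(-delta_E / kT)
    return mobility * metropolis

print(flip_probability(delta_E=-1.0, misorientation_deg=30.0))  # high-angle boundary, energy drop
print(flip_probability(delta_E=+1.0, misorientation_deg=5.0))   # low-angle boundary, energy rise
```

In the calibrated approach, the weighting of competing driving forces (curvature, stored energy, bulk energy differences) would enter through an additional partition-function term rather than the single delta_E used here.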
NASA Astrophysics Data System (ADS)
Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.
2017-04-01
Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.
NASA Astrophysics Data System (ADS)
Baudrenghien, P.; Mastoridis, T.
2017-01-01
The interaction between beam dynamics and the radio frequency (rf) station in circular colliders is complex and can lead to longitudinal coupled-bunch instabilities at high beam currents. The excitation of the cavity higher order modes is traditionally damped using passive devices. But the wakefield developed at the cavity fundamental frequency falls in the frequency range of the rf power system and can, in theory, be compensated by modulating the generator drive. Such a regulation is the responsibility of the low-level rf (llrf) system that measures the cavity field (or beam current) and generates the rf power drive. The Large Hadron Collider (LHC) rf was designed for the nominal LHC parameter of 0.55 A DC beam current. At 7 TeV the synchrotron radiation damping time is 13 hours. Damping of the instability growth rates due to the cavity fundamental (400.789 MHz) can only come from the synchrotron tune spread (Landau damping) and will be very small (time constant in the order of 0.1 s). In this work, the ability of the present llrf compensation to prevent coupled-bunch instabilities with the planned high luminosity LHC (HiLumi LHC) doubling of the beam current to 1.1 A DC is investigated. The paper conclusions are based on the measured performances of the present llrf system. Models of the rf and llrf systems were developed at the LHC start-up. Following comparisons with measurements, the system was parametrized using these models. The parametric model then provides a more realistic estimation of the instability growth rates than an ideal model of the rf blocks. With this modeling approach, the key rf settings can be varied around their set value allowing for a sensitivity analysis (growth rate sensitivity to rf and llrf parameters). Finally, preliminary measurements from the LHC at 0.44 A DC are presented to support the conclusions of this work.
Parametric and Non-Parametric Vibration-Based Structural Identification Under Earthquake Excitation
NASA Astrophysics Data System (ADS)
Pentaris, Fragkiskos P.; Fouskitakis, George N.
2014-05-01
The problem of modal identification in civil structures is of crucial importance, and thus has been receiving increasing attention in recent years. Vibration-based methods are quite promising as they are capable of identifying the structure's global characteristics, they are relatively easy to implement and they tend to be time effective and less expensive than most alternatives [1]. This paper focuses on the off-line structural/modal identification of civil (concrete) structures subjected to low-level earthquake excitations, under which they remain within their linear operating regime. Earthquakes and their details are recorded and provided by the seismological network of Crete [2], which monitors the broad region of the south Hellenic arc, an active seismic region that functions as a natural laboratory for earthquake engineering of this kind. A sufficient number of seismic events are analyzed in order to reveal the modal characteristics of the structures under study, which consist of the two concrete buildings of the School of Applied Sciences, Technological Education Institute of Crete, located in Chania, Crete, Hellas. Both buildings are equipped with high-sensitivity and high-accuracy seismographs providing acceleration measurements, established at the basement (the structure's foundation), whose records are presently considered as the ground acceleration (excitation), and at all levels (ground floor, 1st floor, 2nd floor and terrace). Further details regarding the instrumentation setup and data acquisition may be found in [3]. The present study invokes stochastic methods, both non-parametric (frequency-based) and parametric, for structural/modal identification (natural frequencies and/or damping ratios). Non-parametric methods include Welch-based spectrum and Frequency Response Function (FRF) estimation, while parametric methods include AutoRegressive (AR), AutoRegressive with eXogenous input (ARX) and AutoRegressive Moving-Average with eXogenous input (ARMAX) models [4, 5]. Preliminary results indicate that parametric methods are capable of adequately providing the structural/modal characteristics such as natural frequencies and damping ratios. The study also aims, at a further level of investigation, to provide a reliable statistically-based methodology for structural health monitoring after major seismic events which potentially cause harmful consequences to structures. Acknowledgments This work was supported by the State Scholarships Foundation of Hellas. References [1] J. S. Sakellariou and S. D. Fassois, "Stochastic output error vibration-based damage detection and assessment in structures under earthquake excitation," Journal of Sound and Vibration, vol. 297, pp. 1048-1067, 2006. [2] G. Hloupis, I. Papadopoulos, J. P. Makris, and F. Vallianatos, "The South Aegean seismological network - HSNC," Adv. Geosci., vol. 34, pp. 15-21, 2013. [3] F. P. Pentaris, J. Stonham, and J. P. Makris, "A review of the state-of-the-art of wireless SHM systems and an experimental set-up towards an improved design," presented at EUROCON 2013, IEEE, Zagreb, 2013. [4] S. D. Fassois, "Parametric Identification of Vibrating Structures," in Encyclopedia of Vibration, S. G. Braun, D. J. Ewins, and S. S. Rao, Eds. London: Academic Press, 2001. [5] S. D. Fassois and J. S. Sakellariou, "Time-series methods for fault detection and identification in vibrating structures," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 365, pp. 411-448, February 15, 2007.
1980-06-01
problems, a parametric model was built which uses the TI-59 programmable calculator as its vehicle. Although the calculator has many disadvantages for... previous experience using the TI-59 programmable calculator. For example, explicit instructions for reading cards into the memory set will not be given
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
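As a concrete illustration of modality-based classification of an ensemble distribution, the sketch below labels one spatial location by the number of Gaussian mixture components preferred by BIC. This is a simplified stand-in for the technique described above, not the authors' algorithm; all names and parameter values are illustrative assumptions.

```python
# Hedged sketch: decide whether the ensemble values at one location look unimodal
# or multimodal by comparing Gaussian-mixture fits of increasing order via BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

def modality_class(samples, max_components=3):
    """Return the number of mixture components preferred by BIC (1 = unimodal)."""
    X = np.asarray(samples, dtype=float).reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X).bic(X)
            for k in range(1, max_components + 1)]
    return int(np.argmin(bics)) + 1

rng = np.random.default_rng(0)
# ensemble of 60 simulation outcomes at one grid location, split into two trends
ensemble_values = np.concatenate([rng.normal(-2.0, 0.5, 30), rng.normal(2.0, 0.5, 30)])
print("preferred number of modes:", modality_class(ensemble_values))  # expect 2
```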
Nonparametric tests for equality of psychometric functions.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2017-12-07
Many empirical studies measure psychometric functions (curves describing how observers' performance varies with stimulus magnitude) because these functions capture the effects of experimental conditions. To assess these effects, parametric curves are often fitted to the data and comparisons are carried out by testing for equality of mean parameter estimates across conditions. This approach is parametric and, thus, vulnerable to violations of the implied assumptions. Furthermore, testing for equality of means of parameters may be misleading: Psychometric functions may vary meaningfully across conditions on an observer-by-observer basis with no effect on the mean values of the estimated parameters. Alternative approaches to assess equality of psychometric functions per se are thus needed. This paper compares three nonparametric tests that are applicable in all situations of interest: The existing generalized Mantel-Haenszel test, a generalization of the Berry-Mielke test that was developed here, and a split variant of the generalized Mantel-Haenszel test also developed here. Their statistical properties (accuracy and power) are studied via simulation and the results show that all tests are indistinguishable as to accuracy but they differ non-uniformly as to power. Empirical use of the tests is illustrated via analyses of published data sets and practical recommendations are given. The computer code in MATLAB and R to conduct these tests is available as Electronic Supplemental Material.
The parametric resonance—from LEGO Mindstorms to cold atoms
NASA Astrophysics Data System (ADS)
Kawalec, Tomasz; Sierant, Aleksandra
2017-07-01
We show an experimental setup based on a popular LEGO Mindstorms set, allowing us to both observe and investigate the parametric resonance phenomenon. The presented method is simple but covers a variety of student activities like embedded software development, conducting measurements, data collection and analysis. It may be used during science shows, as part of student projects and to illustrate the parametric resonance in mechanics or even quantum physics, during lectures or classes. The parametrically driven LEGO pendulum gains energy in a spectacular way, increasing its amplitude from 10° to about 100° within a few tens of seconds. We also provide a short description of a wireless absolute orientation sensor that may be used in quantitative analysis of driven or free pendulum movement.
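A minimal numerical sketch of the parametric resonance mechanism exploited by such a pendulum (not a model of the LEGO hardware itself): a damped pendulum whose support stiffness is modulated near twice the natural frequency grows in amplitude from a small initial angle. All parameter values below are assumptions chosen for illustration.

```python
# Hedged sketch: damped pendulum with a parametric drive at twice its natural
# frequency, the classic condition for parametric resonance.
import numpy as np
from scipy.integrate import solve_ivp

omega0 = 2 * np.pi * 1.0   # natural frequency (rad/s), assumed
beta = 0.02                # damping coefficient, assumed
h = 0.3                    # parametric drive strength, assumed

def rhs(t, y):
    theta, dtheta = y
    # the drive modulates the effective restoring term at 2*omega0
    return [dtheta, -2*beta*dtheta - omega0**2 * (1 + h*np.cos(2*omega0*t)) * np.sin(theta)]

sol = solve_ivp(rhs, (0, 60), [np.radians(10), 0.0], max_step=1e-2)
print("maximum amplitude reached (deg):", np.degrees(np.abs(sol.y[0]).max()))
```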
Generation and subsequent amplification of few-cycle femtosecond pulses from a picosecond pump laser
NASA Astrophysics Data System (ADS)
Mukhin, I. B.; Kuznetsov, I. I.; Palashov, O. V.
2018-04-01
Using a new approach, in which femtosecond pulses as short as a few field cycles are generated directly from the radiation of a picosecond pump laser, pulses with microjoule energy, a 10 kHz repetition rate and a duration below 26 fs are produced in the spectral range 1.3-1.4 μm. The generation scheme employs a method providing passive stabilisation of the phase between the carrier oscillation of the electromagnetic field and its slow envelope. The radiation spectrum was converted into the range of parametric amplification in the BBO crystal by broadband second harmonic generation; the pulse was then parametrically amplified up to the microjoule level and compressed by chirped mirrors to a duration of 28 fs.
Statistical plant set estimation using Schroeder-phased multisinusoidal input design
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
A frequency domain method is developed for plant set estimation. The estimation of a plant 'set' rather than a point estimate is required to support many methods of modern robust control design. The approach here is based on using a Schroeder-phased multisinusoid input design which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error, and many important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace 'hard' bounds presently used in many robust control analysis and synthesis methods.
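To make the input design named above concrete, the sketch below builds a Schroeder-phased multisine that places energy only at chosen DFT bin frequencies; the quadratic phase rule is the standard Schroeder recipe, and the frequency grid, sampling rate and record length are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch of a Schroeder-phased multisine: equal-amplitude tones on a
# DFT frequency grid with Schroeder's phase rule to keep the crest factor low.
import numpy as np

def schroeder_multisine(freqs, fs, n_samples, amplitude=1.0):
    k = np.arange(1, len(freqs) + 1)
    phases = -np.pi * k * (k - 1) / len(freqs)   # Schroeder phase formula
    t = np.arange(n_samples) / fs
    x = sum(amplitude * np.cos(2*np.pi*f*t + p) for f, p in zip(freqs, phases))
    return t, x

fs, N = 1000.0, 4096
freqs = np.arange(1, 51) * fs / N   # energy only at discrete DFT bin frequencies
t, x = schroeder_multisine(freqs, fs, N)
print("crest factor:", np.max(np.abs(x)) / np.sqrt(np.mean(x**2)))
```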
SOFIA: a flexible source finder for 3D spectral line data
NASA Astrophysics Data System (ADS)
Serra, Paolo; Westmeier, Tobias; Giese, Nadine; Jurek, Russell; Flöer, Lars; Popping, Attila; Winkel, Benjamin; van der Hulst, Thijs; Meyer, Martin; Koribalski, Bärbel S.; Staveley-Smith, Lister; Courtois, Hélène
2015-04-01
We introduce SOFIA, a flexible software application for the detection and parametrization of sources in 3D spectral line data sets. SOFIA combines for the first time in a single piece of software a set of new source-finding and parametrization algorithms developed on the way to future H I surveys with ASKAP (WALLABY, DINGO) and APERTIF. It is designed to enable the general use of these new algorithms by the community on a broad range of data sets. The key advantages of SOFIA are the ability to: search for line emission on multiple scales to detect 3D sources in a complete and reliable way, taking into account noise level variations and the presence of artefacts in a data cube; estimate the reliability of individual detections; look for signal in arbitrarily large data cubes using a catalogue of 3D coordinates as a prior; provide a wide range of source parameters and output products which facilitate further analysis by the user. We highlight the modularity of SOFIA, which makes it a flexible package allowing users to select and apply only the algorithms useful for their data and science questions. This modularity makes it also possible to easily expand SOFIA in order to include additional methods as they become available. The full SOFIA distribution, including a dedicated graphical user interface, is publicly available for download.
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
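A minimal sketch of the kind of analysis described above, under simplifying assumptions: simulated cost data, only three of the candidate distributions, and a plain percentile bootstrap rather than the BC(a) variant used in the paper. It illustrates how candidate parametric models can be fitted by maximum likelihood and compared before estimating the mean cost.

```python
# Hedged illustration: fit normal, gamma and log-normal models to skewed cost
# data by maximum likelihood, compare AIC, and bootstrap a CI for the mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
costs = rng.lognormal(mean=7.0, sigma=1.0, size=200)   # simulated right-skewed costs

for name, dist in [("normal", stats.norm), ("gamma", stats.gamma), ("lognormal", stats.lognorm)]:
    params = dist.fit(costs)                            # maximum likelihood fit
    loglik = np.sum(dist.logpdf(costs, *params))
    print(f"{name}: AIC = {2*len(params) - 2*loglik:.1f}")

# plain percentile bootstrap for the population mean cost
boot_means = [rng.choice(costs, size=costs.size, replace=True).mean() for _ in range(2000)]
print("mean cost:", round(costs.mean(), 1), "95% CI:", np.percentile(boot_means, [2.5, 97.5]))
```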
NASA Technical Reports Server (NTRS)
Rosenberg, Leigh; Hihn, Jairus; Roust, Kevin; Warfield, Keith
2000-01-01
This paper presents an overview of a parametric cost model that has been built at JPL to estimate costs of future, deep space, robotic science missions. Due to the recent dramatic changes in JPL business practices brought about by an internal reengineering effort known as develop new products (DNP), high-level historic cost data is no longer considered analogous to future missions. Therefore, the historic data is of little value in forecasting costs for projects developed using the DNP process. This has led to the development of an approach for obtaining expert opinion and also for combining actual data with expert opinion to provide a cost database for future missions. In addition, the DNP cost model uses a maximum of objective cost drivers, which reduces the likelihood of model input error. Version 2 is now under development which expands the model capabilities, links it more tightly with key design technical parameters, and is grounded in more rigorous statistical techniques. The challenges faced in building this model will be discussed, as well as its background, development approach, status, validation, and future plans.
Heralded creation of photonic qudits from parametric down-conversion using linear optics
NASA Astrophysics Data System (ADS)
Yoshikawa, Jun-ichi; Bergmann, Marcel; van Loock, Peter; Fuwa, Maria; Okada, Masanori; Takase, Kan; Toyama, Takeshi; Makino, Kenzo; Takeda, Shuntaro; Furusawa, Akira
2018-05-01
We propose an experimental scheme to generate, in a heralded fashion, arbitrary quantum superpositions of two-mode optical states with a fixed total photon number n based on weakly squeezed two-mode squeezed state resources (obtained via weak parametric down-conversion), linear optics, and photon detection. Arbitrary d-level (qudit) states can be created this way, where d = n + 1. Furthermore, we experimentally demonstrate our scheme for n = 2. The resulting qutrit states are characterized via optical homodyne tomography. We also discuss possible extensions to more than two modes, concluding that, in general, our approach ceases to work in this case. For illustration and with regards to possible applications, we explicitly calculate a few examples such as NOON states and logical qubit states for quantum error correction. In particular, our approach enables one to construct bosonic qubit error-correction codes against amplitude damping (photon loss) with a typical suppression of √n − 1 losses and spanned by two logical codewords that each correspond to an n-photon superposition for two bosonic modes.
NASA Astrophysics Data System (ADS)
1995-03-01
This volume is the third of a 3 volume set that addresses the structural trade study plan that will identify the most suitable structural configuration for an SSTO winged vehicle capable of delivering 25,000 lbs to a 220 nm circular orbit at 51.6 deg inclination. The most suitable Reusable Hydrogen Composite Tank System (RHCTS), and Graphite Composite Tank System (GCPS) composite materials for intertank, wing and thrust structures are identified. Vehicle resizing charts, selection criteria and back-up charts, parametric costing approach and the finite element method analysis are discussed.
Machine learning for many-body physics: The case of the Anderson impurity model
Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; ...
2014-10-31
We applied machine learning methods in order to find the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Furthermore, different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. Our results indicate that a machine learning approach to dynamical mean-field theory may be feasible.
A level set approach for shock-induced α-γ phase transition of RDX
NASA Astrophysics Data System (ADS)
Josyula, Kartik; Rahul; De, Suvranu
2018-02-01
We present a thermodynamically consistent level set approach, based on a regularization energy functional, which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level set approach is efficient, robust, and easy to implement.
Practical statistics in pain research.
Kim, Tae Kyun
2017-10-01
Pain is subjective, while statistics related to pain research are objective. This review was written to help researchers involved in pain research make statistical decisions. The main issues are related to the levels of scales that are often used in pain research, the choice between parametric and nonparametric statistical methods, and problems which arise from repeated measurements. In the field of pain research, parametric statistics have often been applied in an erroneous way. This is closely related to the scales of the data and to repeated measurements. The levels of scales include nominal, ordinal, interval, and ratio scales. The level of scale affects the choice between parametric and non-parametric methods. In the field of pain research, the most frequently used pain assessment scale is the ordinal scale, which would include the visual analogue scale (VAS). There used to be another view, however, which considered the VAS to be an interval or ratio scale, so that the usage of parametric statistics would be accepted practically in some cases. Repeated measurements on the same subjects always complicate statistics. They mean that measurements inevitably have correlations with each other, which precludes the application of one-way ANOVA, in which independence between the measurements is necessary. Repeated-measures ANOVA (RM ANOVA), however, permits the comparison between the correlated measurements as long as the sphericity assumption is satisfied. In conclusion, parametric statistical methods should be used only when the assumptions of parametric statistics, such as normality and sphericity, are established.
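As a small illustration of the parametric/nonparametric choice discussed above, the sketch below applies a paired t-test and its rank-based counterpart to before/after measurements from the same subjects; the VAS-like scores are simulated, not clinical data.

```python
# Hedged sketch: for repeated pain scores from the same subjects, compare a
# paired parametric test with a nonparametric (rank-based) alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
baseline = rng.normal(7.0, 1.0, 30)               # simulated VAS before treatment
followup = baseline - rng.normal(2.0, 1.0, 30)    # simulated VAS after treatment

# Parametric: paired t-test assumes approximately normal within-subject differences
t_stat, p_t = stats.ttest_rel(baseline, followup)
# Nonparametric: Wilcoxon signed-rank test only uses the ranks of the differences
w_stat, p_w = stats.wilcoxon(baseline, followup)
print(f"paired t-test p = {p_t:.3g}, Wilcoxon signed-rank p = {p_w:.3g}")
```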
Multi-Level Building Reconstruction for Automatic Enhancement of High Resolution Dsms
NASA Astrophysics Data System (ADS)
Arefi, H.; Reinartz, P.
2012-07-01
In this article a multi-level approach is proposed for reconstruction-based improvement of high resolution Digital Surface Models (DSMs). The concept of Levels of Detail (LOD) defined by the CityGML standard has been considered as the basis for the abstraction levels of building roof structures. Here, LOD1 and LOD2, which are related to prismatic and parametric roof shapes, are reconstructed. Besides proposing a new approach for automatic LOD1 and LOD2 generation from high resolution DSMs, the algorithm contains two generalization levels, namely horizontal and vertical. Both generalization levels are applied to the prismatic model of buildings. The horizontal generalization allows controlling the approximation level of building footprints, which is similar to the cartographic generalization concept of urban maps. In vertical generalization, the prismatic model is formed using an individual building height and continues to include all flat structures located at different height levels. The concept of LOD1 generation is based on the approximation of the building footprints by rectangular or non-rectangular polygons. For a rectangular building containing one main orientation, a method based on the Minimum Bounding Rectangle (MBR) is employed. In contrast, a Combined Minimum Bounding Rectangle (CMBR) approach is proposed for the regularization of non-rectilinear polygons, i.e. buildings without perpendicular edge directions. Both MBR- and CMBR-based approaches are iteratively employed on building segments to reduce the original building footprints to a minimum number of nodes with maximum similarity to the original shapes. A model-driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed for LOD2 generation. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines. The 3D model is derived for each building part and finally, a complete parametric model is formed by merging all the 3D models of the individual parts and adjusting the nodes after the merging step. In order to provide an enhanced DSM, a surface model is generated for each building by interpolation of the internal points of the generated models. All interpolated models are placed on a Digital Terrain Model (DTM) of the corresponding area to form the enhanced DSM. The proposed DSM enhancement approach has been tested on a dataset from the Munich central area. The original DSM is created using robust stereo matching of Worldview-2 stereo images. A quantitative assessment of the new DSM by comparing the heights of the ridges and eaves shows a standard deviation of better than 50 cm.
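The MBR step for roughly rectangular footprints can be sketched with off-the-shelf geometry tools. The snippet below uses shapely's minimum_rotated_rectangle as a stand-in for the paper's iterative MBR procedure (the CMBR extension is not shown), and the footprint coordinates are invented for illustration.

```python
# Hedged sketch: replace a noisy building footprint with its best-fitting
# oriented rectangle (Minimum Bounding Rectangle), the LOD1 regularization idea.
from shapely.geometry import Polygon

# illustrative, slightly noisy footprint extracted from a DSM segment
footprint = Polygon([(0, 0), (10.2, 0.3), (10.0, 6.1), (5.1, 6.0), (4.9, 5.8), (-0.2, 5.9)])

mbr = footprint.minimum_rotated_rectangle   # best-fitting oriented rectangle
print("original footprint area:", round(footprint.area, 2))
print("MBR area:", round(mbr.area, 2))
print("MBR corners:", list(mbr.exterior.coords))
```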
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: what new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods? Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
Benchmark dose analysis via nonparametric regression modeling
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
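A toy version of the nonparametric step named above is sketched below: invented quantal dose-response data, scikit-learn's isotonic regression, and a crude inversion of the fitted monotone curve at 10% extra risk stand in for the paper's estimator and its bootstrap confidence limits.

```python
# Hedged sketch: isotonic (monotone nondecreasing) fit to quantal dose-response
# data and a rough benchmark-dose read-off at a 10% extra-risk benchmark response.
import numpy as np
from sklearn.isotonic import IsotonicRegression

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
n_animals = np.array([50, 50, 50, 50, 50, 50])
events = np.array([2, 3, 6, 11, 21, 34])
p_hat = events / n_animals                              # observed response proportions

iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
p_fit = iso.fit_transform(doses, p_hat)                 # monotone fitted risks

bmr = 0.10                                              # benchmark response (extra risk)
target = p_fit[0] + bmr * (1 - p_fit[0])                # extra-risk definition
bmd = np.interp(target, p_fit, doses)                   # invert the monotone fit
print("isotonic fit:", np.round(p_fit, 3), "BMD ~", round(bmd, 2))
```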
Yin, Yanchun; Chew, Andrew; Ren, Xiaoming; Li, Jie; Wang, Yang; Wu, Yi; Chang, Zenghu
2017-01-01
We present an approach for both efficient generation and amplification of 4–12 μm pulses by tailoring the phase matching of the nonlinear crystal Zinc Germanium Phosphide (ZGP) in a narrowband-pumped optical parametric chirped pulse amplifier (OPCPA) and a broadband-pumped dual-chirped optical parametric amplifier (DC-OPA), respectively. Preliminary experimental results are obtained for generating 1.8–4.2 μm super broadband spectra, which can be used to seed both the signal of the OPCPA and the pump of the DC-OPA. The theoretical pump-to-idler conversion efficiency reaches 27% in the DC-OPA pumped by a chirped broadband Cr2+:ZnSe/ZnS laser, enabling the generation of Terawatt-level 4–12 μm pulses with an available large-aperture ZGP. Furthermore, the 4–12 μm idler pulses can be compressed to sub-cycle pulses by compensating the tailored positive chirp of the idler pulses using the bulk compressor NaCl, and by indirectly controlling the higher-order idler phase through tuning the signal (2.4–4.0 μm) phase with a commercially available acousto-optic programmable dispersive filter (AOPDF). A similar approach is also described for generating high-energy 4–12 μm sub-cycle pulses via OPCPA pumped by a 2 μm Ho:YLF laser. PMID:28367966
NASA Astrophysics Data System (ADS)
Ray, Anandaroop; Key, Kerry; Bodin, Thomas; Myer, David; Constable, Steven
2014-12-01
We apply a reversible-jump Markov chain Monte Carlo method to sample the Bayesian posterior model probability density function of 2-D seafloor resistivity as constrained by marine controlled source electromagnetic data. This density function of earth models conveys information on which parts of the model space are illuminated by the data. Whereas conventional gradient-based inversion approaches require subjective regularization choices to stabilize this highly non-linear and non-unique inverse problem and provide only a single solution with no model uncertainty information, the method we use entirely avoids model regularization. The result of our approach is an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. We represent models in 2-D using a Voronoi cell parametrization. To make the 2-D problem practical, we use a source-receiver common midpoint approximation with 1-D forward modelling. Our algorithm is transdimensional and self-parametrizing where the number of resistivity cells within a 2-D depth section is variable, as are their positions and geometries. Two synthetic studies demonstrate the algorithm's use in the appraisal of a thin, segmented, resistive reservoir which makes for a challenging exploration target. As a demonstration example, we apply our method to survey data collected over the Scarborough gas field on the Northwest Australian shelf.
Nishiura, Hiroshi
2009-01-01
Determination of the most appropriate quarantine period for those exposed to smallpox is crucial to the construction of an effective preparedness program against a potential bioterrorist attack. This study reanalyzed data on the incubation period distribution of smallpox to allow the optimal quarantine period to be objectively calculated. In total, 131 cases of smallpox were examined; incubation periods were extracted from four different sets of historical data and only cases arising from exposure for a single day were considered. The mean (median and standard deviation (SD)) incubation period was 12.5 (12.0, 2.2) days. Assuming lognormal and gamma distributions for the incubation period, maximum likelihood estimates (and corresponding 95% confidence interval (CI)) of the 95th percentile were 16.4 (95% CI: 15.6, 17.9) and 16.2 (95% CI: 15.5, 17.4) days, respectively. Using a non-parametric method, the 95th percentile point was estimated as 16 (95% CI: 15, 17) days. The upper 95% CIs of the incubation periods at the 90th, 95th and 99th percentiles were shorter than 17, 18 and 23 days, respectively, using both parametric and non-parametric methods. These results suggest that quarantine measures can ensure non-infection among those exposed to smallpox with probabilities higher than 95-99%, if the exposed individuals are quarantined for 18-23 days after the date of contact tracing.
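To make the parametric step above concrete, the sketch below fits a two-parameter log-normal incubation-period distribution by maximum likelihood and reads off the upper percentiles relevant to quarantine length. The data are simulated to roughly match the reported mean and SD; they are not the historical records analysed in the study.

```python
# Hedged sketch: maximum-likelihood log-normal fit to incubation periods and
# upper percentiles used to choose a quarantine length.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
incubation = rng.lognormal(mean=np.log(12.3), sigma=0.18, size=131)  # simulated days

shape, loc, scale = stats.lognorm.fit(incubation, floc=0)  # 2-parameter log-normal
for q in (0.90, 0.95, 0.99):
    print(f"{int(q*100)}th percentile: {stats.lognorm.ppf(q, shape, loc, scale):.1f} days")
```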
NASA Astrophysics Data System (ADS)
DePrince, A. Eugene; Mazziotti, David A.
2010-01-01
The parametric variational two-electron reduced-density-matrix (2-RDM) method is applied to computing electronic correlation energies of medium-to-large molecular systems by exploiting the spatial locality of electron correlation within the framework of the cluster-in-molecule (CIM) approximation [S. Li et al., J. Comput. Chem. 23, 238 (2002); J. Chem. Phys. 125, 074109 (2006)]. The 2-RDMs of individual molecular fragments within a molecule are determined, and selected portions of these 2-RDMs are recombined to yield an accurate approximation to the correlation energy of the entire molecule. In addition to extending CIM to the parametric 2-RDM method, we (i) suggest a more systematic selection of atomic-orbital domains than that presented in previous CIM studies and (ii) generalize the CIM method for open-shell quantum systems. The resulting method is tested with a series of polyacetylene molecules, water clusters, and diazobenzene derivatives in minimal and nonminimal basis sets. Calculations show that the computational cost of the method scales linearly with system size. We also compute hydrogen-abstraction energies for a series of hydroxyurea derivatives. Abstraction of hydrogen from hydroxyurea is thought to be a key step in its treatment of sickle cell anemia; the design of hydroxyurea derivatives that oxidize more rapidly is one approach to devising more effective treatments.
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth
2016-01-01
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize on unseen data. Inference is performed with Markov Chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be analytically computed, we use a Metropolis-Hastings-within-Gibbs framework, according to which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings steps from appropriate candidate-generating densities. We further show that the corresponding Markov Chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
Prediction of forest fires occurrences with area-level Poisson mixed models.
Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo
2015-05-01
The number of fires in forest areas of Galicia (north-west of Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest areas. This work employs area-level Poisson mixed models for treating real data about fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of fires predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia. Copyright © 2015 Elsevier Ltd. All rights reserved.
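A deliberately simplified sketch of the bootstrap idea mentioned above follows: a plain area-level Poisson regression (the random area effects of the actual mixed model are omitted) is fitted to simulated counts, and a parametric bootstrap regenerates counts from the fitted means to gauge the variability of the predictors. All names and values are illustrative.

```python
# Hedged, simplified sketch: area-level Poisson regression (no random effects)
# plus a parametric bootstrap of the predictor's mean squared error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_areas = 40
X = sm.add_constant(rng.uniform(0, 1, size=(n_areas, 2)))   # area-level covariates
beta_true = np.array([0.5, 1.2, -0.8])
y = rng.poisson(np.exp(X @ beta_true))                      # simulated fire counts

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = fit.predict(X)                                      # predicted mean counts

# Parametric bootstrap: regenerate counts from the fitted model, refit, and
# accumulate the squared error of the refitted predictors.
B, sq_err = 200, np.zeros(n_areas)
for _ in range(B):
    y_b = rng.poisson(mu_hat)
    mu_b = sm.GLM(y_b, X, family=sm.families.Poisson()).fit().predict(X)
    sq_err += (mu_b - mu_hat) ** 2
print("bootstrap MSE for the first 5 areas:", np.round(sq_err / B, 3)[:5])
```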
Prampolini, Giacomo; Campetella, Marco; De Mitri, Nicola; Livotto, Paolo Roberto; Cacelli, Ivo
2016-11-08
A robust and automated protocol for the derivation of sound force field parameters, suitable for condensed-phase classical simulations, is here tested and validated on several halogenated hydrocarbons, a class of compounds for which standard force fields have often been reported to deliver rather inaccurate performances. The major strength of the proposed protocol is that all of the parameters are derived only from first principles because all of the information required is retrieved from quantum mechanical data, purposely computed for the investigated molecule. This a priori parametrization is carried out separately for the intra- and intermolecular contributions to the force fields, respectively exploiting the Joyce and Picky programs, previously developed in our group. To avoid high computational costs, all quantum mechanical calculations were performed exploiting the density functional theory. Because the choice of the functional is known to be crucial for the description of the intermolecular interactions, a specific procedure is proposed, which allows for a reliable benchmark of different functionals against higher-level data. The intramolecular and intermolecular contribution are eventually joined together, and the resulting quantum mechanically derived force field is thereafter employed in lengthy molecular dynamics simulations to compute several thermodynamic properties that characterize the resulting bulk phase. The accuracy of the proposed parametrization protocol is finally validated by comparing the computed macroscopic observables with the available experimental counterparts. It is found that, on average, the proposed approach is capable of yielding a consistent description of the investigated set, often outperforming the literature standard force fields, or at least delivering results of similar accuracy.
Lee, Y; Tien, J M
2001-01-01
We present mathematical models that determine the optimal parameters for strategically routing multidestination traffic in an end-to-end network setting. Multidestination traffic refers to a traffic type that can be routed to any one of a multiple number of destinations. A growing number of communication services is based on multidestination routing. In this parameter-driven approach, a multidestination call is routed to one of the candidate destination nodes in accordance with predetermined decision parameters associated with each candidate node. We present three different approaches: (1) a link utilization (LU) approach, (2) a network cost (NC) approach, and (3) a combined parametric (CP) approach. The LU approach provides the solution that would result in an optimally balanced link utilization, whereas the NC approach provides the least expensive way to route traffic to destinations. The CP approach, on the other hand, provides multiple solutions that help leverage link utilization and cost. The LU approach has in fact been implemented by a long distance carrier resulting in a considerable efficiency improvement in its international direct services, as summarized.
Modeling the influence of plate motions on subduction
NASA Astrophysics Data System (ADS)
Hillebrand, Bram; Thieulot, Cedric; van den Berg, Arie; Spakman, Wim
2014-05-01
Subduction zones are widely studied complex geodynamical systems. Their evolution is influenced by a broad range of parameters such as the age of the plates (both subducting and overriding) as well as their rheology, their nature (oceanic or continental), the presence of a crust and the imposed plate motions, to name a few. To investigate the importance of these different parameters on the evolution of subduction we have created a series of 2D numerical thermomechanical subduction models. These subduction models are multi-material flow models containing continental and oceanic crusts, a lithosphere and a mantle. We use the sticky air approach to allow for topography build-up in the model. In order to model multi-material flow in our Eulerian finite element code SEPRAN (Segal and Praagman, 2000) we use the well-benchmarked level set method (Osher and Sethian, 1988) to track the different materials and their mode of deformation through the model domain. To our knowledge the presented results are the first subduction model results with the level set method. We will present preliminary results of our parametric study focusing mainly on the influence of plate motions on the evolution of subduction. S. Osher and J.A. Sethian. Fronts propagating with curvature-dependent speed: Algorithms based on hamilton-jacobi formulations. JCP 1988 A. Segal and N.P. Praagman. The SEPRAN package. Technical report, 2000 This research is funded by The Netherlands Research Centre for Integrated Solid Earth Science (ISES)
Parameter identifiability of linear dynamical systems
NASA Technical Reports Server (NTRS)
Glover, K.; Willems, J. C.
1974-01-01
It is assumed that the system matrices of a stationary linear dynamical system are parametrized by a set of unknown parameters. The question considered here is: when can such a set of unknown parameters be identified from the observed data? Conditions for the local identifiability of a parametrization are derived in three situations: (1) when input/output observations are made, (2) when there exists an unknown feedback matrix in the system and (3) when the system is assumed to be driven by white noise and only output observations are made. Also a sufficient condition for global identifiability is derived.
Formation of parametric images using mixed-effects models: a feasibility study.
Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh
2016-03-01
Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve the voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters including perfusion fraction, pseudo-diffusion coefficient and true diffusion coefficient were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and the parametric image quality can be accordingly improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve the parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
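For reference, the conventional voxel-wise NLLS baseline mentioned above can be sketched as follows for a single simulated voxel. The IVIM signal equation S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D) is standard, but the b-values, noise level and parameter bounds are illustrative assumptions, and the NLME pooling across voxels is not shown.

```python
# Hedged sketch: voxel-wise nonlinear least-squares fit of the IVIM model.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, Dstar, D):
    """Normalized IVIM signal: perfusion fraction f, pseudo-diffusion D*, diffusion D."""
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

b_values = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800], dtype=float)  # s/mm^2
rng = np.random.default_rng(5)
signal = ivim(b_values, 0.1, 0.02, 0.001) + rng.normal(0, 0.01, b_values.size)

popt, _ = curve_fit(ivim, b_values, signal, p0=[0.1, 0.01, 0.001],
                    bounds=([0, 0.003, 0], [0.5, 0.1, 0.003]))
print("estimated f, D*, D =", np.round(popt, 4))
```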
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
1996-01-01
We first report on our current progress in the area of explicit methods for tangent curve computation. The basic idea of this method is to decompose the domain into a collection of triangles (or tetrahedra) and assume linear variation of the vector field over each cell. With this assumption, the equations which define a tangent curve become a system of linear, constant-coefficient ODEs which can be solved explicitly. There are five different representations of the solution depending on the eigenvalues of the Jacobian. The analysis of these five cases is somewhat similar to the phase plane analysis often associated with critical point classification within the context of topological methods, but it is not exactly the same. There are some critical differences. Moving from one cell to the next as a tangent curve is tracked requires the computation of the exit point, which is an intersection of the solution of the constant-coefficient ODE and the edge of a triangle. There are two possible approaches to this root computation problem. We can express the tangent curve in parametric form and substitute into an implicit form for the edge, or we can express the edge in parametric form and substitute into an implicit form of the tangent curve. Normally the solution of a system of ODEs is given in parametric form and so the first approach is the most accessible and straightforward. The second approach requires the 'implicitization' of these parametric curves. The implicitization of parametric curves can often be rather difficult, but in this case we have been successful and have been able to develop algorithms and subsequent computer programs for both approaches. We will give these details along with some comparisons in a forthcoming research paper on this topic.
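The per-cell closed-form solution referred to above can be illustrated compactly: with an affine field v(x) = J x + c inside a cell, the tangent curve through an entry point follows from a matrix exponential of the augmented system d/dt [x; 1] = [[J, c], [0, 0]] [x; 1]. The Jacobian, offset and entry point below are illustrative values, not data from the report.

```python
# Hedged sketch: explicit tangent curve inside one cell of a piecewise-linear
# vector field, via the matrix exponential of the augmented affine system.
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, -1.0], [1.0, -0.2]])   # Jacobian of the linear field in this cell
c = np.array([0.1, 0.0])                   # constant part of the affine field
x0 = np.array([1.0, 0.0])                  # entry point of the tangent curve

A = np.zeros((3, 3))
A[:2, :2], A[:2, 2] = J, c                 # augmented constant-coefficient system

def tangent_curve(t):
    """Closed-form position on the tangent curve at parameter t."""
    return (expm(A * t) @ np.append(x0, 1.0))[:2]

for t in (0.0, 0.5, 1.0):
    print(t, np.round(tangent_curve(t), 4))
```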
Sgr A* Emission Parametrizations from GRMHD Simulations
NASA Astrophysics Data System (ADS)
Anantua, Richard; Ressler, Sean; Quataert, Eliot
2018-06-01
Galactic Center emission near the vicinity of the central black hole, Sagittarius (Sgr) A*, is modeled using parametrizations involving the electron temperature, which is found from general relativistic magnetohydrodynamic (GRMHD) simulations to be highest in the disk-outflow corona. Jet-motivated prescriptions generalizing equipartition of particle and magnetic energies, e.g., by scaling relativistic electron energy density to powers of the magnetic field strength, are also introduced. GRMHD jet (or outflow)/accretion disk/black hole (JAB) simulation postprocessing codes IBOTHROS and GRMONTY are employed in the calculation of images and spectra. Various parametric models reproduce spectral and morphological features, such as the sub-mm spectral bump in electron temperature models and asymmetric photon rings in equipartition-based models. The Event Horizon Telescope (EHT) will provide unprecedentedly high-resolution 230+ GHz observations of the "shadow" around Sgr A*'s supermassive black hole, which the synthetic models presented here will reverse-engineer. Both electron temperature and equipartition-based models can be constructed to be compatible with EHT size constraints for the emitting region of Sgr A*. This program sets the groundwork for devising a unified emission parametrization flexible enough to model disk, corona and outflow/jet regions with a small set of parameters including electron heating fraction and plasma beta.
Marginally specified priors for non-parametric Bayesian estimation
Kessler, David C.; Hoff, Peter D.; Dunson, David B.
2014-01-01
Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813
Parametrically excited helicopter ground resonance dynamics with high blade asymmetries
NASA Astrophysics Data System (ADS)
Sanches, L.; Michon, G.; Berlioz, A.; Alazard, D.
2012-07-01
The present work is aimed at verifying the influence of high asymmetries in the variation of the in-plane lead-lag stiffness of one blade on the ground resonance phenomenon in helicopters. The periodic equations of motion are analyzed using Floquet's Theory (FM) and the boundaries of the instability regions are predicted. The stability chart obtained as a function of the asymmetry parameters and rotor speed reveals a complex evolution of critical zones and the existence of bifurcation points at low rotor speed values. Additionally, it is known that, when treated as parametric excitations, periodic terms may cause parametric resonances in dynamic systems, some of which can become unstable. Therefore, the helicopter is later considered as a parametrically excited system and the equations are treated analytically by applying the Method of Multiple Scales (MMS). A stability analysis is used to verify the existence of unstable parametric resonances with first- and second-order sets of equations. The results are compared and validated with those obtained by Floquet's Theory. Moreover, an explanation is given for the presence of unstable motion at low rotor speeds due to parametric instabilities of the second order.
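The Floquet step used above can be illustrated on a single parametrically excited oscillator rather than the full rotor model: integrate the fundamental solutions over one period to form the monodromy matrix, then test whether any Floquet multiplier exceeds unit magnitude. The Mathieu-type equation and its parameter values below are assumptions chosen only for illustration.

```python
# Hedged sketch: Floquet stability check for a Mathieu-type oscillator by
# building the monodromy matrix over one period of the parametric coefficient.
import numpy as np
from scipy.integrate import solve_ivp

delta, eps = 1.0, 0.4          # illustrative Mathieu parameters
T = 2 * np.pi                  # period of the parametric coefficient

def rhs(t, y):
    x, v = y
    return [v, -(delta + eps * np.cos(t)) * x]

cols = []
for y0 in np.eye(2):           # propagate each unit initial condition over one period
    sol = solve_ivp(rhs, (0, T), y0, rtol=1e-9, atol=1e-12)
    cols.append(sol.y[:, -1])
monodromy = np.column_stack(cols)

multipliers = np.linalg.eigvals(monodromy)
print("Floquet multipliers:", multipliers)
print("unstable:", bool(np.any(np.abs(multipliers) > 1 + 1e-9)))
```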
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.
Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N
2016-01-01
Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on the outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
The detection of pleural effusion using a parametric EIT technique.
Arad, M; Zlochiver, S; Davidson, T; Shoenfeld, Y; Adunsky, A; Abboud, S
2009-04-01
The bioimpedance technique provides a safe, low-cost and non-invasive alternative for routine monitoring of lung fluid levels in patients. In this study we have investigated the feasibility of bioimpedance measurements to monitor pleural effusion (PE) patients. The measurement system (eight-electrode thoracic belt, opposite sequential current injections, 3 mA, 20 kHz) employed a parametric reconstruction algorithm to assess the left and right lung resistivity values. Bioimpedance measurements were taken before and after the removal of pleural fluids, while the patient was sitting at rest during tidal respiration in order to minimize movements of the thoracic cavity. The mean resistivity difference between the lung on the side with PE and the lung on the other side was -48 Ω cm. A high correlation was found between the mean lung resistivity value before the removal of the fluids and the volume of pleural fluids removed, with a sensitivity of -0.17 Ω cm ml⁻¹ (linear regression, R = 0.53). The present study further supports the feasibility and applicability of the bioimpedance technique, and specifically the approach of parametric left and right lung resistivity reconstruction, in monitoring lung patients.
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
2011-01-01
We report a reparameterization of the glycosidic torsion χ of the Cornell et al. AMBER force field for RNA, χOL. The parameters remove destabilization of the anti region found in the ff99 force field and thus prevent formation of spurious ladder-like structural distortions in RNA simulations. They also improve the description of the syn region and the syn–anti balance as well as enhance MD simulations of various RNA structures. Although χOL can be combined with both ff99 and ff99bsc0, we recommend the latter. We do not recommend using χOL for B-DNA because it does not improve upon ff99bsc0 for canonical structures. However, it might be useful in simulations of DNA molecules containing syn nucleotides. Our parametrization is based on high-level QM calculations and differs from conventional parametrization approaches in that it incorporates some previously neglected solvation-related effects (which appear to be essential for obtaining correct anti/high-anti balance). Our χOL force field is compared with several previous glycosidic torsion parametrizations. PMID:21921995
Parametric pendulum based wave energy converter
NASA Astrophysics Data System (ADS)
Yurchenko, Daniil; Alevras, Panagiotis
2018-01-01
The paper investigates the dynamics of a novel wave energy converter based on the parametrically excited pendulum. The herein developed concept of the parametric pendulum allows reducing the influence of the gravity force thereby significantly improving the device performance at a regular sea state, which could not be achieved in the earlier proposed original point-absorber design. The suggested design of a wave energy converter achieves a dominant rotational motion without any additional mechanisms, like a gearbox, or any active control involvement. Presented numerical results of deterministic and stochastic modeling clearly reflect the advantage of the proposed design. A set of experimental results confirms the numerical findings and validates the new design of a parametric pendulum based wave energy converter. Power harvesting potential of the novel device is also presented.
Spectral linewidth preservation in parametric frequency combs seeded by dual pumps.
Tong, Zhi; Wiberg, Andreas O J; Myslivets, Evgeny; Kuo, Bill P P; Alic, Nikola; Radic, Stojan
2012-07-30
We demonstrate a new technique for the generation of programmable-pitch, wideband frequency combs with low phase noise. The comb generation was achieved using a cavity-less, multistage mixer driven by two tunable continuous-wave pump seeds. The approach relies on phase-correlated continuous-wave pumps in order to cancel the spectral linewidth broadening inherent to parametric comb generation. Parametric combs with over 200-nm bandwidth were obtained and characterized with respect to phase noise scaling to demonstrate linewidth preservation over 100 generated tones.
A general approach for predicting the behavior of the Supreme Court of the United States
Bommarito, Michael J.; Blackman, Josh
2017-01-01
Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. To do so, we develop a time-evolving random forest classifier that leverages unique feature engineering to predict more than 240,000 justice votes and 28,000 case outcomes over nearly two centuries (1816-2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on, the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an important advance for the science of quantitative legal prediction and portend a range of other potential applications. PMID:28403140
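For readers who want a feel for this modeling setup, here is a minimal walk-forward (train on the past, test on the next window) random-forest sketch with scikit-learn; the features, target, and data are synthetic stand-ins and bear no relation to the actual feature engineering or accuracy figures reported above.

```python
# Hedged sketch: time-ordered, out-of-sample evaluation of a random forest on
# case-level data, in the spirit of (but much simpler than) the time-evolving
# model described above. Feature names and data are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "term_year": rng.integers(1900, 2016, n),            # decision year
    "justice_ideology": rng.normal(size=n),               # illustrative features
    "issue_area": rng.integers(0, 14, n),
    "lower_court_disposition": rng.integers(0, 2, n),
})
df["reverse"] = (rng.random(n) < 0.6).astype(int)          # synthetic target

accuracies = []
for year in range(1950, 2016, 5):                          # walk forward in time
    train = df[df.term_year < year]
    test = df[(df.term_year >= year) & (df.term_year < year + 5)]
    clf = RandomForestClassifier(n_estimators=200, min_samples_leaf=5, random_state=0)
    clf.fit(train.drop(columns=["reverse", "term_year"]), train["reverse"])
    accuracies.append(clf.score(test.drop(columns=["reverse", "term_year"]), test["reverse"]))

print(f"mean walk-forward accuracy: {np.mean(accuracies):.3f}")
```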
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in-vivo studies
Petibon, Yoann; Rakvongthai, Yothin; Fakhri, Georges El; Ouyang, Jinsong
2017-01-01
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves -TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in-vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans - each containing 1/8th of the total number of events - were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard Ordered Subset Expectation Maximization (OSEM) reconstruction algorithm on one side, and the One-Step Late Maximum a Posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (mL.min−1.mL−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. 
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in-vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843
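A minimal sketch of the kinetic-model step in the indirect route is given below: least-squares fitting of a one-tissue compartment model, C_T(t) = K1 ∫ Cp(s) exp(-k2 (t-s)) ds, to a single synthetic time-activity curve. The spillover terms, the frame weighting, and the direct projection-domain reconstruction described above are all omitted, and the input function and frame times are assumptions.

```python
# Hedged sketch of the *indirect* step only: least-squares fitting of a
# one-tissue compartment model to one myocardial time-activity curve.
# Spillover correction and the direct (projection-domain) reconstruction
# are omitted; the input function and frame timing are synthetic.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.25, 5.0, 16)                        # 16 frame mid-times (min)
cp = 50.0 * t * np.exp(-2.0 * t)                      # toy arterial input function

def one_tissue(t, k1, k2):
    """C_T(t) = K1 * integral_0^t Cp(s) exp(-k2 (t - s)) ds (trapezoid rule)."""
    ct = np.zeros_like(t)
    for i, ti in enumerate(t):
        s = np.linspace(0.0, ti, 200)
        ct[i] = k1 * np.trapz(np.interp(s, t, cp) * np.exp(-k2 * (ti - s)), s)
    return ct

true_k1, true_k2 = 0.8, 0.4                           # ml/min/ml, 1/min
tac = one_tissue(t, true_k1, true_k2) + np.random.default_rng(1).normal(0, 0.2, t.size)

popt, pcov = curve_fit(one_tissue, t, tac, p0=(0.5, 0.5), bounds=(0, 5))
print(f"estimated K1 = {popt[0]:.3f}, k2 = {popt[1]:.3f}")
```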
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.
2017-04-01
Here, we implement a variance-based distance metric (Dn) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ² statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step to establish a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
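The flavor of such a metric can be illustrated with a variance-weighted squared distance over a common grid, as in the sketch below; the exact definition and normalization of Dn in the paper may differ, and all arrays are synthetic.

```python
# Hedged sketch: a generic variance-weighted model-observation distance on a
# common grid, weighting squared differences by combined model and observational
# variances. The paper's Dn statistic may be defined and normalized differently.
import numpy as np

def variance_weighted_distance(model, obs, var_model, var_obs):
    """Sum of squared model-obs differences, each weighted by its combined variance."""
    w = 1.0 / (var_model + var_obs)
    return float(np.nansum(w * (model - obs) ** 2))

rng = np.random.default_rng(3)
shape = (12, 90, 120)                                  # months x lat x lon (toy grid)
obs = rng.random(shape)                                # e.g. sea ice concentration
model_a = obs + rng.normal(0, 0.05, shape)             # configuration A
model_b = obs + rng.normal(0, 0.10, shape) + 0.05      # configuration B (noisier, biased)
var_o = np.full(shape, 0.05 ** 2)
var_m = np.full(shape, 0.05 ** 2)

for name, m in [("A", model_a), ("B", model_b)]:
    print(name, variance_weighted_distance(m, obs, var_m, var_o))
```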
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
Visualizing Spatially Varying Distribution Data
NASA Technical Reports Server (NTRS)
Kao, David; Luo, Alison; Dungan, Jennifer L.; Pang, Alex; Biegel, Bryan A. (Technical Monitor)
2002-01-01
Box plot is a compact representation that encodes the minimum, maximum, mean, median, and quartile information of a distribution. In practice, a single box plot is drawn for each variable of interest. With the advent of more accessible computing power, we are now facing the problem of visualizing data where there is a distribution at each 2D spatial location. Simply extending the box plot technique to distributions over a 2D domain is not straightforward. One challenge is reducing the visual clutter if a box plot is drawn over each grid location in the 2D domain. This paper presents and discusses two general approaches, using parametric statistics and shape descriptors, to present 2D distribution data sets. Both approaches provide additional insights compared to the traditional box plot technique.
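A minimal sketch of the parametric-statistics route: compute per-cell box-plot summaries for an ensemble that carries one distribution per 2D grid location, so that each summary field can be mapped instead of drawing a box plot at every cell; the ensemble below is synthetic.

```python
# Hedged sketch: box-plot summary fields (min, quartiles, median, mean, max) for
# data with one distribution per 2D grid cell. The ensemble here is synthetic.
import numpy as np

rng = np.random.default_rng(7)
ensemble = rng.gamma(shape=2.0, scale=1.0, size=(100, 64, 64))  # 100 samples per cell

summary = {
    "min": ensemble.min(axis=0),
    "q1": np.percentile(ensemble, 25, axis=0),
    "median": np.median(ensemble, axis=0),
    "mean": ensemble.mean(axis=0),
    "q3": np.percentile(ensemble, 75, axis=0),
    "max": ensemble.max(axis=0),
}
# Each entry is a 64x64 field that can be shown with colour, glyphs, or contours
# instead of drawing one cluttered box plot per grid cell.
print({k: v.shape for k, v in summary.items()})
```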
SysSon - A Framework for Systematic Sonification Design
NASA Astrophysics Data System (ADS)
Vogt, Katharina; Goudarzi, Visda; Holger Rutz, Hanns
2015-04-01
SysSon is a research approach on introducing sonification systematically to a scientific community where it is not yet commonly used - e.g., in climate science. Thereby, both technical and socio-cultural barriers have to be overcome. The approach was further developed with climate scientists, who participated in contextual inquiries, usability tests and a workshop of collaborative design. These extensive user tests resulted in our final software framework. As a frontend, a graphical user interface allows climate scientists to parametrize standard sonifications with their own data sets. Additionally, an interactive shell allows users competent in sound design to code new sonifications. The framework is a standalone desktop application, available as open source (for details see http://sysson.kug.ac.at/) and works with data in NetCDF format.
NASA Astrophysics Data System (ADS)
Antokhin, I. I.
2017-06-01
We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, frequently appearing in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretic model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in a combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one, as the errors of input data tend to zero. Simulated and astrophysical examples are presented.
NASA Astrophysics Data System (ADS)
Ern, Manfred; Trinh, Quang Thai; Preusse, Peter; Gille, John C.; Mlynczak, Martin G.; Russell, James M., III; Riese, Martin
2018-04-01
Gravity waves are one of the main drivers of atmospheric dynamics. The spatial resolution of most global atmospheric models, however, is too coarse to properly resolve the small scales of gravity waves, which range from tens to a few thousand kilometers horizontally, and from below 1 km to tens of kilometers vertically. Gravity wave source processes involve even smaller scales. Therefore, general circulation models (GCMs) and chemistry climate models (CCMs) usually parametrize the effect of gravity waves on the global circulation. These parametrizations are very simplified. For this reason, comparisons with global observations of gravity waves are needed for an improvement of parametrizations and an alleviation of model biases. We present a gravity wave climatology based on atmospheric infrared limb emissions observed by satellite (GRACILE). GRACILE is a global data set of gravity wave distributions observed in the stratosphere and the mesosphere by the infrared limb sounding satellite instruments High Resolution Dynamics Limb Sounder (HIRDLS) and Sounding of the Atmosphere using Broadband Emission Radiometry (SABER). Typical distributions (zonal averages and global maps) of gravity wave vertical wavelengths and along-track horizontal wavenumbers are provided, as well as gravity wave temperature variances, potential energies and absolute momentum fluxes. This global data set captures the typical seasonal variations of these parameters, as well as their spatial variations. The GRACILE data set is suitable for scientific studies, and it can serve for comparison with other instruments (ground-based, airborne, or other satellite instruments) and for comparison with gravity wave distributions, both resolved and parametrized, in GCMs and CCMs. The GRACILE data set is available as supplementary data at https://doi.org/10.1594/PANGAEA.879658.
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set the thresholds for extreme precipitations in a large basin. Based on the long term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and the detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show the non-parametric absolute critical value method is easy to use, but unable to reflect the difference of spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution feature of precipitation, but the problem with this method is that the threshold value is sensitive to the size of the rainfall data series and subject to the selection of a percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitations by fitting extreme precipitation distributions with probability distribution functions; however, selections of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although involving complicated computational processes, has proven to be the most appropriate method that is able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) for the daily precipitation further proves that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
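For a single station, the percentile and parametric routes can be contrasted as in the sketch below; the gamma distribution, the 1 mm wet-day cutoff, the 95th-percentile level, and the synthetic rainfall series are all assumptions, and the DFA-based thresholding is not reproduced.

```python
# Hedged sketch: non-parametric percentile threshold vs. a parametric (gamma)
# fit for extreme precipitation at one station. The daily series is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
rain = rng.gamma(shape=0.6, scale=12.0, size=30 * 365)     # 30 years of daily totals (mm)
wet = rain[rain > 1.0]                                     # wet days only

# Non-parametric: the 95th percentile of wet-day precipitation
thr_percentile = np.percentile(wet, 95)

# Parametric: fit a gamma distribution and take its 95th quantile
a, loc, scale = stats.gamma.fit(wet, floc=0.0)
thr_gamma = stats.gamma.ppf(0.95, a, loc=loc, scale=scale)

print(f"percentile threshold: {thr_percentile:.1f} mm, gamma-fit threshold: {thr_gamma:.1f} mm")
```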
Zilverstand, Anna; Sorger, Bettina; Kaemingk, Anita; Goebel, Rainer
2017-06-01
We employed a novel parametric spider picture set in the context of a parametric fMRI anxiety provocation study, designed to tease apart brain regions involved in threat monitoring from regions representing an exaggerated anxiety response in spider phobics. For the stimulus set, we systematically manipulated perceived proximity of threat by varying a depicted spider's context, size, and posture. All stimuli were validated in a behavioral rating study (phobics n = 20; controls n = 20; all female). An independent group participated in a subsequent fMRI anxiety provocation study (phobics n = 7; controls n = 7; all female), in which we compared a whole-brain categorical to a whole-brain parametric analysis. Results demonstrated that the parametric analysis provided a richer characterization of the functional role of the involved brain networks. In three brain regions (the mid insula, the dorsal anterior cingulate, and the ventrolateral prefrontal cortex), activation was linearly modulated by perceived proximity specifically in the spider phobia group, indicating a quantitative representation of an exaggerated anxiety response. In other regions (e.g., the amygdala), activation was linearly modulated in both groups, suggesting a functional role in threat monitoring. Prefrontal regions, such as the dorsolateral prefrontal cortex, were activated during anxiety provocation but did not show a stimulus-dependent linear modulation in either group. The results confirm that brain regions involved in anxiety processing hold a quantitative representation of a pathological anxiety response and more generally suggest that parametric fMRI designs may be a very powerful tool for clinical research in the future, particularly when developing novel brain-based interventions (e.g., neurofeedback training). Hum Brain Mapp 38:3025-3038, 2017. © 2017 Wiley Periodicals, Inc.
Yan, Chao-Gan; Craddock, R. Cameron; Zuo, Xi-Nian; Zang, Yu-Feng; Milham, Michael P.
2014-01-01
As researchers increase their efforts to characterize variations in the functional connectome across studies and individuals, concerns about the many sources of nuisance variation present and their impact on resting state fMRI (R-fMRI) measures continue to grow. Although substantial within-site variation can exist, efforts to aggregate data across multiple sites such as the 1000 Functional Connectomes Project (FCP) and International Neuroimaging Data-sharing Initiative (INDI) datasets amplify these concerns. The present work draws upon standardization approaches commonly used in the microarray gene expression literature, and to a lesser extent recent imaging studies, and compares them with respect to their impact on relationships between common R-fMRI measures and nuisance variables (e.g., imaging site, motion), as well as phenotypic variables of interest (age, sex). Standardization approaches differed with regard to whether they were applied post-hoc vs. during pre-processing, and at the individual vs. group level; additionally they varied in whether they addressed additive effects vs. additive + multiplicative effects, and were parametric vs. non-parametric. While all standardization approaches were effective at reducing undesirable relationships with nuisance variables, post-hoc approaches were generally more effective than global signal regression (GSR). Across approaches, correction for additive effects (global mean) appeared to be more important than for multiplicative effects (global SD) for all R-fMRI measures, with the exception of amplitude of low frequency fluctuations (ALFF). Group-level post-hoc standardizations for mean-centering and variance-standardization were found to be advantageous in their ability to avoid the introduction of artifactual relationships with standardization parameters; though results between individual and group-level post-hoc approaches were highly similar overall. While post-hoc standardization procedures drastically increased test-retest (TRT) reliability for ALFF, modest reductions were observed for other measures after post-hoc standardizations, a phenomenon likely attributable to the separation of voxel-wise from global differences among subjects (global mean and SD demonstrated moderate TRT reliability for these measures). Finally, the present work calls into question previous observations of increased anatomical specificity for GSR over mean centering, and draws attention to the near equivalence of global and gray matter signal regression. PMID:23631983
Extraction of ozone and chlorophyll-A distribution from AVIRIS data
NASA Technical Reports Server (NTRS)
Schaepman, M.; Itten, K. I.; Schlaepfer, D.; Kurer, U.; Veraguth, S.; Keller, J.
1995-01-01
The potential of airborne imaging spectrometry for assessing and monitoring natural resources is studied. Therefore, an AVIRIS scene of NASA's MacEurope 1991 campaign - acquired in Central Switzerland - is used. The test site consists of an urban area, the Lake Zug with its surrounding fields, the Rigi mountain in the center of the test site, and the Lake of Four Cantons. The region is covered by the AVIRIS flight #910705, runs 6 and 7 of the NASA ER-2 aircraft, resulting in an average nominal pixel size of about 18 m. Simultaneously with the ER-2 overflight, spectroradiometric measurements were taken at various locations. Preselected reference targets were measured in the field with a GER Mark V spectroradiometer, and radiance measurements of the lake were taken with a Li-Cor LI 1800UW spectroradiometer below and above the water surface. A comprehensive meteorological data set was obtained by joining the POLLUMET experiment, which carried out measurements to investigate the summer smog in Switzerland on the same day. The quality assessment for the actual data set can be found in detail in Meyer et al. A parametric approach calculating the location of the airplane was used to simulate the observation geometry. This parametric preprocessing procedure, which takes care of the effects of flight line and attitude variations as well as the pixel-by-pixel topographic corrections, is described in Meyer.
Fitting the constitution type Ia supernova data with the redshift-binned parametrization method
NASA Astrophysics Data System (ADS)
Huang, Qing-Guo; Li, Miao; Li, Xiao-Dong; Wang, Shuang
2009-10-01
In this work, we explore the cosmological consequences of the recently released Constitution sample of 397 Type Ia supernovae (SNIa). By revisiting the Chevallier-Polarski-Linder (CPL) parametrization, we find that, for fitting the Constitution set alone, the behavior of dark energy (DE) significantly deviates from the cosmological constant Λ, where the equation of state (EOS) w and the energy density ρΛ of DE rapidly decrease with increasing redshift z. Inspired by this clue, we separate the redshifts into different bins, and discuss the models of a constant w or a constant ρΛ in each bin, respectively. It is found that for fitting the Constitution set alone, w and ρΛ will also rapidly decrease with increasing z, which is consistent with the result of the CPL model. Moreover, a step function model in which ρΛ rapidly decreases at redshift z˜0.331 presents a significant improvement (Δχ2=-4.361) over the CPL parametrization, and performs better than other DE models. We also plot the error bars of the DE density for this model, and find that this model deviates from the cosmological constant Λ at the 68.3% confidence level (CL); this may arise from some biasing systematic errors in the handling of SNIa data, or more interestingly from the nature of DE itself. In addition, for models with the same number of redshift bins, a piecewise constant ρΛ model always performs better than a piecewise constant w model; this shows the advantage of using ρΛ, instead of w, to probe the variation of DE.
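The redshift-binned construction can be sketched for a flat universe with a piecewise-constant dark-energy density ratio: integrate 1/E(z) for the luminosity distance and compare chi-square values for different bin amplitudes. The bin edges, cosmological parameters, and synthetic supernova sample below are assumptions and do not reproduce the Constitution-set fit.

```python
# Hedged sketch: chi-square comparison of a binned dark-energy density model
# against supernova distance moduli, assuming flatness. All numbers are toy values.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458
H0, OMEGA_M = 70.0, 0.28

def rho_de_ratio(z, edges, f):
    """Piecewise-constant rho_Lambda(z)/rho_Lambda,0 over redshift bins."""
    return f[np.searchsorted(edges, z, side="right") - 1]

def mu_model(z_arr, edges, f):
    def inv_e(z):
        return 1.0 / np.sqrt(OMEGA_M * (1 + z) ** 3 + (1 - OMEGA_M) * rho_de_ratio(z, edges, f))
    mu = []
    for z in z_arr:
        dc = C_KM_S / H0 * quad(inv_e, 0.0, z)[0]       # comoving distance (Mpc)
        dl = (1 + z) * dc                               # luminosity distance
        mu.append(5.0 * np.log10(dl) + 25.0)            # distance modulus
    return np.array(mu)

edges = np.array([0.0, 0.33, 2.0])                       # two redshift bins
rng = np.random.default_rng(5)
z_sn = np.sort(rng.uniform(0.02, 1.5, 200))
mu_obs = mu_model(z_sn, edges, np.array([1.0, 1.0])) + rng.normal(0, 0.15, z_sn.size)
sigma = np.full(z_sn.size, 0.15)

for f2 in (1.0, 0.5):                                    # density ratio in the high-z bin
    chi2 = np.sum(((mu_obs - mu_model(z_sn, edges, np.array([1.0, f2]))) / sigma) ** 2)
    print(f"rho_DE ratio beyond z=0.33: {f2:.1f}  chi2 = {chi2:.1f}")
```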
NASA Astrophysics Data System (ADS)
Peng, Machuan; Xie, Lian; Pietrafesa, Leonard J.
The asymmetry of tropical cyclone induced maximum coastal sea level rise (positive surge) and fall (negative surge) is studied using a three-dimensional storm surge model. It is found that the negative surge induced by offshore winds is more sensitive to wind speed and direction changes than the positive surge induced by onshore winds. As a result, negative surge is inherently more difficult to forecast than positive surge, since there is uncertainty in tropical storm wind forecasts. The asymmetry of negative and positive surge under parametric wind forcing is more apparent in shallow water regions. For tropical cyclones with fixed central pressure, the surge asymmetry increases with decreasing storm translation speed. For those with the same translation speed, a weaker tropical cyclone is expected to gain a higher AI (asymmetry index) value even though its induced maximum surge and fall are smaller. With fixed RMW (radius of maximum wind), the relationship between central pressure and AI is heterogeneous and depends on the value of RMW. A tropical cyclone's wind inflow angle can also affect surge asymmetry. A set of idealized cases as well as two historic tropical cyclones are used to illustrate the surge asymmetry.
Hartzell, S.
1989-01-01
The July 8, 1986, North Palm Springs earthquake is used as a basis for comparison of several different approaches to the solution for the rupture history of a finite fault. The inversion of different waveform data is considered: both teleseismic P waveforms and local strong ground motion records. Linear parametrizations for slip amplitude are compared with nonlinear parametrizations for both slip amplitude and rupture time. Inversions using both synthetic and empirical Green's functions are considered. In general, accurate Green's functions are more readily calculable for the teleseismic problem, where simple ray theory and flat-layered velocity structures are usually sufficient. However, uncertainties in the variation in t* with frequency most limit the resolution of teleseismic inversions. A set of empirical Green's functions that are well recorded at teleseismic distances could avoid the uncertainties in attenuation. In the inversion of strong motion data, the accurate calculation of propagation path effects other than attenuation effects is the limiting factor in the resolution of source parameters.
Circuit theory and model-based inference for landscape connectivity
Hanks, Ephraim M.; Hooten, Mevin B.
2013-01-01
Circuit theory has seen extensive recent use in the field of ecology, where it is often applied to study functional connectivity. The landscape is typically represented by a network of nodes and resistors, with the resistance between nodes a function of landscape characteristics. The effective distance between two locations on a landscape is represented by the resistance distance between the nodes in the network. Circuit theory has been applied to many other scientific fields for exploratory analyses, but parametric models for circuits are not common in the scientific literature. To model circuits explicitly, we demonstrate a link between Gaussian Markov random fields and contemporary circuit theory using a covariance structure that induces the necessary resistance distance. This provides a parametric model for second-order observations from such a system. In the landscape ecology setting, the proposed model provides a simple framework where inference can be obtained for effects that landscape features have on functional connectivity. We illustrate the approach through a landscape genetics study linking gene flow in alpine chamois (Rupicapra rupicapra) to the underlying landscape.
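The resistance distance at the heart of this link can be computed directly from the graph Laplacian, as in the sketch below; the toy landscape graph and its conductances are invented for illustration.

```python
# Hedged sketch: effective (resistance) distances between nodes of a landscape
# graph, from the pseudoinverse of the graph Laplacian. The landscape-to-
# conductance mapping here is a made-up illustration.
import numpy as np

def resistance_distance(conductance):
    """All-pairs resistance distances for a weighted graph.

    conductance[i, j] > 0 is the conductance of the resistor joining nodes i, j.
    Uses R_ij = L+_ii + L+_jj - 2 L+_ij, with L+ the Laplacian pseudoinverse.
    """
    laplacian = np.diag(conductance.sum(axis=1)) - conductance
    l_plus = np.linalg.pinv(laplacian)
    d = np.diag(l_plus)
    return d[:, None] + d[None, :] - 2.0 * l_plus

# Toy 4-node landscape: a chain 0-1-2-3 plus a weak (high-resistance) shortcut 0-3
cond = np.zeros((4, 4))
for i, j, g in [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 0.1)]:
    cond[i, j] = cond[j, i] = g
print(np.round(resistance_distance(cond), 3))
```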
Self-organising mixture autoregressive model for non-stationary time series modelling.
Ni, He; Yin, Hujun
2008-12-01
Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili
2014-03-01
Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Liao, Meng; To, Quy-Dong; Léonard, Céline; Monchiet, Vincent
2018-03-01
In this paper, we use the molecular dynamics simulation method to study gas-wall boundary conditions. Discrete scattering information of gas molecules at the wall surface is obtained from collision simulations. The collision data can be used to identify the accommodation coefficients for parametric wall models such as the Maxwell and Cercignani-Lampis scattering kernels. Since these scattering kernels are based on a limited number of accommodation coefficients, we also adopt non-parametric statistical methods to construct the kernel and overcome this limitation. Different from parametric kernels, the non-parametric kernels require no parameters (i.e. accommodation coefficients) and no predefined distribution. We also propose approaches to derive directly the Navier friction and Kapitza thermal resistance coefficients, as well as other interface coefficients associated with moment equations, from the non-parametric kernels. The methods are applied successfully to systems composed of CH4 or CO2 and graphite, which are of interest to the petroleum industry.
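As a sketch of how accommodation coefficients for a Maxwell-type parametric kernel can be read off from collision statistics, the snippet below applies the standard textbook definitions of the tangential-momentum and energy accommodation coefficients to synthetic incident/reflected data; the molecular mass, wall temperature, and collision table are assumptions, not the CH4/CO2-graphite results.

```python
# Hedged sketch: estimating Maxwell-model accommodation coefficients from a
# table of incident/reflected molecular velocities, as one would obtain from
# collision simulations. All data, the molecular mass, and the wall temperature
# are synthetic; only the standard definitions are applied.
import numpy as np

KB = 1.380649e-23   # J/K
M = 2.66e-26        # kg, roughly one CH4-like molecule (illustrative)
T_WALL = 300.0

rng = np.random.default_rng(9)
n = 20000
vt_in = rng.normal(300.0, 80.0, n)                      # tangential velocity before collision (m/s)
vt_out = 0.3 * vt_in + rng.normal(0.0, 120.0, n)        # after collision (partly accommodated)
e_in = 0.5 * M * rng.normal(500.0, 100.0, n) ** 2       # kinetic energy before collision (J)
e_out = 0.6 * e_in + 0.4 * 2.0 * KB * T_WALL            # after collision (toy mixing)
e_wall = 2.0 * KB * T_WALL                              # mean energy of a fully accommodated flux

# Tangential momentum accommodation: sigma = (<p_in> - <p_out>) / <p_in>
sigma_t = (vt_in.mean() - vt_out.mean()) / vt_in.mean()
# Energy accommodation: alpha = (<E_in> - <E_out>) / (<E_in> - E_wall)
alpha_e = (e_in.mean() - e_out.mean()) / (e_in.mean() - e_wall)
print(f"sigma_t ~ {sigma_t:.2f}, alpha_E ~ {alpha_e:.2f}")
```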
Parametric amplification in MoS2 drum resonator.
Prasad, Parmeshwar; Arora, Nishta; Naik, A K
2017-11-30
Parametric amplification is widely used in diverse areas from optics to electronic circuits to enhance low level signals by varying relevant system parameters. Parametric amplification has also been performed in several micro-nano resonators including nano-electromechanical system (NEMS) resonators based on a two-dimensional (2D) material. Here, we report the enhancement of mechanical response in a MoS2 drum resonator using degenerate parametric amplification. We use parametric pumping to modulate the spring constant of the MoS2 resonator and achieve a 10 dB amplitude gain. We also demonstrate quality factor enhancement in the resonator with parametric amplification. We investigate the effect of cubic nonlinearity on parametric amplification and show that it limits the gain of the mechanical resonator. Amplifying ultra-small displacements at room temperature and understanding the limitations of the amplification in these devices is key for using these devices for practical applications.
Evaluation of portfolio credit risk based on survival analysis for progressive censored data
NASA Astrophysics Data System (ADS)
Jaber, Jamil J.; Ismail, Noriszura; Ramli, Siti Norafidah Mohd
2017-04-01
In credit risk management, the Basel committee provides a choice of three approaches to the financial institutions for calculating the required capital: the standardized approach, the Internal Ratings-Based (IRB) approach, and the Advanced IRB approach. The IRB approach is usually preferred compared to the standard approach due to its higher accuracy and lower capital charges. This paper uses several parametric models (exponential, log-normal, gamma, Weibull, log-logistic, Gompertz) to evaluate the credit risk of the corporate portfolio in the Jordanian banks based on the monthly sample collected from January 2010 to December 2015. The best model is selected using several goodness-of-fit criteria (MSE, AIC, BIC). The results indicate that the Gompertz distribution is the best parametric model for the data.
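The model-selection step can be sketched as follows: fit the candidate lifetime distributions and rank them by AIC. Using generic scipy distributions on synthetic, uncensored durations is a simplification of the censored survival setting in the paper.

```python
# Hedged sketch: fitting candidate lifetime distributions and ranking by AIC.
# Synthetic, uncensored durations stand in for the progressively censored data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
durations = rng.weibull(1.4, 400) * 24.0            # synthetic times-to-default (months)

candidates = {
    "exponential": stats.expon,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "log-logistic": stats.fisk,
    "gompertz": stats.gompertz,
}

for name, dist in candidates.items():
    params = dist.fit(durations, floc=0.0)
    k = len(params) - 1                              # loc is fixed at 0, not a free parameter
    loglik = np.sum(dist.logpdf(durations, *params))
    aic = 2 * k - 2 * loglik
    print(f"{name:>12s}: AIC = {aic:.1f}")
```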
A Scalable Framework For Segmenting Magnetic Resonance Images
Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar
2009-01-01
A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
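For reference, the batch fuzzy c-means updates that the incremental variants build on are sketched below on synthetic one-dimensional intensities; this is not the single-pass/incremental algorithm, the inhomogeneity correction, or the smoothing used in the framework itself.

```python
# Hedged sketch of standard (batch) fuzzy c-means: alternating centroid and
# membership updates with fuzzifier m. The intensity data are synthetic.
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, n_iter=50, seed=0):
    """x: (n_samples, n_features). Returns (centroids, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))             # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        w = u ** m
        centroids = (w.T @ x) / w.sum(axis=0)[:, None]     # membership-weighted means
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                 # u_ik proportional to d_ik^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)                  # renormalise over clusters
    return centroids, u

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(mu, 0.05, (200, 1)) for mu in (0.2, 0.5, 0.8)])
centroids, u = fuzzy_c_means(x, c=3)
print(np.sort(centroids.ravel()))
```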
Uncertainties related to the representation of momentum transport in shallow convection
NASA Astrophysics Data System (ADS)
Schlemmer, Linda; Bechtold, Peter; Sandu, Irina; Ahlgrimm, Maike
2017-04-01
The vertical transport of horizontal momentum by convection has an important impact on the general circulation of the atmosphere as well as on the life cycle and track of cyclones. So far convective momentum transport (CMT) has mostly been studied for deep convection, whereas little is known about its characteristics and importance in shallow convection. In this study CMT by shallow convection is investigated by analyzing both data from large-eddy simulations (LES) and simulations performed with the Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). In addition, the central terms underlying the bulk mass-flux parametrization of CMT are evaluated offline. Further, the uncertainties related to the representation of CMT are explored by running the stochastically perturbed parametrizations (SPP) approach of the IFS. The analyzed cases exhibit shallow convective clouds developing within considerable low-level wind shear. Analysis of the momentum fluxes in the LES data reveals significant momentum transport by the convection in both cases, which is directed down-gradient despite substantial organization of the cloud field. A detailed inspection of the convection parametrization reveals a very good representation of the entrainment and detrainment rates and an appropriate representation of the convective mass and momentum fluxes. Determining the correct values of the mass flux and in-cloud momentum at cloud base in the parametrization nevertheless remains challenging. The spread in convection-related quantities generated by the SPP is reasonable and addresses many of the identified uncertainties.
Continuous-wave optical parametric oscillators on their way to the terahertz range
NASA Astrophysics Data System (ADS)
Sowade, Rosita; Breunig, Ingo; Kiessling, Jens; Buse, Karsten
2010-02-01
Continuous-wave optical parametric oscillators (OPOs) are known to be workhorses for spectroscopy in the near- and mid-infrared. However, strong absorption in nonlinear media like lithium niobate complicates the generation of far-infrared light. This absorption leads to pump thresholds vastly exceeding the power of standard pump lasers. Our first approach was, therefore, to combine the established technique of photomixing with optical parametric oscillators. Here, two OPOs provide one wave each, with a tunable difference frequency. These waves are combined to a beat signal as a source for photomixers. Terahertz radiation between 0.065 and 1.018 THz is generated with powers in the order of nanowatts. To overcome the upper frequency limit of the opto-electronic photomixers, terahertz generation has to rely entirely on optical methods. Our all-optical approach, getting around the high thresholds for terahertz generation, is based on cascaded nonlinear processes: the resonantly enhanced signal field, generated in the primary parametric process, is intense enough to act as the pump for a secondary process, creating idler waves with frequencies in the terahertz regime. The latter are monochromatic and tunable, with detected powers of more than 2 μW at 1.35 THz. Thus, continuous-wave optical parametric oscillators have entered the field of terahertz photonics.
NASA Astrophysics Data System (ADS)
Braun, David J.; Sutas, Andrius; Vijayakumar, Sethu
2017-01-01
Theory predicts that parametrically excited oscillators, tuned to operate under resonant conditions, are capable of large-amplitude oscillation useful in diverse applications, such as signal amplification, communication, and analog computation. However, due to amplitude saturation caused by nonlinearity, lack of robustness to model uncertainty, and limited sensitivity to parameter modulation, these oscillators require fine-tuning and strong modulation to generate robust large-amplitude oscillation. Here we present a principle of self-tuning parametric feedback excitation that alleviates the above-mentioned limitations. This is achieved using a minimalistic control implementation that performs (i) self-tuning (slow parameter adaptation) and (ii) feedback pumping (fast parameter modulation), without sophisticated signal processing of past observations. The proposed approach provides near-optimal amplitude maximization without requiring model-based control computation, previously perceived as inevitable for implementing optimal control principles in practical applications. Experimental implementation of the theory shows that the oscillator tunes itself near to the onset of dynamic bifurcation to achieve extreme sensitivity to small resonant parametric perturbations. As a result, it achieves large-amplitude oscillations by capitalizing on the effect of nonlinearity, despite substantial model uncertainties and strong unforeseen external perturbations. We envision the present finding to provide an effective and robust approach to parametric excitation when it comes to real-world application.
Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations
NASA Technical Reports Server (NTRS)
Mantz, A.; Allen, S. W.
2011-01-01
Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.
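For context, the hydrostatic mass estimate that both the fully parametric and non-parametric routes ultimately feed is the standard spherically symmetric expression (a reminder, not a result of the paper):

```latex
% Hydrostatic-equilibrium mass within radius r (spherical symmetry assumed);
% rho_g and T denote the ICM gas density and temperature profiles.
M(<r) \;=\; -\,\frac{k_{\mathrm{B}}\, T(r)\, r}{G\, \mu m_{\mathrm{p}}}
\left( \frac{\mathrm{d}\ln \rho_{\mathrm{g}}}{\mathrm{d}\ln r}
     + \frac{\mathrm{d}\ln T}{\mathrm{d}\ln r} \right)
```

Choosing fully parametric forms for the gas density and temperature profiles constrains the possible behavior of the two logarithmic derivatives, which is where an implicit prior on the derived masses, and hence on scaling-relation slopes, can enter.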
A mixture model-based approach to the clustering of microarray expression data.
McLachlan, G J; Bean, R W; Peel, D
2002-03-01
This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
An appraisal of statistical procedures used in derivation of reference intervals.
Ichihara, Kiyoshi; Boyd, James C
2010-11-01
When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and to adjust for multiple factors. Box-Cox power transformation often has been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method, based on determination of the 2.5th and 97.5th percentiles after sorting the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
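A minimal sketch of the two RI calculations contrasted above, assuming a synthetic skewed analyte sample; the two-parameter Box-Cox variant with a shifted origin mentioned in the abstract is omitted for brevity, and all names and values are illustrative:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)
values = rng.lognormal(mean=1.0, sigma=0.4, size=400)   # skewed synthetic analyte values

# Non-parametric RI: 2.5th and 97.5th percentiles of the data.
np_lower, np_upper = np.percentile(values, [2.5, 97.5])

# Parametric RI: Box-Cox transform toward a Gaussian, take mean +/- 1.96 SD,
# then back-transform the limits to the original measurement scale.
z, lam = stats.boxcox(values)
mu, sd = z.mean(), z.std(ddof=1)
p_lower, p_upper = inv_boxcox(mu - 1.96 * sd, lam), inv_boxcox(mu + 1.96 * sd, lam)

print(f"non-parametric RI: ({np_lower:.2f}, {np_upper:.2f})")
print(f"parametric RI:     ({p_lower:.2f}, {p_upper:.2f})")
```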
Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm.
Stropahl, Maren; Bauer, Anna-Katharina R; Debener, Stefan; Bleichner, Martin G
2018-01-01
Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is further a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach to component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing at the single-subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group-level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.
Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; ...
2016-01-01
This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking into account wake effects. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations and the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated based on turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that the optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.
Sun, Chao; Feng, Wenquan; Du, Songlin
2018-01-01
As multipath is one of the dominating error sources for high accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier (BOC) modulation, as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals would not be optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability for different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are the high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
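The segment-and-average idea in the frequency domain can be illustrated with a generic cross-spectral (Welch-style) channel estimate; this is only a sketch of the general technique, not the authors' exact algorithm, and the toy BOC stand-in, function name, and parameters below are illustrative assumptions:

```python
import numpy as np

def averaged_transfer_function(received, replica, n_seg=8):
    """Estimate a channel transfer function H(f) ~ sum R(f) X*(f) / sum |X(f)|^2
    by splitting the received signal and the local replica into segments,
    taking the FFT of each, and averaging across segments, which reduces
    both the noise effect and the per-FFT computational load."""
    seg_len = len(received) // n_seg
    num = np.zeros(seg_len, dtype=complex)
    den = np.zeros(seg_len)
    for k in range(n_seg):
        r = np.fft.fft(received[k * seg_len:(k + 1) * seg_len])
        x = np.fft.fft(replica[k * seg_len:(k + 1) * seg_len])
        num += r * np.conj(x)
        den += np.abs(x) ** 2
    return num / (den + 1e-12)

# Toy usage: a two-path channel (direct path plus a delayed, attenuated echo).
rng = np.random.default_rng(1)
code = np.sign(rng.standard_normal(4096))      # stand-in for a spreading code
rx = code + 0.4 * np.roll(code, 5) + 0.1 * rng.standard_normal(4096)
H = averaged_transfer_function(rx, code)
h = np.fft.ifft(H).real                        # impulse response: taps near lags 0 and 5
```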
Towards a better understanding of helicopter external noise
NASA Astrophysics Data System (ADS)
Damongeot, A.; Dambra, F.; Masure, B.
The problem of helicopter external noise generation is studied taking into consideration simultaneously the multiple noise sources: rotor rotational noise, rotor broadband noise, and engine noise. The main data are obtained during flight tests of the rather quiet AS 332 Super Puma. The flight procedures established by ICAO for noise regulations are used: horizontal flyover at 90 percent of the maximum speed, approach at minimum-power velocity, take-off at best rate of climb. Noise source levels are assessed through narrow-band analysis of ground microphone recordings, ground measurements of engine noise, and theoretical means. With the perceived noise level unit used throughout the study, the relative magnitude of the noise sources is shown to be different from that obtained with a linear noise unit. A parametric study of the influence of some helicopter parameters on external noise has shown that thickness-tapered, chord-tapered, and swept-back blade tips are good means to reduce the overall noise level in flyover and approach.
Robust interval-based regulation for anaerobic digestion processes.
Alcaraz-González, V; Harmand, J; Rapaport, A; Steyer, J P; González-Alvarez, V; Pelayo-Ortiz, C
2005-01-01
A robust regulation law is applied to the stabilization of a class of biochemical reactors exhibiting partially known, highly nonlinear dynamic behavior. An uncertain environment with the presence of unknown inputs is considered. Based on some structural and operational conditions, this regulation law is shown to exponentially stabilize the aforementioned bioreactors around a desired set-point. This approach is experimentally applied and validated on a pilot-scale (1 m3) anaerobic digestion process for the treatment of raw industrial wine distillery wastewater, where the objective is the regulation of the chemical oxygen demand (COD) by using the dilution rate as the manipulated variable. Despite large disturbances on the input COD and state and parametric uncertainties, this regulation law gave excellent performance, leading the output COD towards its set-point and keeping it inside a pre-specified interval.
Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Guinard, S.; Landrieu, L.
2017-05-01
We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field (CRF) classifier in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly-supervised classifier to produce a higher-confidence data term. We demonstrate the improvement provided by our method on two publicly available large-scale data sets.
2016-05-31
Final Report: Technical Topic 3.2.2.d, Bayesian and Non-parametric Statistics: Integration of Neural … The study included explosives such as TATP, HMTD, RDX, ammonium nitrate, potassium perchlorate, potassium nitrate, sugar, and TNT. The approach …
Seo, Seongho; Kim, Su Jin; Lee, Dong Soo; Lee, Jae Sung
2014-10-01
Tracer kinetic modeling in dynamic positron emission tomography (PET) has been widely used to investigate the characteristic distribution patterns or dysfunctions of neuroreceptors in brain diseases. Its practical goal has progressed from regional data quantification to parametric mapping that produces images of kinetic-model parameters by fully exploiting the spatiotemporal information in dynamic PET data. Graphical analysis (GA) is a major parametric mapping technique that is independent of any compartmental model configuration, robust to noise, and computationally efficient. In this paper, we provide an overview of recent advances in the parametric mapping of neuroreceptor binding based on GA methods. The associated basic concepts in tracer kinetic modeling are presented, including commonly used compartment models and major parameters of interest. Technical details of GA approaches for reversible and irreversible radioligands are described, considering both plasma input and reference tissue input models. Their statistical properties are discussed in view of parametric imaging.
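As one concrete example of a GA method for reversible radioligands with plasma input, a hedged sketch of the Logan plot is given below; the t* cutoff, input curves, and helper name are illustrative assumptions rather than details taken from the review:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def logan_vt(t, c_tissue, c_plasma, t_star=20.0):
    """Logan graphical analysis for a reversible tracer: after time t*,
    int_0^t C_T dt' / C_T(t) is approximately linear in int_0^t C_p dt' / C_T(t),
    with slope equal to the total distribution volume V_T."""
    int_ct = cumulative_trapezoid(c_tissue, t, initial=0.0)
    int_cp = cumulative_trapezoid(c_plasma, t, initial=0.0)
    y = int_ct / c_tissue
    x = int_cp / c_tissue
    mask = t >= t_star                       # use only the late, linear portion
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope                             # V_T estimate

# In parametric mapping, this fit is repeated for every voxel time-activity
# curve to form a V_T image.
```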
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
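A minimal sketch of the two-loop structure described above, using a generic scalar growth model rather than the authors' piping plover model; every parameter value and name is hypothetical:

```python
import numpy as np

def pva(n_reps=1000, n_years=50, n0=100, lam_hat=1.02, lam_se=0.03,
        sigma_env=0.10, quasi_ext=10, seed=0):
    """Two-loop PVA: parametric uncertainty enters the outer replication loop
    (one growth-rate draw per replicate), while temporal (environmental)
    variance enters the inner loop over time steps (one draw per year)."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_reps):                                  # replication loop
        lam_rep = rng.normal(lam_hat, lam_se)                # parametric uncertainty
        n = float(n0)
        for _ in range(n_years):                             # time-step loop
            lam_t = lam_rep * np.exp(rng.normal(0.0, sigma_env))  # environmental noise
            n *= max(lam_t, 0.0)
        extinct += n < quasi_ext
    return extinct / n_reps                                  # quasi-extinction probability

# Setting lam_se = 0 recovers the conventional analysis that ignores
# parametric uncertainty; comparing the two shows how risk estimates change.
```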
On parametrized cold dense matter equation-of-state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-07-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
Petri Nets with Fuzzy Logic (PNFL): Reverse Engineering and Parametrization
Küffner, Robert; Petri, Tobias; Windhager, Lukas; Zimmer, Ralf
2010-01-01
Background The recent DREAM4 blind assessment provided a particularly realistic and challenging setting for network reverse engineering methods. The in silico part of DREAM4 solicited the inference of cycle-rich gene regulatory networks from heterogeneous, noisy expression data including time courses as well as knockout, knockdown and multifactorial perturbations. Methodology and Principal Findings We inferred and parametrized simulation models based on Petri Nets with Fuzzy Logic (PNFL). This completely automated approach correctly reconstructed networks with cycles as well as oscillating network motifs. PNFL was evaluated as the best performer on DREAM4 in silico networks of size 10 with an area under the precision-recall curve (AUPR) of 81%. Besides topology, we inferred a range of additional mechanistic details with good reliability, e.g. distinguishing activation from inhibition as well as dependent from independent regulation. Our models also performed well on new experimental conditions such as double knockout mutations that were not included in the provided datasets. Conclusions The inference of biological networks substantially benefits from methods that are expressive enough to deal with diverse datasets in a unified way. At the same time, overly complex approaches could generate multiple different models that explain the data equally well. PNFL appears to strike the balance between expressive power and complexity. This also applies to the intuitive representation of PNFL models combining a straightforward graphical notation with colloquial fuzzy parameters. PMID:20862218
Body Bias usage in UTBB FDSOI designs: A parametric exploration approach
NASA Astrophysics Data System (ADS)
Puschini, Diego; Rodas, Jorge; Beigne, Edith; Altieri, Mauricio; Lesecq, Suzanne
2016-03-01
Some years ago, UTBB FDSOI appeared on the horizon of low-power circuit designers. With the 14 nm and 10 nm nodes in the road map, the industrialized 28 nm platform promises highly efficient designs with an Ultra-Wide Voltage Range (UWVR) thanks to extended Body Bias properties. From the power management perspective, this new opportunity is considered a new degree of freedom in addition to classic Dynamic Voltage Scaling (DVS), increasing the complexity of the power optimization problem at design time. However, so far no formal or empirical tool allows early evaluation of the real need for a Dynamic Body Bias (DBB) mechanism on future designs. This paper presents a parametric exploration approach that analyzes the benefits of using Body Bias in 28 nm UTBB FDSOI circuits. The exploration is based on electrical simulations of a ring-oscillator structure. These experiments show that a Body Bias strategy is not always required, but they underline the large power reduction that can be achieved when it is. Results are summarized to help designers choose the best dynamic power management strategy for a given set of operating conditions in terms of temperature, circuit activity and process choice. This exploration contributes to the identification of conditions that make DBB more efficient than DVS, and vice versa, and of those where both methods are mandatory to optimize power consumption.
NASA Astrophysics Data System (ADS)
Pollard, David; DeConto, Robert; Gomez, Natalya
2016-04-01
To date, most modeling of the Antarctic Ice Sheet's response to future warming has been calibrated using recent and modern observations. As an alternate approach, we apply a hybrid 3-D ice sheet-shelf model to the last deglacial retreat of Antarctica, making use of geologic data of the last ~20,000 years to test the model against the large-scale variations during this period. The ice model is coupled to a global Earth-sea level model to improve modeling of the bedrock response and to capture ocean-ice gravitational interactions. Following several recent ice-sheet studies, we use Large Ensemble (LE) statistical methods, performing sets of 625 runs from 30,000 years ago to present with systematically varying model parameters. Objective scores for each run are calculated using modern data and past reconstructed grounding lines, relative sea level records, cosmogenic elevation-age data and uplift rates. The LE results are analyzed to calibrate 4 particularly uncertain model parameters that concern marginal ice processes and interaction with the ocean. LEs are extended into the future with climates following RCP scenarios. An additional scoring criterion tests the model's ability to reproduce estimated sea-level high stands in the warm mid-Pliocene, for which drastic retreat mechanisms of hydrofracturing and ice-cliff failure are needed in the model. The LE analysis provides future sea-level-rise envelopes with well-defined parametric uncertainty bounds. Sensitivities of future LE results to Pliocene sea-level estimates, coupling to the Earth-sea level model, and vertical profiles of Earth properties will be presented.
Global geometric torsion estimation in adolescent idiopathic scoliosis.
Kadoury, Samuel; Shen, Jesse; Parent, Stefan
2014-04-01
Several attempts have been made to measure geometrical torsion in adolescent idiopathic scoliosis (AIS) and quantify the three-dimensional (3D) deformation of the spine. However, these approaches are sensitive to imprecisions in the 3D modeling of the anatomy and can only capture the effect locally at the vertebrae, ignoring the global effect at the regional level and thus have never been widely used to follow the progression of a deformity. The goal of this work was to evaluate the relevance of a novel geometric torsion descriptor based on a parametric modeling of the spinal curve as a 3D index of scoliosis. First, an image-based approach anchored on prior statistical distributions is used to reconstruct the spine in 3D from biplanar X-rays. Geometric torsion measuring the twisting effect of the spine is then estimated using a technique that approximates local arc-lengths with parametric curve fitting centered at the neutral vertebra in different spinal regions. We first evaluated the method with simulated experiments, demonstrating the method's robustness toward added noise and reconstruction inaccuracies. A pilot study involving 65 scoliotic patients exhibiting different types of deformities was also conducted. Results show the method is able to discriminate between different types of deformation based on this novel 3D index evaluated in the main thoracic and thoracolumbar/lumbar regions. This demonstrates that geometric torsion modeled by parametric spinal curve fitting is a robust tool that can be used to quantify the 3D deformation of AIS and possibly exploited as an index to classify the 3D shape.
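For readers unfamiliar with geometric torsion, the standard formula for a parametric space curve, tau = ((r' x r'') . r''') / |r' x r''|^2, can be evaluated numerically as in the sketch below; the finite-difference scheme and helix test case are illustrative assumptions, not the paper's arc-length approximation or spinal-curve fitting procedure:

```python
import numpy as np

def torsion(t, r):
    """Geometric torsion of a 3D parametric curve r(t) (array of shape (n, 3)):
    tau = ((r' x r'') . r''') / |r' x r''|^2, with derivatives taken
    numerically along the parameter by central differences."""
    d1 = np.gradient(r, t, axis=0)
    d2 = np.gradient(d1, t, axis=0)
    d3 = np.gradient(d2, t, axis=0)
    cross = np.cross(d1, d2)
    return np.einsum("ij,ij->i", cross, d3) / (np.einsum("ij,ij->i", cross, cross) + 1e-12)

# Sanity check on a circular helix of radius a and pitch parameter b:
# the analytic torsion is b / (a**2 + b**2).
t = np.linspace(0, 4 * np.pi, 800)
a, b = 1.0, 0.3
helix = np.column_stack([a * np.cos(t), a * np.sin(t), b * t])
print(torsion(t, helix)[400], b / (a**2 + b**2))   # interior values agree closely
```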
Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G
2016-04-01
This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline based on a set of sampled basis vectors obtained from PCA applied over a previously composed continuous-spectra learning matrix. The parametric method, however, uses an ANN to filter out the baseline. Previous studies have demonstrated that this method is one of the most effective for baseline removal. The evaluation of both methods was carried out by using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratio (SBR), signal-to-noise ratio (SNR), and baseline slopes. To demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was also used. Several performance metrics such as correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the one based on ANN both in terms of performance and simplicity. © The Author(s) 2016.
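A hedged sketch of the PCA-based baseline idea follows, assuming a training matrix of baseline-only spectra is available; the use of scikit-learn and the function name are illustrative choices, not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import PCA

def remove_baseline(spectra, baseline_training, n_components=5):
    """Non-parametric baseline removal: learn a small PCA basis from
    baseline-only training spectra, estimate each measured spectrum's
    baseline as its projection onto that basis, and subtract it.
    (Peaks that overlap the baseline components leak slightly into the
    estimate; keeping the number of components small limits this effect.)"""
    pca = PCA(n_components=n_components).fit(baseline_training)
    coeffs = pca.transform(spectra)            # projection onto the sampled basis vectors
    baseline = pca.inverse_transform(coeffs)   # reconstructed continuous baseline
    return spectra - baseline, baseline
```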
Discriminative Bayesian Dictionary Learning for Classification.
Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal
2016-12-01
We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and for object and scene-category classification, using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
Propulsion integration of hypersonic air-breathing vehicles utilizing a top-down design methodology
NASA Astrophysics Data System (ADS)
Kirkpatrick, Brad Kenneth
In recent years, a focus of aerospace engineering design has been the development of advanced design methodologies and frameworks to account for increasingly complex and integrated vehicles. Techniques such as parametric modeling, global vehicle analyses, and interdisciplinary data sharing have been employed in an attempt to improve the design process. The purpose of this study is to introduce a new approach to integrated vehicle design known as the top-down design methodology. In the top-down design methodology, the main idea is to relate design changes on the vehicle system and sub-system level to a set of over-arching performance and customer requirements. Rather than focusing on the performance of an individual system, the system is analyzed in terms of the net effect it has on the overall vehicle and other vehicle systems. This detailed level of analysis can only be accomplished through the use of high fidelity computational tools such as Computational Fluid Dynamics (CFD) or Finite Element Analysis (FEA). The utility of the top-down design methodology is investigated through its application to the conceptual and preliminary design of a long-range hypersonic air-breathing vehicle for a hypothetical next generation hypersonic vehicle (NHRV) program. System-level design is demonstrated through the development of the nozzle section of the propulsion system. From this demonstration of the methodology, conclusions are made about the benefits, drawbacks, and cost of using the methodology.
The use of algorithmic behavioural transfer functions in parametric EO system performance models
NASA Astrophysics Data System (ADS)
Hickman, Duncan L.; Smith, Moira I.
2015-10-01
The use of mathematical models to predict the overall performance of an electro-optic (EO) system is a well-established methodology and is used widely to support requirements definition and system design and to produce performance predictions. Traditionally these models have been based upon cascades of transfer functions derived from established physical theory, such as the calculation of signal levels from radiometry equations, as well as the use of statistical models. However, the performance of an EO system is increasingly dominated by the on-board processing of the image data, and this automated interpretation of image content is complex in nature and presents significant modelling challenges. Models and simulations of EO systems tend either to involve processing of image data as part of a performance simulation (image-flow) or else a series of mathematical functions that attempt to define the overall system characteristics (parametric). The former approach is generally more accurate but statistically and theoretically weak in terms of specific operational scenarios, and is also time consuming. The latter approach is generally faster but is unable to provide accurate predictions of a system's performance under operational conditions. An alternative and novel architecture is presented in this paper which combines the processing-speed attributes of parametric models with the accuracy of image-flow representations in a statistically valid framework. An additional dimension needed to create an effective simulation is a robust software design whose architecture reflects the structure of the EO system and its interfaces. As such, the design of the simulator can be viewed as a software prototype of a new EO system or an abstraction of an existing design. This new approach has been used successfully to model a number of complex military systems and has been shown to combine improved performance estimation with speed of computation. Details of the approach and architecture are described, and example results based on a practical application are then given which illustrate the performance benefits. Finally, conclusions are drawn and comments given regarding the benefits and uses of the new approach.
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar estimates for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
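The comparison of parametric and non-parametric density estimates can be sketched as below for the Inverse Gamma case, using synthetic areas; the tool itself is written in R, so this Python snippet is only an illustrative analogue and all values are hypothetical:

```python
import numpy as np
from scipy import stats

# Synthetic landslide areas (m^2) drawn from a heavy-tailed distribution.
areas = stats.invgamma.rvs(a=1.4, scale=1e3, size=500, random_state=0)

# Parametric MLE fit of the Inverse Gamma probability density of landslide area.
a_hat, loc_hat, scale_hat = stats.invgamma.fit(areas, floc=0)

# Non-parametric estimates: histogram density (HDE) and a Gaussian kernel density
# of log-area (better behaved for heavy-tailed data), back-transformed to area units.
hist_dens, edges = np.histogram(areas, bins="auto", density=True)
kde = stats.gaussian_kde(np.log(areas))

x = np.logspace(np.log10(areas.min()), np.log10(areas.max()), 200)
pdf_parametric = stats.invgamma.pdf(x, a_hat, loc_hat, scale_hat)
pdf_kde = kde(np.log(x)) / x        # density of Y = log X transformed back to X
```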
Decomposing cross-country differences in quality adjusted life expectancy: the impact of value sets.
Heijink, Richard; van Baal, Pieter; Oppe, Mark; Koolman, Xander; Westert, Gert
2011-06-23
The validity, reliability and cross-country comparability of summary measures of population health (SMPH) have been persistently debated. In this debate, the measurement and valuation of nonfatal health outcomes have been defined as key issues. Our goal was to quantify and decompose international differences in health expectancy based on health-related quality of life (HRQoL). We focused on the impact of value set choice on cross-country variation. We calculated Quality Adjusted Life Expectancy (QALE) at age 20 for 15 countries in which EQ-5D population surveys had been conducted. We applied the Sullivan approach to combine the EQ-5D based HRQoL data with life tables from the Human Mortality Database. Mean HRQoL by country-gender-age was estimated using a parametric model. We used nonparametric bootstrap techniques to compute confidence intervals. QALE was then compared across the six country-specific time trade-off value sets that were available. Finally, three counterfactual estimates were generated in order to assess the contribution of mortality, health states and health-state values to cross-country differences in QALE. QALE at age 20 ranged from 33 years in Armenia to almost 61 years in Japan, using the UK value set. The value sets of the other five countries generated different estimates, up to seven years higher. The relative impact of choosing a different value set differed across country-gender strata between 2% and 20%. In 50% of the country-gender strata the ranking changed by two or more positions across value sets. The decomposition demonstrated a varying impact of health states, health-state values, and mortality on QALE differences across countries. The choice of the value set in SMPH may seriously affect cross-country comparisons of health expectancy, even across populations of similar levels of wealth and education. In our opinion, it is essential to get more insight into the drivers of differences in health-state values across populations. This will enhance the usefulness of health-expectancy measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Centini, M.; Sciscione, L.; Sibilia, C.
A description of spontaneous parametric down-conversion in finite-length one-dimensional nonlinear photonic crystals is developed using semiclassical and quantum approaches. It is shown that if a suitable averaging is added to the semiclassical model, its results are in very good agreement with the quantum approach. We propose two structures made with GaN/AlN that generate both degenerate and nondegenerate entangled photon pairs. Both structures are designed so as to achieve a high efficiency of the nonlinear process.
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
Mayer, Rulon; Simone, Charles B; Skinner, William; Turkbey, Baris; Choykey, Peter
2018-03-01
Gleason Score (GS) is a validated predictor of prostate cancer (PCa) disease progression and outcomes. GS from invasive needle biopsies suffers from significant inter-observer variability and possible sampling error, leading to underestimating disease severity ("underscoring") and possible complications. A robust non-invasive image-based approach is, therefore, needed. The objective is to use spatially registered multi-parametric MRI (MP-MRI), signatures, and supervised target detection algorithms (STDA) to non-invasively determine the GS of PCa at the voxel level. This study retrospectively analyzed 26 MP-MRI from The Cancer Imaging Archive. The MP-MRI (T2, Diffusion Weighted, Dynamic Contrast Enhanced) were spatially registered to each other, combined into stacks, and stitched together to form hypercubes. Multi-parametric (or multi-spectral) signatures derived from a training set of registered MP-MRI were transformed using statistics-based Whitening-Dewhitening (WD). The transformed signatures, inserted into STDA (having conical decision surfaces) and applied to the registered MP-MRI, determined the tumor GS. The MRI-derived GS was quantitatively compared to the pathologist's assessment of the histology of sectioned whole-mount prostates from patients who underwent radical prostatectomy. In addition, a meta-analysis of 17 studies of needle-biopsy-determined GS with confusion matrices was compared to the MRI-determined GS. STDA- and histology-determined GS are highly correlated (R = 0.86, p < 0.02). STDA more accurately determined GS and reduced GS underscoring of PCa relative to needle biopsy as summarized by the meta-analysis (p < 0.05). This pilot study found that registered MP-MRI, STDA, and WD transforms of signatures show promise in non-invasively determining the GS of PCa and reducing underscoring with high spatial resolution. Copyright © 2018 Elsevier Ltd. All rights reserved.
Riccati Parametric Deformations of the Cornu Spiral
NASA Astrophysics Data System (ADS)
Rosu, Haret C.; Mancas, Stefan C.; Flores-Garduño, Elizabeth
2018-06-01
In this article, a parametric deformation of the Cornu spiral is introduced. The parameter is an integration constant which appears in the general solution of the Riccati equation and is related to the Fresnel integrals. The Argand plots of the deformed spirals are presented and a supersymmetric (Darboux) structure of the deformation is revealed through the factorization approach.
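For reference, the undeformed Cornu spiral is simply the Argand plot of the Fresnel integrals; a brief sketch follows (the deformed, Riccati-parametrized spirals of the article are not reproduced here):

```python
import numpy as np
from scipy.special import fresnel

# The classic Cornu (Euler) spiral: an Argand plot of z(t) = C(t) + i S(t),
# where C and S are the Fresnel integrals.  Note scipy returns (S, C).
t = np.linspace(-10, 10, 4000)
S, C = fresnel(t)
z = C + 1j * S

# z winds toward the limit points (+-1/2, +-1/2) as t -> +-inf; plotting
# z.real against z.imag reproduces the familiar double spiral.
```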
Simplified estimation of age-specific reference intervals for skewed data.
Wright, E M; Royston, P
1997-12-30
Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, the entire density is estimated, and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
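A simplified sketch of the regression-based idea, assuming a log transformation and omitting the skewness component of the Royston-Wright family; the variable names and the SD-from-absolute-residuals device are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def age_specific_limits(age, y, z=1.96):
    """Simplified age-specific reference limits: log-transform the measurement,
    model the mean and the SD as linear functions of age by two regressions
    (the SD via scaled absolute residuals), then back-transform the centiles."""
    ly = np.log(y)
    X = np.column_stack([np.ones_like(age), age])
    beta_mean, *_ = np.linalg.lstsq(X, ly, rcond=None)
    resid = ly - X @ beta_mean
    # For Gaussian residuals E|resid| = sigma * sqrt(2/pi), hence the scaling below.
    beta_sd, *_ = np.linalg.lstsq(X, np.abs(resid) * np.sqrt(np.pi / 2), rcond=None)
    mean, sd = X @ beta_mean, X @ beta_sd
    return np.exp(mean - z * sd), np.exp(mean + z * sd)   # smooth lower/upper centile curves
```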
NASA Astrophysics Data System (ADS)
Dai, Xiaoqian; Tian, Jie; Chen, Zhe
2010-03-01
Parametric images can represent both spatial distribution and quantification of the biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs) that propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, firstly dynamic PET data are properly pre-transformed to standardize noise variance, as PCA is a data-driven technique and cannot itself separate signals from noise. Secondly, the volume-wise PCA is applied to the PET data. The signals can be mostly represented by the first few principal components (PCs) and the noise is left in the subsequent PCs. Then the noise-reduced data are obtained using the first few PCs by applying 'inverse PCA'. They should also be transformed back according to the pre-transformation method used in the first step to maintain the scale of the original data set. Finally, the obtained new data set is used to generate parametric images using the linear least squares (LLS) estimation method. Compared with other noise-removal methods, the proposed method can achieve high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG PET study.
Dai, James Y.; Hughes, James P.
2012-01-01
The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru
2010-12-01
The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As a part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained, such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, an optimal amount of both the spatial smoothing and the HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.
Parametric boundary reconstruction algorithm for industrial CT metrology application.
Yin, Zhye; Khare, Kedar; De Man, Bruno
2009-01-01
High-energy X-ray computed tomography (CT) systems have been recently used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquire the metrology information directly. On the other hand, CT systems generate the sinogram which is transformed mathematically to the pixel-based images. The dimensional information of the scanned object is extracted later by performing edge detection on reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation of CT images since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process and resulting object boundaries from the edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits in the proposed approach. First, since the boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced. The iterative approach, which can be computationally intensive, will be practical with the parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters instead of the image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and process time can be improved. Third, since the parametric reconstruction approach shares the boundary representation with other conventional metrology modalities such as CMM, boundary information from other modalities can be directly incorporated as prior knowledge to improve the convergence of an iterative approach. In this paper, the feasibility of parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to the experimental industrial CT system data.
Ince-Strutt stability charts for ship parametric roll resonance in irregular waves
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Yang, He-zhen; Xiao, Fei; Xu, Pei-ji
2017-08-01
An Ince-Strutt stability chart for ship parametric roll resonance in irregular waves is constructed and utilized to explore parametric roll resonance in irregular waves. Ship parametric roll resonance leads to large-amplitude roll motion and can even cause capsizing. Firstly, the equation describing parametric roll resonance in irregular waves is derived according to Grim's effective wave theory, and the corresponding Ince-Strutt stability charts are obtained. Secondly, the stability charts for parametric roll resonance in irregular and regular waves are compared. Thirdly, wave phases and peak periods are taken into consideration to obtain a more realistic sea condition. The influence of random wave phases should be taken into consideration when the analyzed points are located near the instability boundary. The stability charts differ for different wave peak periods. Stability charts are helpful for parameter determination at the design stage to better adapt to sailing conditions. Last, ship variables are analyzed according to the stability charts by a statistical approach. Increasing the metacentric height will help improve ship stability.
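The classical Ince-Strutt chart can be reproduced numerically by Floquet analysis of the undamped Mathieu equation, as sketched below; this generic sketch ignores damping, Grim's effective wave, and the irregular-wave extension that are the subject of the paper, and the grid resolution is an arbitrary choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

def is_stable(delta, eps, period=2 * np.pi):
    """Floquet stability of the Mathieu equation x'' + (delta + eps*cos t) x = 0:
    integrate over one forcing period from two independent initial conditions,
    assemble the monodromy matrix M, and test |trace(M)| < 2."""
    def rhs(t, y):
        return [y[1], -(delta + eps * np.cos(t)) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, period), y0, rtol=1e-9, atol=1e-12)
        cols.append(sol.y[:, -1])
    return abs(np.trace(np.column_stack(cols))) < 2.0

# Sweeping (delta, eps) on a grid and shading the unstable points reproduces the
# familiar Ince-Strutt instability tongues emanating from delta = (n/2)**2.
stable = np.array([[is_stable(d, e) for d in np.linspace(0.0, 2.0, 40)]
                   for e in np.linspace(0.0, 2.0, 40)])
```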
NASA Astrophysics Data System (ADS)
Taxak, A. K.; Ojha, C. S. P.
2017-12-01
Land use and land cover (LULC) changes within a watershed are recognised as an important factor affecting hydrological processes and water resources. LULC changes continuously, not only over the long term but also at inter-annual and seasonal scales. Changes in LULC affect interception, storage and soil moisture. A widely used approach in rainfall-runoff modelling with land surface models (LSMs) and hydrological models is to keep the LULC fixed throughout the model run period. In long-term simulations where land use change takes place during the run period, using a single LULC map does not represent the true ground conditions and can result in stationarity of model responses. The present work presents a case study in which changes in LULC are incorporated by using multiple LULC layers. LULC maps for the study period were created using imagery from the Landsat series, Sentinel and EO-1 ALI. The distributed, physically based Variable Infiltration Capacity (VIC) model was modified to allow inclusion of LULC as a time-varying input, just like climate. The Narayani basin was simulated with LULC, leaf area index (LAI), albedo and climate data for 1992-2015. The results showed that the model simulation with the time-varying parametrization approach yields a large improvement over the conventional fixed-parametrization approach in terms of long-term water balance. The proposed modelling approach could improve hydrological modelling for applications such as land cover change studies and water budget studies.
Unemployment and subsequent depression: A mediation analysis using the parametric G-formula.
Bijlsma, Maarten J; Tarkiainen, Lasse; Myrskylä, Mikko; Martikainen, Pekka
2017-12-01
The effects of unemployment on depression are difficult to establish because of confounding and limited understanding of the mechanisms at the population level. In particular, due to longitudinal interdependencies between exposures, mediators and outcomes, intermediate confounding is an obstacle for mediation analyses. Using longitudinal Finnish register data on socio-economic characteristics and medication purchases, we extracted individuals who entered the labor market between ages 16 and 25 in the period 1996 to 2001 and followed them until the year 2007 (n = 42,172). With the parametric G-formula we estimated the population-averaged effect on first antidepressant purchase of a simulated intervention which set all unemployed person-years to employed. In the data, 74% of person-years were employed and 8% unemployed, the rest belonging to studying or other status. In the intervention scenario, employment rose to 85% and the hazard of first antidepressant purchase decreased by 7.6%. Of this reduction 61% was mediated, operating primarily through changes in income and household status, while mediation through other health conditions was negligible. These effects were negligible for women and particularly prominent among less educated men. By taking complex interdependencies into account in a framework of observed repeated measures data, we found that eradicating unemployment raises income levels, promotes family formation, and thereby reduces antidepressant consumption at the population-level. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Parametric modeling studies of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng
2013-11-01
The Sydney piloted jet flame series (Flames L, B, and M) features thinner reaction zones and hence imposes greater challenges to modeling than the Sandia piloted jet flames (Flames D, E, and F). Recently, the Sydney flames have received renewed interest due to these challenges. Several new modeling efforts have emerged. However, no systematic parametric modeling studies have been reported for the Sydney flames. A large set of modeling computations of the Sydney flames is presented here using the coupled large eddy simulation (LES)/probability density function (PDF) method. Parametric studies are performed to gain insight into the model performance, its sensitivity, and the effect of numerics.
NASA Technical Reports Server (NTRS)
Stanley, Douglas O.; Unal, Resit; Joyner, C. R.
1992-01-01
The application of advanced technologies to future launch vehicle designs would allow the introduction of a rocket-powered, single-stage-to-orbit (SSTO) launch system early in the next century. For a selected SSTO concept, a dual mixture ratio, staged combustion cycle engine that employs a number of innovative technologies was selected as the baseline propulsion system. A series of parametric trade studies are presented to optimize both a dual mixture ratio engine and a single mixture ratio engine of similar design and technology level. The effect of varying lift-off thrust-to-weight ratio, engine mode transition Mach number, mixture ratios, area ratios, and chamber pressure values on overall vehicle weight is examined. The sensitivity of the advanced SSTO vehicle to variations in each of these parameters is presented, taking into account the interaction of each of the parameters with each other. This parametric optimization and sensitivity study employs a Taguchi design method. The Taguchi method is an efficient approach for determining near-optimum design parameters using orthogonal matrices from design of experiments (DOE) theory. Using orthogonal matrices significantly reduces the number of experimental configurations to be studied. The effectiveness and limitations of the Taguchi method for propulsion/vehicle optimization studies as compared to traditional single-variable parametric trade studies is also discussed.
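As a hedged illustration of the orthogonal-matrix idea (using a small L4(2^3) array rather than the larger array such a vehicle study would require), the main effect of each two-level factor can be read off from only four runs; the factor assignments and response values below are hypothetical:

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs cover the main effects of three 2-level factors
# (e.g. mixture ratio, area ratio, chamber pressure) instead of 2**3 = 8 full-factorial runs.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def main_effects(response):
    """Average response at each factor level; the level-2 minus level-1 difference
    is the estimated main effect of that factor."""
    effects = []
    for col in range(L4.shape[1]):
        lo = response[L4[:, col] == 0].mean()
        hi = response[L4[:, col] == 1].mean()
        effects.append(hi - lo)
    return np.array(effects)

# Hypothetical vehicle dry-mass responses (t) for the four orthogonal-array runs:
dry_mass = np.array([92.0, 95.5, 90.8, 97.1])
print(main_effects(dry_mass))   # choose each factor's level to minimize dry mass
```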
Yadage and Packtivity - analysis preservation using parametrized workflows
NASA Astrophysics Data System (ADS)
Cranmer, Kyle; Heinrich, Lukas
2017-10-01
Preserving data analyses produced by the collaborations at LHC in a parametrized fashion is crucial in order to maintain reproducibility and re-usability. We argue for a declarative description in terms of individual processing steps - “packtivities” - linked through a dynamic directed acyclic graph (DAG) and present an initial set of JSON schemas for such a description and an implementation - “yadage” - capable of executing workflows of analysis preserved via Linux containers.
SEC sensor parametric test and evaluation system
NASA Technical Reports Server (NTRS)
1978-01-01
This system provides the necessary automated hardware required to carry out, in conjunction with the existing 70 mm SEC television camera, the sensor evaluation tests which are described in detail. The Parametric Test Set (PTS) was completed and is used in a semiautomatic data acquisition and control mode to test the development of the 70 mm SEC sensor, WX 32193. Data analysis of raw data is performed on the Princeton IBM 360-91 computer.
20 mJ, 1 ps Yb:YAG Thin-disk Regenerative Amplifier
Alismail, Ayman; Wang, Haochuan; Brons, Jonathan; Fattahi, Hanieh
2017-01-01
This is a report on a 100 W, 20 mJ, 1 ps Yb:YAG thin-disk regenerative amplifier. A homemade Yb:YAG thin-disk, Kerr-lens mode-locked oscillator with turn-key performance and microjoule-level pulse energy is used to seed the regenerative chirped-pulse amplifier. The amplifier is placed in airtight housing. It operates at room temperature and exhibits stable operation at a 5 kHz repetition rate, with a pulse-to-pulse stability less than 1%. By employing a 1.5 mm-thick beta barium borate crystal, the frequency of the laser output is doubled to 515 nm, with an average power of 70 W, which corresponds to an optical-to-optical efficiency of 70%. This superior performance makes the system an attractive pump source for optical parametric chirped-pulse amplifiers in the near-infrared and mid-infrared spectral range. Combining the turn-key performance and the superior stability of the regenerative amplifier, the system facilitates the generation of a broadband, CEP-stable seed. Providing the seed and pump of the optical parametric chirped-pulse amplification (OPCPA) from one laser source eliminates the demand of active temporal synchronization between these pulses. This work presents a detailed guide to set up and operate a Yb:YAG thin-disk regenerative amplifier, based on chirped-pulse amplification (CPA), as a pump source for an optical parametric chirped-pulse amplifier. PMID:28745636
Summary of the Fourth AIAA CFD Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Rider, Ben; Zickuhr, Tom; Levy, David W.; Brodersen, Olaf P.; Eisfeld, Bernhard; Crippa, Simone; Wahls, Richard A.;
2010-01-01
Results from the Fourth AIAA Drag Prediction Workshop (DPW-IV) are summarized. The workshop focused on the prediction of both absolute and differential drag levels for wing-body and wing-body-horizontal-tail configurations that are representative of transonic transport aircraft. Numerical calculations are performed using industry-relevant test cases that include lift-specific flight conditions, trimmed drag polars, downwash variations, drag rises and Reynolds-number effects. Drag, lift and pitching moment predictions from numerous Reynolds-Averaged Navier-Stokes computational fluid dynamics methods are presented. Solutions are performed on structured, unstructured and hybrid grid systems. The structured-grid sets include point-matched multi-block meshes and overset grid systems. The unstructured and hybrid grid sets are comprised of tetrahedral, pyramid, prismatic, and hexahedral elements. Effort is made to provide a high-quality and parametrically consistent family of grids for each grid type about each configuration under study. The wing-body-horizontal families are comprised of a coarse, medium and fine grid; an optional extra-fine grid augments several of the grid families. These mesh sequences are utilized to determine asymptotic grid-convergence characteristics of the solution sets, and to estimate grid-converged absolute drag levels of the wing-body-horizontal configuration using Richardson extrapolation.
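The grid-convergence step mentioned at the end can be illustrated with a minimal Richardson-extrapolation sketch; the three drag values and the refinement ratio below are hypothetical, chosen only to show how the observed order and the grid-converged level are obtained from a coarse/medium/fine sequence.

    import math

    # Hypothetical drag-coefficient values (counts) on fine, medium and coarse grids
    # of a parametrically consistent family with constant refinement ratio r.
    f1, f2, f3 = 271.0, 274.0, 280.0   # fine, medium, coarse (hypothetical)
    r = 1.5                            # grid refinement ratio (hypothetical)

    # Observed order of convergence from the three solutions.
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)

    # Richardson extrapolation to the grid-converged (zero-spacing) value.
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)
    print(f"observed order p = {p:.2f}, extrapolated drag = {f_exact:.1f} counts")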
Gravity Field Characterization around Small Bodies
NASA Astrophysics Data System (ADS)
Takahashi, Yu
A small body rendezvous mission requires accurate gravity field characterization for safe, accurate navigation purposes. However, current techniques for gravity field modeling around small bodies have not reached a satisfactory level. This thesis addresses how the current process of gravity field characterization can be made more robust for future small body missions. First we perform the covariance analysis around small bodies via multiple slow flybys. Flyby characterization requires less laborious scheduling than its orbit counterpart, while simultaneously reducing the risk of impact onto the asteroid's surface. It will be shown that the level of initial characterization that can occur with this approach is no less than with the orbit approach. Next, we apply the same technique of gravity field characterization to estimate the spin state of 4179 Toutatis, which is a near-Earth asteroid in close to 4:1 resonance with the Earth. The data accumulated from 1992-2008 are processed in a least-squares filter to predict Toutatis' orientation during the 2012 apparition. The center-of-mass offset and the moments of inertia estimated in this process can be used to constrain the internal density distribution within the body. Then, the spin state estimation is extended into a generalized method for estimating the internal density distribution within a small body. The density distribution is estimated from the orbit determination solution of the gravitational coefficients. It will be shown that the surface gravity field reconstructed from the estimated density distribution yields higher accuracy than the conventional gravity field models. Finally, we investigate two types of relatively unknown gravity fields, namely the interior gravity field and the interior spherical Bessel gravity field, in order to determine how accurately the surface gravity field can be mapped out for proximity operations purposes. It will be shown that these formulations compute the surface gravity field with unprecedented accuracy for a well-chosen set of parametric settings, both regionally and globally.
Häme, Yrjö; Angelini, Elsa D.; Hoffman, Eric A.; Barr, R. Graham; Laine, Andrew F.
2014-01-01
The extent of pulmonary emphysema is commonly estimated from CT images by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions in the lung and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the present model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was used to quantify emphysema on a cohort of 87 subjects, with repeated CT scans acquired over a time period of 8 years using different imaging protocols. The scans were acquired approximately annually, and the data set included a total of 365 scans. The results show that the emphysema estimates produced by the proposed method have very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. In addition, the generated emphysema delineations promise great advantages for regional analysis of emphysema extent and progression, possibly advancing disease subtyping. PMID:24759984
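The standard index that the paper improves upon, the proportional area of voxels below an attenuation threshold, can be written in a few lines; the sketch below uses a commonly quoted -950 HU cut-off and synthetic data, and is not the hidden Markov measure field model of the paper.

    import numpy as np

    def emphysema_index(hu_volume, lung_mask, threshold_hu=-950.0):
        """Proportional volume of lung voxels below an attenuation threshold (%LAA).
        This is the standard index the paper compares against; the -950 HU cut-off
        is a common but arbitrary choice."""
        lung_voxels = hu_volume[lung_mask]
        return 100.0 * np.mean(lung_voxels < threshold_hu)

    # Toy example with synthetic data.
    rng = np.random.default_rng(0)
    volume = rng.normal(-870.0, 40.0, size=(50, 50, 50))   # hypothetical lung CT in HU
    mask = np.ones_like(volume, dtype=bool)
    print(f"%LAA-950 = {emphysema_index(volume, mask):.1f}%")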
Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.
Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M
2006-01-01
The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05 Å), with ligands chosen to be representative of all samarium complexes in the Cambridge Crystallographic Database 2004, CSD, with nitrogen or oxygen directly bonded to the samarium ion, was used as a training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). On the other hand, promethium is an artificial radioactive element with no stable isotope. So far, there are no promethium complex crystallographic structures in CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first sphere of coordination, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present day ab initio/ECP geometries, while being hundreds of times faster.
Bossard, N; Descotes, F; Bremond, A G; Bobin, Y; De Saint Hilaire, P; Golfier, F; Awada, A; Mathevet, P M; Berrerd, L; Barbier, Y; Estève, J
2003-11-01
The prognostic value of cathepsin D has been recently recognized, but, as with many quantitative tumor markers, its clinical use remains unclear, partly because of methodological issues in defining cut-off values. Guidelines have been proposed for analyzing quantitative prognostic factors, underlining the need for keeping data continuous, instead of categorizing them. Flexible approaches, parametric and non-parametric, have been proposed in order to improve the knowledge of the functional form relating a continuous factor to the risk. We studied the prognostic value of cathepsin D in a retrospective hospital cohort of 771 patients with breast cancer, and focused our overall survival analysis, based on the Cox regression, on two flexible approaches: smoothing splines and fractional polynomials. We also determined a cut-off value from the maximum likelihood estimate of a threshold model. These different approaches complemented each other for (1) identifying the functional form relating cathepsin D to the risk, and obtaining a cut-off value and (2) optimizing the adjustment for complex covariates such as age at diagnosis in the final multivariate Cox model. We found a significant increase in the death rate, reaching 70% with a doubling of the level of cathepsin D, after the threshold of 37.5 pmol mg(-1). The marker's own prognostic impact could thus be confirmed, and a methodology providing appropriate ways to use markers in clinical practice was proposed.
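A minimal sketch of the "keep the marker continuous" idea is shown below: the marker enters a Cox model through two power transforms (a crude fractional-polynomial-style specification) instead of being categorized. It uses the lifelines package on synthetic data; the column names, transforms, and sample size are illustrative, not those of the study.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical data frame: follow-up time (months), death indicator, cathepsin D
    # level (pmol/mg) and age at diagnosis. Column names are illustrative only.
    rng = np.random.default_rng(1)
    n = 200
    df = pd.DataFrame({
        "time": rng.exponential(60, n),
        "event": rng.integers(0, 2, n),
        "cathepsin_d": rng.lognormal(3.5, 0.5, n),
        "age": rng.normal(55, 10, n),
    })

    # A fractional-polynomial-style specification: keep the marker continuous and
    # let two power transforms capture possible curvature in the log hazard.
    df["cath_log"] = np.log(df["cathepsin_d"])
    df["cath_sqrt"] = np.sqrt(df["cathepsin_d"])

    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "cath_log", "cath_sqrt", "age"]],
            duration_col="time", event_col="event")
    cph.print_summary()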
Model independent constraints on transition redshift
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Holanda, R. F. L.; Pereira, S. H.
2018-05-01
This paper aims to put constraints on the transition redshift zt, which determines the onset of cosmic acceleration, in cosmological-model independent frameworks. In order to perform our analyses, we consider a flat universe and assume a parametrization for the comoving distance DC(z) up to third degree in z, a second degree parametrization for the Hubble parameter H(z) and a linear parametrization for the deceleration parameter q(z). For each case, we show that type Ia supernovae and H(z) data complement each other on the parameter space and tighter constraints for the transition redshift are obtained. By combining the type Ia supernovae observations and Hubble parameter measurements it is possible to constrain the values of zt, for each approach, as 0.806 ± 0.094, 0.870 ± 0.063 and 0.973 ± 0.058 at 1σ c.l., respectively. Thus, such approaches provide cosmological-model independent estimates for this parameter.
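For the linear parametrization of the deceleration parameter, the transition redshift follows directly from the sign change of q(z); the numbers in the snippet below are hypothetical and serve only to show the relation.

    # For the linear parametrization q(z) = q0 + q1*z, the transition redshift is
    # where the deceleration parameter changes sign: q(zt) = 0  =>  zt = -q0/q1.
    # The values of q0 and q1 below are hypothetical, for illustration only.
    q0, q1 = -0.60, 0.75
    zt = -q0 / q1
    print(f"q(z) = {q0} + {q1} z  =>  transition redshift zt = {zt:.2f}")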
Revisiting Parametric Types and Virtual Classes
NASA Astrophysics Data System (ADS)
Madsen, Anders Bach; Ernst, Erik
This paper presents a conceptually oriented updated view on the relationship between parametric types and virtual classes. The traditional view is that parametric types excel at structurally oriented composition and decomposition, and virtual classes excel at specifying mutually recursive families of classes whose relationships are preserved in derived families. Conversely, while class families can be specified using a large number of F-bounded type parameters, this approach is complex and fragile; and it is difficult to use traditional virtual classes to specify object composition in a structural manner, because virtual classes are closely tied to nominal typing. This paper adds new insight about the dichotomy between these two approaches; it illustrates how virtual constraints and type refinements, as recently introduced in gbeta and Scala, enable structural treatment of virtual types; finally, it shows how a novel kind of dynamic type check can detect compatibility among entire families of classes.
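As a loose illustration of the F-bounded pattern discussed above (transplanted into Python's gradual typing rather than gbeta or Scala), the sketch below bounds a type parameter by a type that mentions the parameter itself, so members of a derived class keep referring to each other at the derived type; the class names are invented.

    from __future__ import annotations
    from typing import TypeVar, Generic, List

    # F-bounded type parameter: T is bounded by a type that itself mentions T.
    T = TypeVar("T", bound="Node")

    class Node(Generic[T]):
        def __init__(self) -> None:
            self.neighbors: List[T] = []
        def connect(self: T, other: T) -> None:
            self.neighbors.append(other)

    class ColoredNode(Node["ColoredNode"]):
        def __init__(self, color: str) -> None:
            super().__init__()
            self.color = color

    a, b = ColoredNode("red"), ColoredNode("blue")
    a.connect(b)                    # neighbors of a ColoredNode stay ColoredNodes
    print(a.neighbors[0].color)     # a type checker sees .color, not just Node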
NASA Astrophysics Data System (ADS)
Yu, Miao; Huang, Deqing; Yang, Wanqiu
2018-06-01
In this paper, we address the problem of unknown periodicity for a class of discrete-time nonlinear parametric systems without assuming any growth conditions on the nonlinearities. The unknown periodicity is hidden in the parametric uncertainties, which makes it difficult to estimate with existing techniques. By incorporating a logic-based switching mechanism, we identify the period and the bound of the unknown parameter simultaneously. Lyapunov-based analysis is given to demonstrate that a finite number of switchings can guarantee asymptotic tracking for the nonlinear parametric systems. The simulation result also shows the efficacy of the proposed switching periodic adaptive control approach.
Lin, Sheng-Hsuan; Young, Jessica; Logan, Roger; Tchetgen Tchetgen, Eric J.; VanderWeele, Tyler J.
2016-01-01
The assessment of direct and indirect effects with time-varying mediators and confounders is a common but challenging problem, and standard mediation analysis approaches are generally not applicable in this context. The mediational g-formula was recently proposed to address this problem, paired with a semi-parametric estimation approach to evaluate longitudinal mediation effects empirically. In this paper, we develop a parametric estimation approach to the mediational g-formula, including a feasible algorithm implemented in a freely available SAS macro. In the Framingham Heart Study data, we apply this method to estimate the interventional analogues of natural direct and indirect effects of smoking behaviors sustained over a 10-year period on blood pressure when considering weight change as a time-varying mediator. Compared with not smoking, smoking 20 cigarettes per day for 10 years was estimated to increase blood pressure by 1.2 (95 % CI: −0.7, 2.7) mm-Hg. The direct effect was estimated to increase blood pressure by 1.5 (95 % CI: −0.3, 2.9) mm-Hg, and the indirect effect was −0.3 (95% CI: −0.5, −0.1) mm-Hg, which is negative because smoking which is associated with lower weight is associated in turn with lower blood pressure. These results provide evidence that weight change in fact partially conceals the detrimental effects of cigarette smoking on blood pressure. Our work represents, to our knowledge, the first application of the parametric mediational g-formula in an epidemiologic cohort study. PMID:27984420
NASA Astrophysics Data System (ADS)
Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly
2016-03-01
This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems, and it is often applied to cases where the signal location is known. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low-contrast objects: 3 mm (14 HU), 5 mm (7 HU), 7 mm (5 HU) and 10 mm (3 HU) was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for CHO (AUC) and EFROC (AFE) were plotted with respect to different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.
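A flavour of estimating a detectability figure of merit non-parametrically from confidence scores can be given with the Mann-Whitney form of the AUC; this is a generic illustration on synthetic scores, not the EFROC/AFE computation of the paper.

    import numpy as np
    from scipy import stats

    # Synthetic model-observer confidence scores for signal-present and signal-absent trials.
    rng = np.random.default_rng(2)
    scores_signal_present = rng.normal(1.0, 1.0, 200)
    scores_signal_absent  = rng.normal(0.0, 1.0, 200)

    # Mann-Whitney U statistic gives the non-parametric AUC estimate directly.
    u, _ = stats.mannwhitneyu(scores_signal_present, scores_signal_absent,
                              alternative="greater")
    auc = u / (len(scores_signal_present) * len(scores_signal_absent))
    print(f"non-parametric AUC = {auc:.3f}")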
Palmer, T. N.
2014-01-01
This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only. PMID:24842038
NASA Astrophysics Data System (ADS)
Ataei-Esfahani, Armin
In this dissertation, we present algorithmic procedures for sum-of-squares based stability analysis and control design for uncertain nonlinear systems. In particular, we consider the case of robust aircraft control design for a hypersonic aircraft model subject to parametric uncertainties in its aerodynamic coefficients. In recent years, the Sum-of-Squares (SOS) method has attracted increasing interest as a new approach for stability analysis and controller design of nonlinear dynamic systems. Through the application of the SOS method, one can describe a stability analysis or control design problem as a convex optimization problem, which can efficiently be solved using Semidefinite Programming (SDP) solvers. For nominal systems, the SOS method can provide a reliable and fast approach for stability analysis and control design for low-order systems defined over the space of relatively low-degree polynomials. However, the SOS method is not well suited for control problems involving uncertain systems, especially those with a relatively high number of uncertainties or those with a non-affine uncertainty structure. In order to avoid issues relating to the increased complexity of the SOS problems for uncertain systems, we present an algorithm that can be used to transform an SOS problem with uncertainties into an LMI problem with uncertainties. A new Probabilistic Ellipsoid Algorithm (PEA) is given to solve the robust LMI problem, which can guarantee the feasibility of a given solution candidate with an a priori fixed probability of violation and with a fixed confidence level. We also introduce two approaches to approximate the robust region of attraction (RROA) for uncertain nonlinear systems with non-affine dependence on uncertainties. The first approach is based on a combination of the PEA and the SOS method and searches for a common Lyapunov function, while the second approach is based on the generalized Polynomial Chaos (gPC) expansion theorem combined with the SOS method and searches for parameter-dependent Lyapunov functions. The control design problem is investigated through a case study of a hypersonic aircraft model with parametric uncertainties. Through time-scale decomposition and a series of function approximations, the complexity of the aircraft model is reduced to fall within the capability of SDP solvers. The control design problem is then formulated as a convex problem using the dual of the Lyapunov theorem. A nonlinear robust controller is sought using the combined PEA/SOS method. The response of the uncertain aircraft model is evaluated for two sets of pilot commands. As the simulation results show, the aircraft remains stable under up to 50% uncertainty in aerodynamic coefficients and can follow the pilot commands.
The influence of parametric and external noise in act-and-wait control with delayed feedback.
Wang, Jiaxing; Kuske, Rachel
2017-11-01
We apply several novel semi-analytic approaches for characterizing and calculating the effects of noise in a system with act-and-wait control. For concrete illustration, we apply these to a canonical balance model for an inverted pendulum to study the combined effect of delay and noise within the act-and-wait setting. While the act-and-wait control facilitates strong stabilization through deadbeat control, a comparison of different models with continuous vs. discrete updating of the control strategy in the active period illustrates how delays combined with the imprecise application of the control can seriously degrade the performance. We give several novel analyses of a generalized act-and-wait control strategy, allowing flexibility in the updating of the control strategy, in order to understand the sensitivities to delays and random fluctuations. In both the deterministic and stochastic settings, we give analytical and semi-analytical results that characterize and quantify the dynamics of the system. These results include the size and shape of stability regions, densities for the critical eigenvalues that capture the rate of reaching the desired stable equilibrium, and amplification factors for sustained fluctuations in the context of external noise. They also provide the dependence of these quantities on the length of the delay and the active period. In particular, we see that the combined influence of delay, parametric error, or external noise and on-off control can qualitatively change the dynamics, thus reducing the robustness of the control strategy. We also capture the dependence on how frequently the control is updated, allowing an interpolation between continuous and frequent updating. In addition to providing insights for these specific models, the methods we propose are generalizable to other settings with noise, delay, and on-off control, where analytical techniques are otherwise severely scarce.
Power flow analysis of two coupled plates with arbitrary characteristics
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
The limitation of keeping two plates identical is removed and the vibrational power input and output are evaluated for different area ratios, plate thickness ratios, and for different values of the structural damping loss factor for the source plate (plate with excitation) and the receiver plate. In performing this parametric analysis, the source plate characteristics are kept constant. The purpose of this parametric analysis is to be able to determine the most critical parameters that influence the flow of vibrational power from the source plate to the receiver plate. In the case of the structural damping parametric analysis, the influence of changes in the source plate damping is also investigated. As was done previously, results obtained from the mobility power flow approach will be compared to results obtained using a statistical energy analysis (SEA) approach. The significance of the power flow results is discussed, together with a comparison between the SEA results and the mobility power flow results. Furthermore, the benefits that can be derived from using the mobility power flow approach are also examined.
Pixel-based parametric source depth map for Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Altabella, L.; Boschi, F.; Spinelli, A. E.
2016-01-01
Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (e.g., 32P, 18F, 90Y) has been investigated using both 3D multispectral approaches and multiview methods. Difficulty in achieving convergence with 3D algorithms can discourage the use of this technique for recovering source depth and intensity. For these reasons, we developed a faster 2D corrected approach based on multispectral acquisitions, to obtain source depth and its intensity using a pixel-based fitting of source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method to obtain the parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for Monte Carlo simulations and 5-6% for experimental data. Using this method we are able to obtain reliable information about the source depth of Cerenkov luminescence with a simple and flexible procedure.
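A heavily simplified version of the pixel-based idea is sketched below: assuming a purely exponential attenuation model for two spectral bands with known effective attenuation coefficients, the depth follows in closed form per pixel. The coefficients and images are synthetic, and photon diffusion is ignored, so this is only an illustration of the fitting principle, not the paper's method.

    import numpy as np

    # Hypothetical effective attenuation coefficients (1/cm) for two spectral bands.
    mu1, mu2 = 1.2, 0.6

    # Synthetic phantom: true depth map and noisy two-band images under
    # I(lambda) = I0 * exp(-mu(lambda) * d).
    rng = np.random.default_rng(3)
    true_depth = 0.5 + 0.3 * rng.random((64, 64))        # cm
    I0 = 1000.0
    img1 = I0 * np.exp(-mu1 * true_depth) * (1 + 0.02 * rng.standard_normal((64, 64)))
    img2 = I0 * np.exp(-mu2 * true_depth) * (1 + 0.02 * rng.standard_normal((64, 64)))

    # Closed-form pixel-based depth map: d = ln(I2/I1) / (mu1 - mu2).
    depth_map = np.log(img2 / img1) / (mu1 - mu2)
    err = 100 * np.abs(depth_map - true_depth) / true_depth
    print(f"median relative error: {np.median(err):.1f}%")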
Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).
Thatcher, R W; North, D; Biver, C
2005-01-01
This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2 second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01 parametric Z score cross-validation false positives were 0.26% and ranged from 6.65% to 0% false positives. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The nonparametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives of the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved by the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
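The parametric leave-one-out classification step can be sketched compactly: withdraw one subject, compute Z scores against the remaining subjects, and count pixels exceeding the two-tailed critical value. The data below are synthetic log-normal values standing in for the *.lor pixel measures.

    import numpy as np

    # Synthetic stand-in for 43 normal subjects x 2,394 gray matter pixels.
    rng = np.random.default_rng(4)
    subjects = rng.lognormal(mean=0.0, sigma=0.3, size=(43, 2394))
    x = np.log10(subjects)                # log10 transform to approximate Gaussianity

    z_crit = 1.96                          # P < .05, two-tailed
    false_pos = 0.0
    for i in range(x.shape[0]):
        rest = np.delete(x, i, axis=0)     # leave one subject out
        z = (x[i] - rest.mean(axis=0)) / rest.std(axis=0, ddof=1)
        false_pos += np.mean(np.abs(z) > z_crit)   # fraction of pixels flagged abnormal

    print(f"average misclassification rate: {100 * false_pos / x.shape[0]:.2f}%")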
ERIC Educational Resources Information Center
Reise, Steven P.; Meijer, Rob R.; Ainsworth, Andrew T.; Morales, Leo S.; Hays, Ron D.
2006-01-01
Group-level parametric and non-parametric item response theory models were applied to the Consumer Assessment of Healthcare Providers and Systems (CAHPS[R]) 2.0 core items in a sample of 35,572 Medicaid recipients nested within 131 health plans. Results indicated that CAHPS responses are dominated by within health plan variation, and only weakly…
Dokoumetzidis, Aristides; Aarons, Leon
2005-08-01
We investigated the propagation of population pharmacokinetic information across clinical studies by applying Bayesian techniques. The aim was to summarize the population pharmacokinetic estimates of a study in appropriate statistical distributions in order to use them as Bayesian priors in subsequent population pharmacokinetic analyses. Various data sets of simulated and real clinical data were fitted with WinBUGS, with and without informative priors. The posterior estimates of fittings with non-informative priors were used to build parametric informative priors, and the whole procedure was carried out sequentially. The posterior distributions of the fittings with informative priors were compared to those of the meta-analysis fittings of the respective combinations of data sets. Good agreement was found for the simulated and experimental datasets when the populations were exchangeable, with the posterior distributions from the fittings with the prior being nearly identical to those estimated with meta-analysis. However, when populations were not exchangeable, an alternative parametric form for the prior, the natural conjugate prior, had to be used in order to have consistent results. In conclusion, the results of a population pharmacokinetic analysis may be summarized in Bayesian prior distributions that can be used consecutively with other analyses. The procedure is an alternative to meta-analysis and gives comparable results. It has the advantage that it is faster than the meta-analysis, owing to the large datasets used with the latter, and can be performed when the data included in the prior are not actually available.
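A minimal sketch of the summarization step is shown below: posterior draws from one analysis are condensed into a parametric distribution that can serve as an informative prior in the next analysis. The draws and the log-normal choice are illustrative stand-ins for a population pharmacokinetic parameter.

    import numpy as np
    from scipy import stats

    # Synthetic posterior draws for, e.g., a population clearance parameter.
    rng = np.random.default_rng(5)
    posterior_draws = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=4000)

    # Fit a log-normal to the draws (location fixed at 0) and use its parameters
    # as an informative prior in the next study's analysis.
    shape, loc, scale = stats.lognorm.fit(posterior_draws, floc=0)
    print(f"informative prior: lognormal(mu={np.log(scale):.3f}, sigma={shape:.3f})")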
Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data
Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao
2012-01-01
Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while keeping the asymptotic variance the same as that of the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying it to a real data set on mergers and acquisitions. PMID:23645976
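A toy, identity-link version of the guiding idea is sketched below: posit a parametric guide, remove it, smooth the remainder non-parametrically, and add the guide back. The data, the exponential guide, and the lowess smoother are illustrative choices, not the estimator studied in the paper.

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    # Synthetic one-covariate regression with an exponential trend plus a wiggle.
    rng = np.random.default_rng(6)
    x = np.sort(rng.uniform(-2, 2, 300))
    y = np.exp(0.8 * x) + 0.3 * np.sin(3 * x) + rng.normal(0, 0.2, 300)

    # Step 1: parametric guide from prior information (here, a crude exponential fit).
    coef = np.polyfit(x, np.log(np.clip(y, 1e-3, None)), 1)
    guide = np.exp(np.polyval(coef, x))

    # Step 2: non-parametric fit of the remainder, then add the guide back.
    remainder = lowess(y - guide, x, frac=0.2, return_sorted=False)
    fitted = guide + remainder
    print(f"residual SD with guide: {np.std(y - fitted):.3f}")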
Parametric Instability Rates in Periodically Driven Band Systems
NASA Astrophysics Data System (ADS)
Lellouch, S.; Bukov, M.; Demler, E.; Goldman, N.
2017-04-01
In this work, we analyze the dynamical properties of periodically driven band models. Focusing on the case of Bose-Einstein condensates, and using a mean-field approach to treat interparticle collisions, we identify the origin of dynamical instabilities arising from the interplay between the external drive and interactions. We present a widely applicable generic numerical method to extract instability rates and link parametric instabilities to uncontrolled energy absorption at short times. Based on the existence of parametric resonances, we then develop an analytical approach within Bogoliubov theory, which quantitatively captures the instability rates of the system and provides an intuitive picture of the relevant physical processes, including an understanding of how transverse modes affect the formation of parametric instabilities. Importantly, our calculations demonstrate agreement between the instability rates determined from numerical simulations and those predicted by theory. To determine the validity regime of the mean-field analysis, we compare the latter to the weakly coupled conserving approximation. The tools developed and the results obtained in this work are directly relevant to present-day ultracold-atom experiments based on shaken optical lattices and are expected to provide insightful guidance in the quest for Floquet engineering.
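A generic miniature of "instability rate from periodic driving" is given below using a single Mathieu-type mode: the monodromy matrix over one driving period is computed numerically and the growth rate is read off from its largest Floquet multiplier. This is not the Bogoliubov calculation of the paper; the parameters are arbitrary, chosen near the principal parametric resonance.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Mathieu-type mode: x'' + w0^2 * (1 + eps*cos(w t)) * x = 0.
    w0, eps, w = 1.0, 0.2, 2.0          # principal parametric resonance: w ~ 2*w0
    T = 2 * np.pi / w

    def rhs(t, y):
        x, v = y
        return [v, -w0**2 * (1 + eps * np.cos(w * t)) * x]

    # Monodromy matrix: propagate the two canonical initial conditions over one period.
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.array(cols).T

    # Instability (growth) rate from the largest Floquet multiplier.
    rate = np.log(np.max(np.abs(np.linalg.eigvals(M)))) / T
    print(f"instability rate: {rate:.4f} (positive means exponential growth)")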
A parametric model order reduction technique for poroelastic finite element models.
Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico
2017-10-01
This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained by rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.
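The proper-orthogonal-decomposition step can be illustrated independently of the Biot-Allard details: collect solution snapshots over the parameter (frequency) range, take the leading left singular vectors as a global reduced basis, and project the operator onto it. The matrices below are random stand-ins for the poroelastic FE system.

    import numpy as np

    rng = np.random.default_rng(7)
    n, n_snap, r = 400, 30, 8

    # Toy symmetric "system" matrix standing in for the FE operator.
    A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
    A = 0.5 * (A + A.T)

    # Snapshots: solutions over a sweep of the frequency-like parameter f.
    snapshots = np.column_stack([np.linalg.solve(A + f * np.eye(n), rng.standard_normal(n))
                                 for f in np.linspace(0.1, 3.0, n_snap)])

    # POD basis: leading left singular vectors of the snapshot matrix.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = U[:, :r]

    # Galerkin projection gives the reduced-order operator.
    A_red = V.T @ A @ V
    print(f"full size {A.shape}, reduced size {A_red.shape}, energy captured "
          f"{100 * np.sum(s[:r]**2) / np.sum(s**2):.1f}%")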
Caie, Peter D; Harrison, David J
2016-01-01
The field of pathology is rapidly transforming from a semiquantitative and empirical science toward a big data discipline. Large data sets from across multiple omics fields may now be extracted from a patient's tissue sample. Tissue is, however, complex, heterogeneous, and prone to artifact. A reductionist view of tissue and disease progression, which does not take this complexity into account, may lead to single biomarkers failing in clinical trials. The integration of standardized multi-omics big data and the retention of valuable information on spatial heterogeneity are imperative to model complex disease mechanisms. Mathematical modeling through systems pathology approaches is the ideal medium to distill the significant information from these large, multi-parametric, and hierarchical data sets. Systems pathology may also predict the dynamical response of disease progression or response to therapy regimens from a static tissue sample. Next-generation pathology will incorporate big data with systems medicine in order to personalize clinical practice for both prognostic and predictive patient care.
NASA Astrophysics Data System (ADS)
Koltai, Péter; Renger, D. R. Michiel
2018-06-01
One way to analyze complicated non-autonomous flows is through trying to understand their transport behavior. In a quantitative, set-oriented approach to transport and mixing, finite time coherent sets play an important role. These are time-parametrized families of sets with unlikely transport to and from their surroundings under small or vanishing random perturbations of the dynamics. Here we propose, as a measure of transport and mixing for purely advective (i.e., deterministic) flows, (semi)distances that arise under vanishing perturbations in the sense of large deviations. Analogously, for given finite Lagrangian trajectory data we derive a discrete-time-and-space semidistance that comes from the "best" approximation of the randomly perturbed process conditioned on this limited information of the deterministic flow. It can be computed as shortest path in a graph with time-dependent weights. Furthermore, we argue that coherent sets are regions of maximal farness in terms of transport and mixing, and hence they occur as extremal regions on a spanning structure of the state space under this semidistance—in fact, under any distance measure arising from the physical notion of transport. Based on this notion, we develop a tool to analyze the state space (or the finite trajectory data at hand) and identify coherent regions. We validate our approach on idealized prototypical examples and well-studied standard cases.
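The "shortest path in a graph with time-dependent weights" computation can be illustrated on a toy time-expanded graph; the cells, times, and edge costs below are invented, whereas in the paper the weights come from the large-deviation rate associated with the perturbed trajectory data.

    import networkx as nx

    # Nodes are (time, cell) pairs of a discretized state space; edges connect cells
    # at consecutive times; the weight encodes how unlikely that transport is.
    G = nx.DiGraph()
    times, cells = range(3), ["A", "B", "C"]
    cost = {("A", "A"): 0.1, ("A", "B"): 2.0, ("A", "C"): 3.0,
            ("B", "A"): 2.0, ("B", "B"): 0.1, ("B", "C"): 1.5,
            ("C", "A"): 3.0, ("C", "B"): 1.5, ("C", "C"): 0.1}
    for t in times[:-1]:
        for (u, v), w in cost.items():
            G.add_edge((t, u), (t + 1, v), weight=w)

    path = nx.shortest_path(G, source=(0, "A"), target=(2, "C"), weight="weight")
    length = nx.shortest_path_length(G, source=(0, "A"), target=(2, "C"), weight="weight")
    print(path, length)   # least "transport cost" route between the two space-time cells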
Comparison of two correlated ROC curves at a given specificity or sensitivity level
Bantis, Leonidas E.; Feng, Ziding
2017-01-01
The receiver operating characteristic (ROC) curve is the most popular statistical tool for evaluating the discriminatory capability of a given continuous biomarker. The need to compare two correlated ROC curves arises when individuals are measured with two biomarkers, which induces paired and thus correlated measurements. Many researchers have focused on comparing two correlated ROC curves in terms of the area under the curve (AUC), which summarizes the overall performance of the marker. However, particular values of specificity may be of interest. We focus on comparing two correlated ROC curves at a given specificity level. We propose parametric approaches, transformations to normality, and nonparametric kernel-based approaches. Our methods can be straightforwardly extended for inference in terms of ROC^-1(t). This is of particular interest for comparing the accuracy of two correlated biomarkers at a given sensitivity level. Extensions also involve inference for the AUC and accommodating covariates. We evaluate the robustness of our techniques through simulations, compare to other known approaches and present a real data application involving prostate cancer screening. PMID:27324068
Evaluation of design ventilation requirements for enclosed parking facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayari, A.; Krarti, M.
2000-07-01
This paper proposes a new design approach to determine the ventilation requirements for enclosed parking garages. The design approach accounts for various factors that affect the indoor air quality within a parking facility, including the average CO emission rate, the average travel time, the number of cars, and the acceptable CO level within the parking garage. This paper first describes the results of a parametric analysis based on the design method that was developed. Then the design method is presented to explain how the ventilation flow rate can be determined for any enclosed parking facility. Finally, some suggestions are proposed to save fan energy for ventilating parking garages using demand ventilation control strategies.
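Although the paper's design correlation is not reproduced here, a generic steady-state CO dilution balance shows how the listed factors (emission rate, travel time, number of cars, acceptable CO level) combine into a ventilation-rate estimate; all numbers below are hypothetical.

    # Generic steady-state CO dilution balance for an enclosed garage (textbook-style,
    # not the design correlation developed in the paper). All values are hypothetical.
    cars_per_hour    = 300     # peak arrivals/departures
    travel_time_s    = 90      # average in-garage travel time per car
    emission_g_min   = 3.0     # average CO emission per operating car, g/min
    co_limit_mg_m3   = 29.0    # acceptable CO concentration (~25 ppm)
    co_outdoor_mg_m3 = 2.0     # CO concentration of supply air

    cars_operating = cars_per_hour * travel_time_s / 3600.0          # cars running at once
    generation_mg_min = cars_operating * emission_g_min * 1000.0     # mg CO per minute

    # Required dilution airflow so that generation is balanced at the CO limit.
    airflow_m3_min = generation_mg_min / (co_limit_mg_m3 - co_outdoor_mg_m3)
    print(f"{cars_operating:.1f} cars operating, required airflow ~ {airflow_m3_min:.0f} m3/min")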
Daylight exposure and the other predictors of burnout among nurses in a University Hospital.
Alimoglu, Mustafa Kemal; Donmez, Levent
2005-07-01
The purpose of the study was to investigate whether daylight exposure in the work setting could be placed among the predictors of job burnout. The sample was composed of 141 nurses who work in Akdeniz University Hospital in Antalya, Turkey. All participants were asked to complete a personal data collection form, the Maslach Burnout Inventory, the Work Related Strain Inventory and the Work Satisfaction Questionnaire to collect data about their burnout, work-related stress (WRS) and job satisfaction (JS) levels in addition to personal characteristics. Descriptive statistics, parametric and non-parametric tests and correlation analysis were used in statistical analyses. Daylight exposure showed no direct effect on burnout, but it had an indirect effect via WRS and JS. Exposure to daylight for at least 3 h a day was found to cause less stress and higher satisfaction at work. Suffering from sleep disorders, younger age, job-related health problems and educational level were found to have total or partial direct effects on burnout. Night shifts may lead to burnout via work-related strain, while working in inpatient services and dissatisfaction with annual income may act via job dissatisfaction. This study confirmed some established predictors of burnout and provided data on an unexplored area. Daylight exposure may thus influence job burnout.
NASA Astrophysics Data System (ADS)
Brauer, C.; Teuling, R.; Torfs, P.; Uijlenhoet, R.
2014-12-01
Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS) to fill the gap between complex, spatially distributed models which are often used in lowland regions and simple, parametric models which have mostly been developed for mountainous catchments. This parametric rainfall-runoff model can be used all over the world, both in freely draining lowland catchments and polders with controlled water levels. Here, we present the model implementation and our recent experience in training students and practitioners to use the model. WALRUS has several advantages that facilitate practical application. Firstly, WALRUS is computationally efficient, which allows for operational forecasting and uncertainty estimation by running ensembles. Secondly, the code is set up such that it can be used by both practitioners and researchers. For direct use by practitioners, defaults are implemented for relations between model variables and for the computation of initial conditions based on discharge only, leaving only four parameters which require calibration. For research purposes, the defaults can easily be changed. Finally, an approach for flexible time steps increases numerical stability and makes model parameter values independent of time step size, which facilitates use of the model with the same parameter set for multi-year water balance studies as well as detailed analyses of individual flood peaks. The open source model code is currently implemented in R and compiled into a package. This package will be made available through the R CRAN server. A small massive open online course (MOOC) is being developed to give students, researchers and practitioners step-by-step WALRUS training. This course contains explanations of the model elements and of the model's advantages and limitations, as well as hands-on exercises to learn how to use WALRUS. All code, course, literature and examples will be collected on a dedicated website, which can be found via www.wageningenur.nl/hwm. References: C.C. Brauer et al. (2014a), Geosci. Model Dev. Discuss., 7, 1357-1411; C.C. Brauer et al. (2014b), Hydrol. Earth Syst. Sci. Discuss., 11, 2091-2148.
A climatology of gravity wave parameters based on satellite limb soundings
NASA Astrophysics Data System (ADS)
Ern, Manfred; Trinh, Quang Thai; Preusse, Peter; Riese, Martin
2017-04-01
Gravity waves are one of the main drivers of atmospheric dynamics. The resolution of most global circulation models (GCMs) and chemistry climate models (CCMs), however, is too coarse to properly resolve the small scales of gravity waves. Horizontal scales of gravity waves are in the range of tens to a few thousand kilometers. Gravity wave source processes involve even smaller scales. Therefore, GCMs/CCMs usually parametrize the effect of gravity waves on the global circulation. These parametrizations are very simplified, and comparisons with global observations of gravity waves are needed for an improvement of parametrizations and an alleviation of model biases. In our study, we present a global data set of gravity wave distributions observed in the stratosphere and the mesosphere by the infrared limb sounding satellite instruments High Resolution Dynamics Limb Sounder (HIRDLS) and Sounding of the Atmosphere using Broadband Emission Radiometry (SABER). We provide various gravity wave parameters (for example, gravity wave variances, potential energies, and absolute momentum fluxes). This comprehensive climatological data set can serve for comparison with other instruments (ground-based, airborne, or other satellite instruments), as well as for comparison with gravity wave distributions, both resolved and parametrized, in GCMs and CCMs. The purpose of providing various different parameters is to make our data set useful for a large number of potential users and to overcome limitations of other observation techniques, or of models, that may be able to provide only one of those parameters. We present a climatology of typical average global distributions and of zonal averages, as well as their natural range of variations. In addition, we discuss seasonal variations of the global distribution of gravity waves, as well as limitations of our method of deriving gravity wave parameters from satellite data.
Model-independent fit to Planck and BICEP2 data
NASA Astrophysics Data System (ADS)
Barranco, Laura; Boubekeur, Lotfi; Mena, Olga
2014-09-01
Inflation is the leading theory to describe elegantly the initial conditions that led to structure formation in our Universe. In this paper, we present a novel phenomenological fit to the Planck, WMAP polarization (WP) and the BICEP2 data sets using an alternative parametrization. Instead of starting from inflationary potentials and computing the inflationary observables, we use a phenomenological parametrization due to Mukhanov, describing inflation by an effective equation of state, in terms of the number of e-folds and two phenomenological parameters α and β. Within such a parametrization, which captures the different inflationary models in a model-independent way, the values of the scalar spectral index ns, its running and the tensor-to-scalar ratio r are predicted, given a set of parameters (α, β). We perform a Markov Chain Monte Carlo analysis of these parameters, and we show that the combined analysis of Planck and WP data favors the Starobinsky and Higgs inflation scenarios. Assuming that the BICEP2 signal is not entirely due to foregrounds, the addition of this last data set prefers instead the ϕ2 chaotic models. The constraint we get from Planck and WP data alone on the derived tensor-to-scalar ratio is r < 0.18 at 95% C.L., a value which is consistent with the one quoted by the BICEP2 Collaboration analysis, r = 0.16 +0.06/-0.05, after foreground subtraction. This is not necessarily at odds with the 2σ tension found between Planck and BICEP2 measurements when analyzing data in terms of the usual ns and r parameters, given that the parametrization used here, for the preferred value ns≃0.96, allows only for a restricted parameter space in the usual (ns,r) plane.
Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes
Berhane, Tedros M.; Lane, Charles R.; Wu, Qiusheng; Anenkhonov, Oleg A.; Chepinoga, Victor V.; Autrey, Bradley C.; Liu, Hongxing
2018-01-01
Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar’s chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection—which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes. PMID:29707381
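The pixel-based random forest workflow can be sketched with scikit-learn: stack the predictor layers, flatten to a pixel-by-feature table, train on labeled samples, and predict a class map. The three-layer raster and "wetland" labels below are synthetic stand-ins for the Quickbird band, WRI, and texture layers.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic three-layer stack and a fake "wetland" truth map.
    rng = np.random.default_rng(8)
    rows, cols, n_layers = 100, 100, 3
    stack = rng.random((rows, cols, n_layers))
    labels = (stack[:, :, 0] + 0.3 * stack[:, :, 1] > 0.7).astype(int)

    # Flatten to a pixel-by-feature table and split into training/testing samples.
    X = stack.reshape(-1, n_layers)
    y = labels.ravel()
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X_train, y_train)
    print(f"overall accuracy: {accuracy_score(y_test, rf.predict(X_test)):.3f}")
    class_map = rf.predict(X).reshape(rows, cols)   # per-pixel classified map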
NASA Astrophysics Data System (ADS)
Lewis, Debra
2013-05-01
Relative equilibria of Lagrangian and Hamiltonian systems with symmetry are critical points of appropriate scalar functions parametrized by the Lie algebra (or its dual) of the symmetry group. Setting aside the structures - symplectic, Poisson, or variational - generating dynamical systems from such functions highlights the common features of their construction and analysis, and supports the construction of analogous functions in non-Hamiltonian settings. If the symmetry group is nonabelian, the functions are invariant only with respect to the isotropy subgroup of the given parameter value. Replacing the parametrized family of functions with a single function on the product manifold and extending the action using the (co)adjoint action on the algebra or its dual yields a fully invariant function. An invariant map can be used to reverse the usual perspective: rather than selecting a parametrized family of functions and finding their critical points, conditions under which functions will be critical on specific orbits, typically distinguished by isotropy class, can be derived. This strategy is illustrated using several well-known mechanical systems - the Lagrange top, the double spherical pendulum, the free rigid body, and the Riemann ellipsoids - and generalizations of these systems.
A parametric LQ approach to multiobjective control system design
NASA Technical Reports Server (NTRS)
Kyr, Douglas E.; Buchner, Marc
1988-01-01
The synthesis of a constant parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple objective decision making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.
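A reduced illustration of trading off two quadratic objectives is given below: the two state-weighting matrices are scalarized with a sweep of weights and a standard LQR problem is solved at each point. This ignores the output-feedback structure constraint and the surrogate worth tradeoff machinery of the paper; the double-integrator system and weights are invented.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Double-integrator plant (hypothetical stand-in).
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    Q1 = np.diag([10.0, 0.1])        # objective 1: tight position tracking
    Q2 = np.diag([0.1, 10.0])        # objective 2: low velocity excursions
    R = np.array([[1.0]])

    # Sweep the scalarization weight to trace a trade-off curve of LQR designs.
    for w in (0.0, 0.25, 0.5, 0.75, 1.0):
        Q = w * Q1 + (1 - w) * Q2
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain
        print(f"w = {w:.2f}, gain K = {np.round(K, 2)}")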
Joint confidence region estimation for area under ROC curve and Youden index.
Yin, Jingjing; Tian, Lili
2014-03-15
In the field of diagnostic studies, the area under the ROC curve (AUC) serves as an overall measure of a biomarker/diagnostic test's accuracy. Youden index, defined as the overall correct classification rate minus one at the optimal cut-off point, is another popular index. For continuous biomarkers of binary disease status, although researchers mainly evaluate the diagnostic accuracy using AUC, for the purpose of making diagnosis, Youden index provides an important and direct measure of the diagnostic accuracy at the optimal threshold and hence should be taken into consideration in addition to AUC. Furthermore, AUC and Youden index are generally correlated. In this paper, we initiate the idea of evaluating diagnostic accuracy based on AUC and Youden index simultaneously. As the first step toward this direction, this paper only focuses on the confidence region estimation of AUC and Youden index for a single marker. We present both parametric and non-parametric approaches for estimating joint confidence region of AUC and Youden index. We carry out extensive simulation study to evaluate the performance of the proposed methods. In the end, we apply the proposed methods to a real data set. Copyright © 2013 John Wiley & Sons, Ltd.
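A non-parametric sketch of estimating the two indices jointly is shown below: AUC in its Mann-Whitney form, Youden index as the maximum of sensitivity plus specificity minus one over candidate cut-offs, and a bootstrap of the two statistics computed together, whose cloud could then be turned into a joint confidence region. The biomarker data are synthetic.

    import numpy as np

    rng = np.random.default_rng(9)
    healthy  = rng.normal(0.0, 1.0, 150)     # synthetic biomarker values
    diseased = rng.normal(1.2, 1.0, 120)

    def auc_youden(x0, x1):
        # AUC via the Mann-Whitney form; Youden J maximized over candidate cut-offs.
        auc = np.mean(x1[:, None] > x0[None, :]) + 0.5 * np.mean(x1[:, None] == x0[None, :])
        cuts = np.unique(np.concatenate([x0, x1]))
        youden = max(np.mean(x1 > c) + np.mean(x0 <= c) - 1.0 for c in cuts)
        return auc, youden

    # Bootstrap both indices from the same resamples so their correlation is preserved.
    boot = np.array([auc_youden(rng.choice(healthy, healthy.size, replace=True),
                                rng.choice(diseased, diseased.size, replace=True))
                     for _ in range(500)])
    est = auc_youden(healthy, diseased)
    print(f"AUC = {est[0]:.3f}, Youden J = {est[1]:.3f}, "
          f"bootstrap correlation = {np.corrcoef(boot.T)[0, 1]:.2f}")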
Forensic discrimination of copper wire using trace element concentrations.
Dettman, Joshua R; Cassabaum, Alyssa A; Saunders, Christopher P; Snyder, Deanna L; Buscaglia, JoAnn
2014-08-19
Copper may be recovered as evidence in high-profile cases such as thefts and improvised explosive device incidents; comparison of copper samples from the crime scene and those associated with the subject of an investigation can provide probative associative evidence and investigative support. A solution-based inductively coupled plasma mass spectrometry method for measuring trace element concentrations in high-purity copper was developed using standard reference materials. The method was evaluated for its ability to use trace element profiles to statistically discriminate between copper samples considering the precision of the measurement and manufacturing processes. The discriminating power was estimated by comparing samples chosen on the basis of the copper refining and production process to represent the within-source (samples expected to be similar) and between-source (samples expected to be different) variability using multivariate parametric- and empirical-based data simulation models with bootstrap resampling. If the false exclusion rate is set to 5%, >90% of the copper samples can be correctly determined to originate from different sources using a parametric-based model and >87% with an empirical-based approach. These results demonstrate the potential utility of the developed method for the comparison of copper samples encountered as forensic evidence.
Trends and associated uncertainty in the global mean temperature record
NASA Astrophysics Data System (ADS)
Poppick, A. N.; Moyer, E. J.; Stein, M.
2016-12-01
Physical models suggest that the Earth's mean temperature warms in response to changing CO2 concentrations (and hence increased radiative forcing); given physical uncertainties in this relationship, the historical temperature record is a source of empirical information about global warming. A persistent thread in many analyses of the historical temperature record, however, is the reliance on methods that appear to deemphasize both physical and statistical assumptions. Examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for natural variability in nonparametric rather than parametric ways. We show here that methods that deemphasize assumptions can limit the scope of analysis and can lead to misleading inferences, particularly in the setting considered where the data record is relatively short and the scale of temporal correlation is relatively long. A proposed model that is simple but physically informed provides a more reliable estimate of trends and allows a broader array of questions to be addressed. In accounting for uncertainty, we also illustrate how parametric statistical models that are attuned to the important characteristics of natural variability can be more reliable than ostensibly more flexible approaches.
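A minimal sketch of the kind of simple, physically informed model described above: regress temperature anomalies on radiative forcing (rather than on time) with AR(1) errors standing in for natural variability. The forcing series, coefficients, and noise parameters below are invented stand-ins, not the authors' data or model.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    years = np.arange(1880, 2016)
    forcing = 0.02 * (years - years[0])          # stand-in radiative forcing series (W m^-2)
    noise = np.zeros(years.size)
    for t in range(1, years.size):               # AR(1) "natural variability"
        noise[t] = 0.6 * noise[t - 1] + rng.normal(0.0, 0.1)
    temp = 0.4 * forcing + noise                 # synthetic anomaly record (deg C)

    X = sm.add_constant(forcing)
    res = sm.GLSAR(temp, X, rho=1).iterative_fit(maxiter=10)   # regression with AR(1) errors
    print("sensitivity to forcing:", res.params[1], "+/-", res.bse[1])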
Isoscalar and isovector giant resonances in a self-consistent phonon coupling approach
NASA Astrophysics Data System (ADS)
Lyutorovich, N.; Tselyaev, V.; Speth, J.; Krewald, S.; Grümmer, F.; Reinhard, P.-G.
2015-10-01
We present fully self-consistent calculations of isoscalar giant monopole and quadrupole as well as isovector giant dipole resonances in heavy and light nuclei. The description is based on Skyrme energy-density functionals determining the static Hartree-Fock ground state and the excitation spectra within random-phase approximation (RPA) and RPA extended by including the quasiparticle-phonon coupling at the level of the time-blocking approximation (TBA). All matrix elements were derived consistently from the given energy-density functional and calculated without any approximation. As a new feature in these calculations, the single-particle continuum was included, thus avoiding the artificial discretization usually employed in RPA and TBA. The step to include phonon coupling in TBA leads to small, but systematic, downshifts of the centroid energies of the giant resonances. These shifts are similar in size for all Skyrme parametrizations investigated here. Finally, we demonstrate that one can find Skyrme parametrizations which deliver a good simultaneous reproduction of all three giant resonances within TBA.
A BEFORE AND AFTER TRIAL OF THE EFFECTIVENESS OF NETWORK ANALYSIS IN HEALTH OPERATIONS MANAGEMENT.
Bhalwar, R; Srivastava, M; Verma, S S; Vaze, M; Tilak, V W
1996-10-01
An intervention trial using a "before-and-after" approach was undertaken to address the question of whether network analysis, as a health managerial control tool, can favourably affect the delays that occur in planning and executing the antimalaria operations of a Station Health Organization in a large military station. The exposure variable of interest was intervention with a network diagram, by which the potential causes of delay along the various activities were assessed and remedial measures were introduced during the second year. Sample size was calculated using conventional alpha and beta error levels. The study indicated a definite beneficial outcome in that the operations could be both started and completed on time during the intervention year. There was a reduction in the time requirement in 5 out of the 9 activities, with an exact p value of 0.08 by both parametric and non-parametric tests. The use of network analysis in health care management is recommended.
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks.
Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek
2015-07-06
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org.
A design study for the addition of higher order parametric discrete elements to NASTRAN
NASA Technical Reports Server (NTRS)
Stanton, E. L.
1972-01-01
The addition of discrete elements to NASTRAN poses significant interface problems with the level 15.1 assembly modules and geometry modules. Potential problems in designing new modules for higher-order parametric discrete elements are reviewed in both areas. An assembly procedure is suggested that separates grid point degrees of freedom on the basis of admissibility. New geometric input data are described that facilitate the definition of surfaces in parametric space.
Mazzotta, Laura; Cozzani, Mauro; Mutinelli, Sabrina; Castaldo, Attilio; Silvestrini-Biavati, Armando
2013-01-01
Objectives. To build a 3D parametric model to detect the shape and volume of dental roots from a panoramic radiograph (PAN) of the patient. Materials and Methods. A PAN and a cone beam computed tomography (CBCT) of a patient were acquired. For each tooth, various parameters were considered (coronal and root lengths and widths); these were measured from both the CBCT and the PAN. The measures were compared to evaluate the accuracy level of the PAN measurements. Using CAD software, parametric models of an incisor and of a molar were constructed employing B-spline curves and free-form surfaces. PAN measures of teeth 2.1 and 3.6 were assigned to the parametric models; the same two teeth were segmented from the CBCT. The two models were superimposed to assess the accuracy of the parametric model. Results. The PAN measures proved to be accurate and comparable with all other measurements. From the model superimposition, the maximum error was 1.1 mm on the incisor crown and 2 mm at the molar furcation. Conclusion. This study shows that it is possible to build a 3D parametric model starting from 2D information with a clinically valid accuracy level. This can ultimately lead to a crown-root movement simulation.
A new look at cardiac defense: attention or emotion?
Vila, Jaime; Fernández, María Carmen; Pegalajar, Joaquín; Nieves Vera, María; Robles, Humbelina; Pérez, Nieves; Sánchez, María B; Ramírez, Isabel; Ruiz-Padial, Elisabeth
2003-05-01
The study of cardiac defense has a long tradition in psychological research both within the cognitive approach--linked to Pavlov, Sokolov, and Graham's work on sensory reflexes--and within the motivational one--linked to the work of Cannon and subsequent researchers on the concepts of activation and stress. These two approaches have been difficult to reconcile in the past. We summarize a series of studies on cardiac defense from a different perspective, which allows integration of the traditional approaches. This new perspective emphasizes a sequential process interpretation of the cardiac defense response. Results of descriptive and parametric studies, as well as those of studies examining the physiological and psychological mechanisms underlying the response, show a complex response pattern with both accelerative and decelerative components, with both sympathetic and parasympathetic influences, and with both attentional and emotional significance. The implications of this new look at cardiac defense are discussed in relation to defensive reactions in natural settings, the brain mechanisms controlling such reactions, and their effects on health and illness.
Siciliani, Luigi
2006-01-01
Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. The highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one-output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
Parametrization study of the land multiparameter VTI elastic waveform inversion
NASA Astrophysics Data System (ADS)
He, W.; Plessix, R.-É.; Singh, S.
2018-06-01
Multiparameter inversion of seismic data remains challenging due to the trade-off between the different elastic parameters and the non-uniqueness of the solution. The sensitivity of the seismic data to a given subsurface elastic parameter depends on the source and receiver ray/wave path orientations at the subsurface point. In a high-frequency approximation, this is commonly analysed through the study of the radiation patterns that indicate the sensitivity of each parameter versus the incoming (from the source) and outgoing (to the receiver) angles. In practice, this means that the inversion result becomes sensitive to the choice of parametrization, notably because the null-space of the inversion depends on this choice. We can use a least-overlapping parametrization that minimizes the overlaps between the radiation patterns, in which case each parameter is sensitive only in a restricted angle domain, or an overlapping parametrization that contains a parameter sensitive to all angles, in which case overlaps between the radiation patterns occur. Considering a multiparameter inversion in an elastic vertically transverse isotropic medium and a complex land geological setting, we show that the inversion with the least-overlapping parametrization gives less satisfactory results than with the overlapping parametrization. The difficulties come from the complex wave paths that make it difficult to predict the areas of sensitivity of each parameter. This shows that the parametrization choice should not only be based on the radiation pattern analysis but also on the angular coverage at each subsurface point, which depends on the geology and the acquisition layout.
NASA Astrophysics Data System (ADS)
Naik, Deepak kumar; Maity, K. P.
2018-03-01
Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extremely high-strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with good dimensional accuracy in less time. This work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece; stainless steel is a very extensively used material in the manufacturing industries. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach, with three levels selected for each process parameter. In all experiments, a clockwise cut direction was followed. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed on the measured data to evaluate the effect of each process parameter. The ANOVA reveals the effect of the input process parameters on the linear dimension along the X axis, and the analysis yields the optimal setting of the process parameter values for this dimension. The results of the investigation clearly show that a specific range of the input process parameters achieves improved machinability.
NASA Astrophysics Data System (ADS)
Kaucikas, M.; Warren, M.; Michailovas, A.; Antanavicius, R.; van Thor, J. J.
2013-02-01
This paper describes the investigation of an optical parametric oscillator (OPO) set-up based on two beta barium borate (BBO) crystals, where the interplay between the crystal orientations, cut angles and air dispersion substantially influenced the OPO performance, and especially the angular spectrum of the output beam. Theory suggests that if two BBO crystals are used in this type of design, they should be of different cuts. This paper aims to provide an experimental manifestation of this fact. Furthermore, it has been shown that air dispersion produces similar effects and should be taken into account. An x-ray crystallographic indexing of the crystals was performed as an independent test of the above conclusions.
NASA Astrophysics Data System (ADS)
Förner, Wolfgang
1992-03-01
Ab initio investigations of the bond alternation in butadiene are presented. The atomic basis sets applied range from minimal to split valence plus polarization quality. With the latter, the Hartree-Fock limit for the bond alternation is reached. Correlation is considered at the second-order Møller-Plesset many-body perturbation theory (MP2), linear coupled cluster doubles (L-CCD) and coupled cluster doubles (CCD) levels. For the smaller basis sets it is shown that for the bond alternation π-π correlations are essential while the effects of σ-σ and σ-π correlations are, though large, nearly independent of bond alternation. At the MP2 level the variation of σ-π correlation with bond alternation is surprisingly large. This is discussed as an artefact of MP2. Comparative Su-Schrieffer-Heeger (SSH) and Pariser-Parr-Pople (PPP) calculations show that these models in their usual parametrizations cannot reproduce the ab initio results.
Human discomfort response to noise combined with vertical vibration
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.
1979-01-01
An experimental investigation was conducted (1) to determine the effects of combined environmental noise and vertical vibration upon human subjective discomfort response, (2) to develop a model for the prediction of passenger discomfort response to the combined environment, and (3) to develop a set of noise-vibration curves for use as criteria in ride quality design. Subjects were exposed to parametric combinations of noise and vibrations through the use of a realistic laboratory simulator. Results indicated that accurate prediction of passenger ride comfort requires knowledge of both the level and frequency content of the noise and vibration components of a ride environment as well as knowledge of the interactive effects of combined noise and vibration. A design tool in the form of an empirical model of passenger discomfort response to combined noise and vertical vibration was developed and illustrated by several computational examples. Finally, a set of noise-vibration criteria curves was generated to illustrate the fundamental design trade-off possible between passenger discomfort and the noise-vibration levels that produce the discomfort.
Duarte, João Valente; Faustino, Ricardo; Lobo, Mercês; Cunha, Gil; Nunes, César; Ferreira, Carlos; Januário, Cristina; Castelo-Branco, Miguel
2016-10-01
Machado-Joseph Disease, inherited type 3 spinocerebellar ataxia (SCA3), is the most common form worldwide. Neuroimaging and neuropathology have consistently demonstrated cerebellar alterations. Here we aimed to discover whole-brain functional biomarkers, based on parametric performance-level-dependent signals. We assessed 13 patients with early SCA3 and 14 healthy participants. We used a combined parametric behavioral/functional neuroimaging design to investigate disease fingerprints, as a function of performance levels, coupled with structural MRI and voxel-based morphometry. Functional magnetic resonance imaging (fMRI) was designed to parametrically analyze behavior and neural responses to audio-paced bilateral thumb movements at temporal frequencies of 1, 3, and 5 Hz. Our performance-level-based design probing neuronal correlates of motor coordination enabled the discovery that neural activation and behavior show critical loss of parametric modulation specifically in SCA3, associated with frequency-dependent cortico/subcortical activation/deactivation patterns. Cerebellar/cortical rate-dependent dissociation patterns could clearly differentiate between groups irrespective of grey matter loss. Our findings suggest functional reorganization of the motor network and indicate a possible role of fMRI as a tool to monitor disease progression in SCA3. Accordingly, fMRI patterns proved to be potential biomarkers in early SCA3, as tested by receiver operating characteristic analysis of both behavior and neural activation at different frequencies. Discrimination analysis based on BOLD signal in response to the applied parametric finger-tapping task significantly often reached >80% sensitivity and specificity in single regions of interest. Functional fingerprints based on cerebellar and cortical BOLD performance-dependent signal modulation can thus be combined as diagnostic and/or therapeutic targets in hereditary ataxia. Hum Brain Mapp 37:3656-3668, 2016. © 2016 Wiley Periodicals, Inc.
A Robust Approach to Risk Assessment Based on Species Sensitivity Distributions.
Monti, Gianna S; Filzmoser, Peter; Deutsch, Roland C
2018-05-03
The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation. © 2018 Society for Risk Analysis.
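To make the inverse use of the SSD concrete, the sketch below computes HC5 as the 5% quantile of a log-normal SSD, once with the classical mean/SD fit and once with a median/MAD fit as a simple robust alternative; the toxicity values are invented and the estimators are generic illustrations, not the specific robust estimators proposed by the authors.

    import numpy as np
    from scipy import stats

    log_tox = np.log10([1.2, 3.4, 0.8, 5.6, 2.1, 9.7, 4.3, 1.9, 250.0])  # invented data, one outlier
    p = 0.05

    # Classical log-normal fit: sample mean and SD of the log concentrations
    hc5_classic = 10 ** (log_tox.mean() + stats.norm.ppf(p) * log_tox.std(ddof=1))

    # Robust fit: median and MAD (scaled to be consistent with the normal SD)
    med = np.median(log_tox)
    mad = stats.median_abs_deviation(log_tox, scale="normal")
    hc5_robust = 10 ** (med + stats.norm.ppf(p) * mad)
    print("HC5 classical:", hc5_classic, "HC5 robust:", hc5_robust)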
NASA Technical Reports Server (NTRS)
Toups, Larry; Simon, Matthew; Smitherman, David; Spexarth, Gary
2012-01-01
NASA's Human Space Flight Architecture Team (HAT) is a multi-disciplinary, cross-agency study team that conducts strategic analysis of integrated development approaches for human and robotic space exploration architectures. During each analysis cycle, HAT iterates and refines the definition of design reference missions (DRMs), which inform the definition of a set of integrated capabilities required to explore multiple destinations. An important capability identified in this capability-driven approach is habitation, which is necessary for crewmembers to live and work effectively during long duration transits to and operations at exploration destinations beyond Low Earth Orbit (LEO). This capability is captured by an element referred to as the Deep Space Habitat (DSH), which provides all equipment and resources for the functions required to support crew safety, health, and work including: life support, food preparation, waste management, sleep quarters, and housekeeping. The purpose of this paper is to describe the design of the DSH capable of supporting crew during exploration missions. First, the paper describes the functionality required in a DSH to support the HAT-defined exploration missions, the parameters affecting its design, and the assumptions used in the sizing of the habitat. Then, the process used for arriving at parametric sizing estimates to support additional HAT analyses is detailed. Finally, results from the HAT Cycle C DSH sizing are presented followed by a brief description of the remaining design trades and technological advancements necessary to enable the exploration habitation capability.
Predicting failure to return to work.
Mills, R
2012-08-01
The research question is: is it possible to predict, at the time of workers' compensation claim lodgement, which workers will have a prolonged return to work (RTW) outcome? This paper illustrates how a traditional analytic approach to the analysis of an existing large database can be insufficient to answer the research question, and suggests an alternative data management and analysis approach. This paper retrospectively analyses 9018 workers' compensation claims from two different workers' compensation jurisdictions in Australia (two data sets) over a 4-month period in 2007. De-identified data, submitted at the time of claim lodgement, were compared with RTW outcomes for up to 3 months. Analysis consisted of descriptive, parametric (analysis of variance and multiple regression), survival (proportional hazards) and data mining (partitioning) analysis. No significant associations were found on parametric analysis. Multiple associations were found between the predictor variables and RTW outcome on survival analysis, with marked differences being found between some sub-groups on partitioning--where diagnosis was found to be the strongest discriminator (particularly neck and shoulder injuries). There was a consistent trend for female gender to be associated with a prolonged RTW outcome. The supplied data were not sufficient to enable the development of a predictive model. If we want to predict early who will have a prolonged RTW in Australia, workers' compensation claim forms should be redesigned, data management improved and specialised analytic techniques used. © 2011 The Author. Internal Medicine Journal © 2011 Royal Australasian College of Physicians.
Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives.
Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan
2014-01-01
The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of the higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and its oriented object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Guibao; Wandel, Scott F.; Jovanovic, Igor
2014-02-15
We describe the production of 2.2-mJ, ∼6 optical-cycle-long mid-infrared laser pulses with a carrier wavelength of 2.05 μm in a two-stage β-BaB₂O₄ nondegenerate optical parametric amplifier design with a mixed phase matching scheme, which is pumped by a standard Ti:sapphire chirped-pulse amplification system. It is demonstrated that relatively high pulse energies, short pulse durations, high stability, and excellent beam profiles can be obtained using this simple approach, even without the use of optical parametric chirped-pulse amplification.
Evolution of spherical cavitation bubbles: Parametric and closed-form solutions
NASA Astrophysics Data System (ADS)
Mancas, Stefan C.; Rosu, Haret C.
2016-02-01
We present an analysis of the Rayleigh-Plesset equation for a three dimensional vacuous bubble in water. In the simplest case when the effects of surface tension are neglected, the known parametric solutions for the radius and time evolution of the bubble in terms of a hypergeometric function are briefly reviewed. By including the surface tension, we show the connection between the Rayleigh-Plesset equation and Abel's equation, and obtain the parametric rational Weierstrass periodic solutions following the Abel route. In the same Abel approach, we also provide a discussion of the nonintegrable case of nonzero viscosity for which we perform a numerical integration.
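For readers who want a feel for the bubble dynamics discussed above, the sketch below integrates the Rayleigh-Plesset equation for an empty (vacuous) bubble numerically, with surface tension included and viscosity neglected; the parameter values and stopping radius are illustrative assumptions, and the closed-form and Weierstrass solutions of the paper are not reproduced here.

    import numpy as np
    from scipy.integrate import solve_ivp

    rho, p_inf, sigma = 1000.0, 101325.0, 0.072   # water density, ambient pressure, surface tension (SI)

    def rp(t, y, sigma):
        R, Rdot = y
        # R*R'' + (3/2)*R'^2 = -p_inf/rho - 2*sigma/(rho*R)   (vacuous bubble, no viscosity)
        Rddot = (-p_inf / rho - 2.0 * sigma / (rho * R) - 1.5 * Rdot**2) / R
        return [Rdot, Rddot]

    def collapsed(t, y, sigma):
        return y[0] - 1e-6                        # stop when the radius reaches 1 micron
    collapsed.terminal = True

    R0 = 1e-3                                     # 1 mm bubble, initially at rest
    sol = solve_ivp(rp, (0.0, 2e-4), [R0, 0.0], args=(sigma,), max_step=1e-7, events=collapsed)
    print("approximate collapse time:", sol.t[-1], "s")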
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.
Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben
2017-06-06
Modeling complex time-course patterns is a challenging issue in microarray studies due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in omics data when studying their association with disease and health.
Ralph, Duncan K; Matsen, Frederick A
2016-01-01
VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM.
García del Barrio, J M; Ortega, M; Vázquez De la Cueva, A; Elena-Rosselló, R
2006-08-01
This paper mainly aims to study the influence of linear elements on the estimation of vascular plant species diversity in five Mediterranean landscapes modeled as land cover patch mosaics. These landscapes have several core habitats and a different set of linear elements--habitat edges or ecotones, roads or railways, rivers, streams and hedgerows on farm land--whose plant composition was examined. Secondly, it aims to check plant diversity estimation in Mediterranean landscapes using parametric and non-parametric procedures, with two indices: species richness and the Shannon index. Land cover types and landscape linear elements were identified from aerial photographs, and their spatial information was processed using GIS techniques. Field plots were selected using a stratified sampling design according to the relief and tree density of each habitat type. A 50x20 m2 multi-scale sampling plot was designed for the core habitats and across the main landscape linear elements. Richness and diversity of plant species were estimated by comparing the observed field data to the ICE (Incidence-based Coverage Estimator) and ACE (Abundance-based Coverage Estimator) non-parametric estimators. The species density, percentage of unique species, and alpha diversity per plot were significantly higher (p < 0.05) in linear elements than in core habitats. The ICE estimate of the number of species was 32% higher than the ACE estimate, which did not differ significantly from the observed values. Accumulated species richness in core habitats together with linear elements was significantly higher than that recorded only in the core habitats in all the landscapes. Conversely, the Shannon diversity index did not show significant differences.
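For orientation, the sketch below evaluates the standard Chao-Lee form of the ACE estimator on an invented abundance vector; the cut-off of 10 individuals for "rare" species follows the usual convention, and this is not the authors' field data or software.

    import numpy as np

    abund = np.array([14, 9, 6, 5, 3, 2, 2, 1, 1, 1, 1, 1])  # invented individuals-per-species counts
    rare = abund[abund <= 10]                                 # "rare" species (<= 10 individuals)
    S_abund = int(np.sum(abund > 10))
    S_rare = rare.size
    F1 = int(np.sum(rare == 1))                               # singletons
    N_rare = int(rare.sum())

    C_ace = 1.0 - F1 / N_rare                                 # estimated sample coverage of the rare group
    gamma2 = max((S_rare / C_ace) * np.sum(rare * (rare - 1)) / (N_rare * (N_rare - 1)) - 1.0, 0.0)
    S_ace = S_abund + S_rare / C_ace + (F1 / C_ace) * gamma2
    print("observed richness:", abund.size, "ACE estimate:", round(S_ace, 1))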
NASA Astrophysics Data System (ADS)
Paul, Subir; Nagesh Kumar, D.
2018-04-01
Hyperspectral (HS) data comprise the continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A spectral segmentation based on a non-parametric dependency measure (MI), rather than a linear and parametric dependency measure, is proposed to account for both linear and nonlinear inter-band dependencies when segmenting the HS bands. Morphological profiles are then created from the segmented spectral features to assimilate the spatial information in the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with a Gaussian kernel and Random Forest (RF), are used for classification of the three most popularly used HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
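A toy sketch of the band-grouping idea: compute a histogram-based mutual information between neighbouring bands and start a new spectral segment wherever the dependency drops. The cube, bin count, and threshold are arbitrary assumptions, and the stacked-autoencoder and morphological-profile stages are not shown.

    import numpy as np
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(2)
    cube = rng.random((100, 100, 50))             # toy hyperspectral cube: rows x cols x bands
    pixels = cube.reshape(-1, cube.shape[-1])

    def mi(a, b, bins=32):
        # Mutual information between two band images from a joint histogram
        c_xy = np.histogram2d(a, b, bins=bins)[0]
        return mutual_info_score(None, None, contingency=c_xy)

    mi_adj = np.array([mi(pixels[:, i], pixels[:, i + 1]) for i in range(pixels.shape[1] - 1)])
    threshold = mi_adj.mean() - mi_adj.std()      # arbitrary cut; a new segment starts below it
    breaks = np.where(mi_adj < threshold)[0] + 1
    segments = np.split(np.arange(pixels.shape[1]), breaks)
    print("segment sizes:", [len(s) for s in segments])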
Nonparametric Regression and the Parametric Bootstrap for Local Dependence Assessment.
ERIC Educational Resources Information Center
Habing, Brian
2001-01-01
Discusses ideas underlying nonparametric regression and the parametric bootstrap with an overview of their application to item response theory and the assessment of local dependence. Illustrates the use of the method in assessing local dependence that varies with examinee trait levels. (SLD)
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in Matlab technical computing language is provided online.
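The sketch below fits a Gaussian-windowed, quadratic-phase chirp to a noisy synthetic evoked potential, with scipy's differential evolution as a stand-in for the Particle Swarm Optimization used in the paper; the model form, parameter bounds, and signal are illustrative assumptions only.

    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0.0, 0.05, 500)                          # 50 ms epoch

    def chirp(params, t):
        a, t0, w, f0, k, phi = params
        env = a * np.exp(-0.5 * ((t - t0) / w) ** 2)         # Gaussian envelope
        return env * np.cos(2 * np.pi * (f0 * (t - t0) + k * (t - t0) ** 2) + phi)

    rng = np.random.default_rng(3)
    true = (1.0, 0.02, 0.005, 120.0, 2000.0, 0.3)
    sep = chirp(true, t) + 0.2 * rng.normal(size=t.size)     # synthetic SEP-like trace

    cost = lambda p: np.sum((sep - chirp(p, t)) ** 2)
    bounds = [(0.1, 3.0), (0.0, 0.05), (0.001, 0.02), (50.0, 300.0), (-5000.0, 5000.0), (-np.pi, np.pi)]
    fit = differential_evolution(cost, bounds, seed=0)
    print("estimated amplitude and latency:", fit.x[0], fit.x[1])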
Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.
2012-01-01
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds.
Parametric-Studies and Data-Plotting Modules for the SOAP
NASA Technical Reports Server (NTRS)
2008-01-01
"Parametric Studies" and "Data Table Plot View" are the names of software modules in the Satellite Orbit Analysis Program (SOAP). Parametric Studies enables parameterization of as many as three satellite or ground-station attributes across a range of values and computes the average, minimum, and maximum of a specified metric, the revisit time, or 21 other functions at each point in the parameter space. This computation produces a one-, two-, or three-dimensional table of data representing statistical results across the parameter space. Inasmuch as the output of a parametric study in three dimensions can be a very large data set, visualization is a paramount means of discovering trends in the data (see figure). Data Table Plot View enables visualization of the data table created by Parametric Studies or by another data source: this module quickly generates a display of the data in the form of a rotatable three-dimensional-appearing plot, making it unnecessary to load the SOAP output data into a separate plotting program. The rotatable three-dimensionalappearing plot makes it easy to determine which points in the parameter space are most desirable. Both modules provide intuitive user interfaces for ease of use.
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
A structural power flow approach for the analysis of structure-borne transmission of vibrations is used to analyze the influence of structural parameters on transmitted power. The parametric analysis is also performed using the Statistical Energy Analysis approach and the results are compared with those obtained using the power flow approach. The advantages of structural power flow analysis are demonstrated by comparing the type of results that are obtained by the two analytical methods. Also, to demonstrate that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental study of structural power flow is presented. This experimental study presents results for an L shaped beam for which an available solution was already obtained. Various methods to measure vibrational power flow are compared to study their advantages and disadvantages.
Photometric Supernova Classification with Machine Learning
NASA Astrophysics Data System (ADS)
Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.
2016-08-01
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
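A compact sketch of the two-stage pipeline structure (feature extraction, then a machine-learning classifier scored by AUC), using crude summary features and synthetic light curves; the SALT2 and wavelet feature sets of the paper are not reproduced, and all data here are invented.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)

    def features(flux):
        # Crude per-curve descriptors: peak flux, epoch of peak, width above half maximum, total flux
        return [flux.max(), flux.argmax(), (flux > 0.5 * flux.max()).sum(), flux.sum()]

    n, length = 1000, 60
    curves = rng.gamma(2.0, 1.0, (n, length)).cumsum(axis=1)   # toy "light curves"
    labels = rng.integers(0, 2, n)                              # synthetic type labels (1 = Ia)
    X = np.array([features(c) for c in curves])

    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
    bdt = GradientBoostingClassifier().fit(Xtr, ytr)            # boosted decision trees
    print("AUC:", roc_auc_score(yte, bdt.predict_proba(Xte)[:, 1]))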
Parametric Model of an Aerospike Rocket Engine
NASA Technical Reports Server (NTRS)
Korte, J. J.
2000-01-01
A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.
Learning from Friends: Measuring Influence in a Dyadic Computer Instructional Setting
ERIC Educational Resources Information Center
DeLay, Dawn; Hartl, Amy C.; Laursen, Brett; Denner, Jill; Werner, Linda; Campe, Shannon; Ortiz, Eloy
2014-01-01
Data collected from partners in a dyadic instructional setting are, by definition, not statistically independent. As a consequence, conventional parametric statistical analyses of change and influence carry considerable risk of bias. In this article, we illustrate a strategy to overcome this obstacle: the longitudinal actor-partner interdependence…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hentschke, Clemens M.; Tönnies, Klaus D.; Beuing, Oliver
Purpose: The early detection of cerebral aneurysms plays a major role in preventing subarachnoid hemorrhage. The authors present a system to automatically detect cerebral aneurysms in multimodal 3D angiographic data sets. The authors’ system is parametrizable for contrast-enhanced magnetic resonance angiography (CE-MRA), time-of-flight magnetic resonance angiography (TOF-MRA), and computed tomography angiography (CTA). Methods: Initial volumes of interest are found by applying a multiscale sphere-enhancing filter. Several features are combined in a linear discriminant function (LDF) to distinguish between true aneurysms and false positives. The features include shape information, spatial information, and probability information. The LDF can either be parametrized by domain experts or automatically by training. Vessel segmentation is avoided as it could heavily influence the detection algorithm. Results: The authors tested their method with 151 clinical angiographic data sets containing 112 aneurysms. The authors reach a sensitivity of 95% with CE-MRA data sets at an average false positive rate per data set (FP_DS) of 8.2. For TOF-MRA, the authors achieve 95% sensitivity at 11.3 FP_DS. For CTA, the authors reach a sensitivity of 95% at 22.8 FP_DS. For all modalities, the expert parametrization led to similar or better results than the trained parametrization, eliminating the need for training. 93% of aneurysms that were smaller than 5 mm were found. The authors also showed that their algorithm is capable of detecting aneurysms that were previously overlooked by radiologists. Conclusions: The authors present an automatic system to detect cerebral aneurysms in multimodal angiographic data sets. The system proved to be a suitable computer-aided detection tool to help radiologists find cerebral aneurysms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duesbery, M.S.
1993-11-30
This program aims at improving current methods of lifetime assessment by building in the characteristics of the micro-mechanisms known to be responsible for damage and failure. The broad approach entails the integration and, where necessary, augmentation of the micro-scale research results currently available in the literature into a macro-scale model with predictive capability. In more detail, the program will develop a set of hierarchically structured models at different length scales, from atomic to macroscopic, at each level taking as parametric input the results of the model at the next smaller scale. In this way the known microscopic properties can be transported by systematic procedures to the unknown macro-scale region. It may not be possible to eliminate empiricism completely, because some of the quantities involved cannot yet be estimated to the required degree of precision. In this case the aim will be at least to eliminate functional empiricism. Restriction of empiricism to the choice of parameters to be input to known functional forms permits some confidence in extrapolation procedures and has the advantage that the models can readily be updated as better estimates of the parameters become available.
NASA Technical Reports Server (NTRS)
Zhu, Lin-Fa; Kim, Soo; Chattopadhyay, Aditi; Goldberg, Robert K.
2004-01-01
A numerical procedure has been developed to investigate the nonlinear and strain rate dependent deformation response of polymer matrix composite laminated plates under high strain rate impact loadings. A recently developed strength of materials based micromechanics model, incorporating a set of nonlinear, strain rate dependent constitutive equations for the polymer matrix, is extended to account for the transverse shear effects during impact. Four different assumptions of transverse shear deformation are investigated in order to improve the developed strain rate dependent micromechanics model. The validities of these assumptions are investigated using numerical and theoretical approaches. A method to determine through the thickness strain and transverse Poisson's ratio of the composite is developed. The revised micromechanics model is then implemented into a higher order laminated plate theory which is modified to include the effects of inelastic strains. Parametric studies are conducted to investigate the mechanical response of composite plates under high strain rate loadings. Results show the transverse shear stresses cannot be neglected in the impact problem. A significant level of strain rate dependency and material nonlinearity is found in the deformation response of representative composite specimens.
DiffSplice: the genome-wide detection of differential splicing events with RNA-seq
Hu, Yin; Huang, Yan; Du, Ying; Orellana, Christian F.; Singh, Darshan; Johnson, Amy R.; Monroy, Anaïs; Kuan, Pei-Fen; Hammond, Scott M.; Makowski, Liza; Randell, Scott H.; Chiang, Derek Y.; Hayes, D. Neil; Jones, Corbin; Liu, Yufeng; Prins, Jan F.; Liu, Jinze
2013-01-01
The RNA transcriptome varies in response to cellular differentiation as well as environmental factors, and can be characterized by the diversity and abundance of transcript isoforms. Differential transcription analysis, the detection of differences between the transcriptomes of different cells, may improve understanding of cell differentiation and development and enable the identification of biomarkers that classify disease types. The availability of high-throughput short-read RNA sequencing technologies provides in-depth sampling of the transcriptome, making it possible to accurately detect the differences between transcriptomes. In this article, we present a new method for the detection and visualization of differential transcription. Our approach does not depend on transcript or gene annotations. It also circumvents the need for full transcript inference and quantification, which is a challenging problem because of short read lengths, as well as various sampling biases. Instead, our method takes a divide-and-conquer approach to localize the difference between transcriptomes in the form of alternative splicing modules (ASMs), where transcript isoforms diverge. Our approach starts with the identification of ASMs from the splice graph, constructed directly from the exons and introns predicted from RNA-seq read alignments. The abundance of alternative splicing isoforms residing in each ASM is estimated for each sample and is compared across sample groups. A non-parametric statistical test is applied to each ASM to detect significant differential transcription with a controlled false discovery rate. The sensitivity and specificity of the method have been assessed using simulated data sets and compared with other state-of-the-art approaches. Experimental validation using qRT-PCR confirmed a selected set of genes that are differentially expressed in a lung differentiation study and a breast cancer data set, demonstrating the utility of the approach applied on experimental biological data sets. The software of DiffSplice is available at http://www.netlab.uky.edu/p/bioinfo/DiffSplice. PMID:23155066
Illiquidity premium and expected stock returns in the UK: A new approach
NASA Astrophysics Data System (ADS)
Chen, Jiaqi; Sherif, Mohamed
2016-09-01
This study examines the relative importance of liquidity risk for the time-series and cross-section of stock returns in the UK. We propose a simple way to capture the multidimensionality of illiquidity. Our analysis indicates that existing illiquidity measures have considerable asset-specific components, which justifies our new approach. Further, we use an alternative test of the Amihud (2002) measure and parametric and non-parametric methods to investigate whether liquidity risk is priced in the UK. We find that the inclusion of the illiquidity factor in the capital asset pricing model plays a significant role in explaining the cross-sectional variation in stock returns, in particular with the Fama-French three-factor model. Further, using Hansen-Jagannathan non-parametric bounds, we find that the illiquidity-augmented capital asset pricing models yield a small distance error, whereas other non-liquidity-based models fail to yield economically plausible distance values. Our findings have important implications for managing the liquidity risk of equity portfolios.
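For reference, the Amihud (2002) measure used as a starting point above is the average daily ratio of absolute return to dollar volume; a minimal pandas sketch with invented data and column names follows.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(5)
    df = pd.DataFrame({                              # toy daily data for one stock
        "date": pd.date_range("2015-01-01", periods=250, freq="B"),
        "ret": rng.normal(0.0, 0.01, 250),
        "dollar_volume": rng.uniform(1e6, 5e6, 250),
    })
    df["illiq"] = df["ret"].abs() / df["dollar_volume"]
    monthly_illiq = df.set_index("date")["illiq"].resample("M").mean() * 1e6   # scaled for readability
    print(monthly_illiq.head())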
Sleep analysis for wearable devices applying autoregressive parametric models.
Mendez, M O; Villantieri, O; Bianchi, A; Cerutti, S
2005-01-01
We applied time-variant and time-invariant parametric models to recordings from both healthy subjects and patients with sleep disorders in order to assess the suitability of these approaches for sleep-disorder diagnosis in wearable devices. The recordings present the Obstructive Sleep Apnea (OSA) pathology, which is characterized by fluctuations in the heart rate: bradycardia during the apneic phase and tachycardia at the recovery of ventilation. Data come from a web database at www.physionet.org. During OSA, the spectral indexes obtained by time-variant lattice filters presented oscillations that correspond to the brady-tachycardia changes of the RR intervals, with greater values than in healthy subjects. Multivariate autoregressive models showed an increment in the very low frequency component (PVLF) at each apneic event. A rise in the high frequency component (PHF) also occurred at the restoration of breathing in the spectra of both the quadratic coherence and the cross-spectrum in OSA. These autoregressive parametric approaches could help in the diagnosis of sleep disorders within wearable devices.
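A minimal sketch of the kind of autoregressive spectral analysis described above: fit an AR model to an RR-interval series by the Yule-Walker equations, compute the parametric power spectrum, and integrate band powers such as PVLF and PHF. The AR order, band limits, and simulated RR series are illustrative assumptions, not the paper's settings.

```python
# Sketch: Yule-Walker AR spectrum of a (simulated) RR-interval series.
import numpy as np
from scipy.linalg import toeplitz, solve

def ar_yule_walker(x, order):
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    a = solve(toeplitz(r[:order]), r[1:order + 1])   # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:order + 1])        # innovation variance
    return a, sigma2

def ar_psd(a, sigma2, fs, freqs):
    k = np.arange(1, len(a) + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs / fs, k)) @ a) ** 2
    return sigma2 / fs / denom

fs = 4.0                                  # resampling frequency of the RR series (Hz)
rng = np.random.default_rng(2)
rr = (0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * np.arange(1200) / fs)
          + 0.02 * rng.standard_normal(1200))
a, s2 = ar_yule_walker(rr, order=12)
f = np.linspace(0.003, 0.5, 500)
psd = ar_psd(a, s2, fs, f)
for name, lo, hi in [("PVLF", 0.003, 0.04), ("PHF", 0.15, 0.4)]:
    band = (f >= lo) & (f < hi)
    print(name, np.trapz(psd[band], f[band]))
```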
The binned bispectrum estimator: template-based and non-parametric CMB non-Gaussianity searches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucher, Martin; Racine, Benjamin; Tent, Bartjan van, E-mail: bucher@apc.univ-paris7.fr, E-mail: benjar@uio.no, E-mail: vantent@th.u-psud.fr
2016-05-01
We describe the details of the binned bispectrum estimator as used for the official 2013 and 2015 analyses of the temperature and polarization CMB maps from the ESA Planck satellite. The defining aspect of this estimator is the determination of a map bispectrum (3-point correlation function) that has been binned in harmonic space. For a parametric determination of the non-Gaussianity in the map (the so-called f_NL parameters), one takes the inner product of this binned bispectrum with theoretically motivated templates. However, as a complementary approach one can also smooth the binned bispectrum using a variable smoothing scale in order to suppress noise and make coherent features stand out above the noise. This allows one to look in a model-independent way for any statistically significant bispectral signal. This approach is useful for characterizing the bispectral shape of the galactic foreground emission, for which a theoretical prediction of the bispectral anisotropy is lacking, and for detecting a serendipitous primordial signal, for which a theoretical template has not yet been put forth. Both the template-based and the non-parametric approaches are described in this paper.
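A schematic sketch of the template-fitting step described above: the amplitude of an f_NL-like parameter can be estimated as an inverse-variance-weighted inner product of the observed binned bispectrum with a theoretical template. The arrays, weights, and normalization here are placeholders, not Planck data or the pipeline's actual estimator.

```python
# Schematic amplitude fit of a binned bispectrum against a template.
import numpy as np

def fit_amplitude(b_obs, b_template, variance):
    """Weighted least-squares amplitude: argmin_A sum (b_obs - A*b_tmpl)^2 / var."""
    w = 1.0 / variance
    return np.sum(w * b_obs * b_template) / np.sum(w * b_template ** 2)

rng = np.random.default_rng(3)
n_bins = 500
template = rng.normal(size=n_bins)                 # binned template bispectrum (toy)
variance = rng.uniform(0.5, 2.0, size=n_bins)      # per-bin noise variance (toy)
observed = 2.5 * template + rng.normal(scale=np.sqrt(variance))
print(f"estimated amplitude = {fit_amplitude(observed, template, variance):.2f}")
```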
Implementation of Instrumental Variable Bounds for Data Missing Not at Random.
Marden, Jessica R; Wang, Linbo; Tchetgen, Eric J Tchetgen; Walter, Stefan; Glymour, M Maria; Wirth, Kathleen E
2018-05-01
Instrumental variables are routinely used to recover a consistent estimator of an exposure causal effect in the presence of unmeasured confounding. Instrumental variable approaches to account for nonignorable missing data also exist but are less familiar to epidemiologists. Like instrumental variables for exposure causal effects, instrumental variables for missing data rely on exclusion restriction and instrumental variable relevance assumptions. Yet these two conditions alone are insufficient for point identification. For estimation, researchers have invoked a third assumption, typically involving fairly restrictive parametric constraints. Inferences can be sensitive to these parametric assumptions, which are typically not empirically testable. The purpose of our article is to discuss another approach for leveraging a valid instrumental variable. Although the approach is insufficient for nonparametric identification, it can nonetheless provide informative inferences about the presence, direction, and magnitude of selection bias, without invoking a third untestable parametric assumption. An important contribution of this article is an Excel spreadsheet tool that can be used to obtain empirical evidence of selection bias and calculate bounds and corresponding Bayesian 95% credible intervals for a nonidentifiable population proportion. For illustrative purposes, we used the spreadsheet tool to analyze HIV prevalence data collected by the 2007 Zambia Demographic and Health Survey (DHS).
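As context for the bounding idea above, the following is a minimal sketch of the assumption-free (worst-case) bounds on a population proportion when outcomes are missing not at random. These are the widest possible bounds; the instrumental-variable approach described in the article narrows them under the exclusion-restriction and relevance assumptions, which are not implemented here. The numbers are illustrative only, not the Zambia DHS estimates.

```python
# Worst-case (no-assumption) bounds on a proportion with nonignorable missingness.
def worst_case_bounds(p_observed, response_rate):
    """Bounds on the full-population proportion given prevalence among
    respondents and the fraction of the sample that responded."""
    lower = p_observed * response_rate                        # all missing are negative
    upper = p_observed * response_rate + (1 - response_rate)  # all missing are positive
    return lower, upper

lo, hi = worst_case_bounds(p_observed=0.14, response_rate=0.80)
print(f"population prevalence bounded within [{lo:.3f}, {hi:.3f}]")
```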
Coupled oscillators in identification of nonlinear damping of a real parametric pendulum
NASA Astrophysics Data System (ADS)
Olejnik, Paweł; Awrejcewicz, Jan
2018-01-01
A damped parametric pendulum with friction is identified twice by means of precise and imprecise mathematical models. A laboratory test stand designed for experimental investigations of nonlinear effects determined by a viscous resistance and the stick-slip phenomenon serves as the model mechanical system. The influence of the accuracy of the mathematical modeling on the time variability of the nonlinear damping coefficient of the oscillator is demonstrated. The free decay response of the precisely and imprecisely modeled physical pendulum depends on two different time-varying coefficients of damping. The coefficients of the analyzed parametric oscillator are identified with a new semi-empirical method based on a coupled-oscillators approach, utilizing the fractional-order derivative of the discrete measurement series treated as an input to the numerical model. Results of applying the proposed method to identify the nonlinear coefficients of the damped parametric oscillator are illustrated and extensively discussed.
NASA Astrophysics Data System (ADS)
Voss, Paul L.; Köprülü, Kahraman G.; Kumar, Prem
2006-04-01
We present a quantum theory of nondegenerate phase-sensitive parametric amplification in a χ(3) nonlinear medium. The nonzero response time of the Kerr (χ(3)) nonlinearity determines the quantum-limited noise figure of χ(3) parametric amplification, as well as the limit on quadrature squeezing. This nonzero response time of the nonlinearity requires coupling of the parametric process to a molecular vibration phonon bath, causing the addition of excess noise through spontaneous Raman scattering. We present analytical expressions for the quantum-limited noise figure of frequency nondegenerate and frequency degenerate χ(3) parametric amplifiers operated as phase-sensitive amplifiers. We also present results for frequency nondegenerate quadrature squeezing. We show that our nondegenerate squeezing theory agrees with the degenerate squeezing theory of Boivin and Shapiro as degeneracy is approached. We have also included the effect of linear loss on the phase-sensitive process.
NASA Astrophysics Data System (ADS)
Jiang, Jin-Wu
2015-08-01
We propose parametrizing the Stillinger-Weber potential for covalent materials starting from the valence force-field model. All geometrical parameters in the Stillinger-Weber potential are determined analytically according to the equilibrium condition for each individual potential term, while the energy parameters are derived from the valence force-field model. This parametrization approach transfers the accuracy of the valence force-field model to the Stillinger-Weber potential. Furthermore, the resulting Stillinger-Weber potential supports stable molecular dynamics simulations, as each potential term is at an energy-minimum state separately at the equilibrium configuration. We employ this procedure to parametrize Stillinger-Weber potentials for single-layer MoS2 and black phosphorus. The obtained Stillinger-Weber potentials predict an accurate phonon spectrum and mechanical behaviors. We also provide input scripts of these Stillinger-Weber potentials for publicly available simulation packages including GULP and LAMMPS.
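For reference, a minimal sketch of the standard Stillinger-Weber two-body term, phi2(r) = A*eps*(B*(sigma/r)^p - (sigma/r)^q) * exp(sigma/(r - a*sigma)), which is the functional form being parametrized. The default numerical values below are loosely based on the original silicon parametrization and serve only as placeholders; they are not the MoS2 or black phosphorus parameters derived in the paper.

```python
# Sketch of the standard Stillinger-Weber two-body energy term.
import numpy as np

def sw_two_body(r, A=7.05, B=0.602, p=4.0, q=0.0, eps=1.0, sigma=2.0951, a=1.80):
    """Two-body SW energy; smoothly vanishes at the cutoff r = a*sigma."""
    r = np.asarray(r, dtype=float)
    inside = r < a * sigma
    phi = np.zeros_like(r)
    rr = r[inside]
    phi[inside] = (A * eps * (B * (sigma / rr) ** p - (sigma / rr) ** q)
                   * np.exp(sigma / (rr - a * sigma)))
    return phi

print(sw_two_body([2.0, 2.5, 3.5, 4.0]))   # last distance lies beyond the cutoff
```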
NASA Technical Reports Server (NTRS)
Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide
2014-01-01
There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning problems to QUBOs, the form of input required for a quantum annealing machine such as the D-Wave II.
Dalle Carbonare, S; Folli, F; Patrini, E; Giudici, P; Bellazzi, R
2013-01-01
The increasing demand for health care services and the complexity of health care delivery require Health Care Organizations (HCOs) to approach clinical risk management with proper methods and tools. An important aspect of risk management is to exploit the analysis of medical injuries compensation claims in order to reduce adverse events and, at the same time, to optimize the costs of health insurance policies. This work provides a probabilistic method to estimate the risk level of an HCO by computing quantitative risk indexes from medical injury compensation claims. Our method is based on the estimate of a loss probability distribution from compensation claims data through parametric and non-parametric modeling and Monte Carlo simulations. The loss distribution can be estimated both on the whole dataset and, thanks to the application of a Bayesian hierarchical model, on stratified data. The approach allows quantitative assessment of the risk structure of the HCO by analyzing the loss distribution and deriving its expected value and percentiles. We applied the proposed method to 206 cases of injuries with compensation requests collected from 1999 to the first semester of 2007 by the HCO of Lodi, in the northern part of Italy. We computed the risk indexes taking into account the different clinical departments and the different hospitals involved. The approach proved to be useful for understanding the HCO risk structure in terms of frequency, severity, and expected and unexpected loss related to adverse events.
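A minimal sketch of the loss-distribution idea described above: a Poisson claim frequency and a lognormal claim severity are combined by Monte Carlo simulation, and the expected loss and tail percentiles are read off the aggregate distribution. The parameter values are illustrative, not the Lodi HCO data, and the Bayesian hierarchical stratification described in the paper is omitted.

```python
# Sketch: Monte Carlo aggregate-loss distribution (Poisson frequency, lognormal severity).
import numpy as np

rng = np.random.default_rng(4)
n_sim = 100_000
lam = 25               # expected number of compensation claims per year (assumed)
mu, sigma = 9.5, 1.2   # lognormal severity parameters (assumed)

n_claims = rng.poisson(lam, size=n_sim)
annual_loss = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in n_claims])

print(f"expected annual loss : {annual_loss.mean():,.0f}")
print(f"95th percentile      : {np.percentile(annual_loss, 95):,.0f}")
print(f"99.5th percentile    : {np.percentile(annual_loss, 99.5):,.0f}")
```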
Reis, Yara; Wolf, Thomas; Brors, Benedikt; Hamacher-Brady, Anne; Eils, Roland; Brady, Nathan R.
2012-01-01
Mitochondria exist as a network of interconnected organelles undergoing constant fission and fusion. Current approaches to study mitochondrial morphology are limited by low data sampling coupled with manual identification and classification of complex morphological phenotypes. Here we propose an integrated mechanistic and data-driven modeling approach to analyze heterogeneous, quantified datasets and infer relations between mitochondrial morphology and apoptotic events. We initially performed high-content, multi-parametric measurements of mitochondrial morphological, apoptotic, and energetic states by high-resolution imaging of human breast carcinoma MCF-7 cells. Subsequently, decision tree-based analysis was used to automatically classify networked, fragmented, and swollen mitochondrial subpopulations, at the single-cell level and within cell populations. Our results revealed subtle but significant differences in morphology class distributions in response to various apoptotic stimuli. Furthermore, key mitochondrial functional parameters, including mitochondrial membrane potential and Bax activation, were measured under matched conditions. Data-driven fuzzy logic modeling was used to explore the non-linear relationships between mitochondrial morphology and apoptotic signaling, combining morphological and functional data as a single model. Modeling results are in accordance with previous studies, where Bax regulates mitochondrial fragmentation, and mitochondrial morphology influences mitochondrial membrane potential. In summary, we established and validated a platform for mitochondrial morphological and functional analysis that can be readily extended with additional datasets. We further discuss the benefits of a flexible systematic approach for elucidating specific and general relationships between mitochondrial morphology and apoptosis. PMID:22272225
Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.
Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman
2010-08-07
We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
Foote, Kenneth G
2012-05-01
Measurement of acoustic backscattering properties of targets requires removal of the range dependence of echoes. This process is called range compensation. For conventional sonars making measurements in the transducer farfield, the compensation removes effects of geometrical spreading and absorption. For parametric sonars consisting of a parametric acoustic transmitter and a conventional-sonar receiver, two additional range dependences require compensation when making measurements in the nonlinearly generated difference-frequency nearfield: an apparently increasing source level and a changing beamwidth. General expressions are derived for range compensation functions in the difference-frequency nearfield of parametric sonars. These are evaluated numerically for a parametric sonar whose difference-frequency band, effectively 1-6 kHz, is being used to observe Atlantic herring (Clupea harengus) in situ. Range compensation functions for this sonar are compared with corresponding functions for conventional sonars for the cases of single and multiple scatterers. Dependences of these range compensation functions on the parametric sonar transducer shape, size, acoustic power density, and hydrography are investigated. Parametric range compensation functions, when applied with calibration data, will enable difference-frequency echoes to be expressed in physical units of volume backscattering, and backscattering spectra, including fish-swimbladder-resonances, to be analyzed.
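As background to the compensation functions discussed above, a minimal sketch of the conventional-sonar time-varied-gain expressions is given below: 40 log R + 2*alpha*R for a single target and 20 log R + 2*alpha*R for volume scattering from multiple targets. The additional nearfield range dependences derived in the paper for parametric sonars (apparent source level and beamwidth) are not reproduced here, and the absorption coefficient is an assumed value.

```python
# Sketch: conventional-sonar range compensation (time-varied gain) in dB.
import numpy as np

def tvg_single_target(r_m, alpha_db_per_m):
    """Compensation for a single scatterer: geometric spreading + absorption."""
    return 40.0 * np.log10(r_m) + 2.0 * alpha_db_per_m * r_m

def tvg_volume_scattering(r_m, alpha_db_per_m):
    """Compensation for volume scattering from many scatterers."""
    return 20.0 * np.log10(r_m) + 2.0 * alpha_db_per_m * r_m

ranges = np.array([10.0, 50.0, 100.0, 200.0])   # metres
alpha = 0.00035                                 # dB/m, assumed value near a few kHz
print(tvg_single_target(ranges, alpha))
print(tvg_volume_scattering(ranges, alpha))
```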
Automatic 3D high-fidelity traffic interchange modeling using 2D road GIS data
NASA Astrophysics Data System (ADS)
Wang, Jie; Shen, Yuzhong
2011-03-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially for those existing in the real world. A real road network contains various elements such as road segments, road intersections, and traffic interchanges. Among them, traffic interchanges present the greatest modeling challenges due to their complexity and the lack of height information (vertical position) for traffic interchanges in existing road GIS data. This paper proposes a novel approach that can automatically produce 3D high-fidelity road network models, including traffic interchange models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of several steps. The raw road GIS data are first preprocessed to extract road network topology, merge redundant links, and classify road types. Then overlapped points in the interchanges are detected and their elevations are determined based on a set of level estimation rules. Parametric representations of the road centerlines are then generated through link segmentation and fitting, offering arbitrary levels of detail with reduced memory usage. Finally, a set of civil engineering rules for road design (e.g., cross slope, superelevation) are selected and used to generate realistic road surfaces. In addition to traffic interchange modeling, the proposed method also applies to other more general road elements. Preliminary results show that the proposed method is highly effective and useful in many applications.
Connectionist model-based stereo vision for telerobotics
NASA Technical Reports Server (NTRS)
Hoff, William; Mathis, Donald
1989-01-01
Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
Design of a terahertz parametric oscillator based on a resonant cavity in a terahertz waveguide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saito, K., E-mail: k-saito@material.tohoku.ac.jp; Oyama, Y.; Tanabe, T.
We demonstrate ns-pulsed pumping of terahertz (THz) parametric oscillations in a quasi-triply resonant cavity in a THz waveguide. The THz waves, down converted through parametric interactions between the pump and signal waves at telecom frequencies, are confined to a GaP single mode ridge waveguide. By combining the THz waveguide with a quasi-triply resonant cavity, the nonlinear interactions can be enhanced. A low threshold pump intensity for parametric oscillations can be achieved in the cavity waveguide. The THz output power can be maximized by optimizing the quality factors of the cavity so that an optical to THz photon conversion efficiency, η_p, of 0.35, which is near the quantum-limit level, can be attained. The proposed THz optical parametric oscillator can be utilized as an efficient and monochromatic THz source.
Acoustic attenuation design requirements established through EPNL parametric trades
NASA Technical Reports Server (NTRS)
Veldman, H. F.
1972-01-01
An optimization procedure was established for providing an acoustic lining configuration that is balanced with respect to engine performance losses and lining attenuation characteristics. The method determines acoustic attenuation design requirements through parametric trade studies using the subjective noise unit of effective perceived noise level (EPNL).
Myocardium tracking via matching distributions.
Ben Ayed, Ismail; Li, Shuo; Ross, Ian; Islam, Ali
2009-01-01
The goal of this study is to investigate automatic myocardium tracking in cardiac Magnetic Resonance (MR) sequences using global distribution matching via level-set curve evolution. Rather than relying on the pixelwise information as in existing approaches, distribution matching compares intensity distributions, and consequently, is well-suited to the myocardium tracking problem. Starting from a manual segmentation of the first frame, two curves are evolved in order to recover the endocardium (inner myocardium boundary) and the epicardium (outer myocardium boundary) in all the frames. For each curve, the evolution equation is sought following the maximization of a functional containing two terms: (1) a distribution matching term measuring the similarity of the non-parametric intensity distributions sampled from inside and outside the curve to the model distributions of the corresponding regions estimated from the previous frame; (2) a gradient term for smoothing the curve and biasing it toward high gradient of intensity. The Bhattacharyya coefficient is used as a similarity measure between distributions. The functional maximization is obtained by the Euler-Lagrange ascent equation of curve evolution, and efficiently implemented via level-set. The performance of the proposed distribution matching was quantitatively evaluated by comparisons with independent manual segmentations approved by an experienced cardiologist. The method was applied to ten 2D mid-cavity MR sequences corresponding to ten different subjects. Although neither shape prior knowledge nor curve coupling were used, quantitative evaluation demonstrated that the results were consistent with manual segmentations. The proposed method compares well with existing methods. The algorithm also yields a satisfying reproducibility. Distribution matching leads to a myocardium tracking which is more flexible and applicable than existing methods because the algorithm uses only the current data, i.e., does not require training, and consequently, the solution is not bounded to some shape/intensity prior information learned from a finite training set.
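A minimal sketch of the similarity measure named above: the Bhattacharyya coefficient between two normalized intensity histograms. The histograms below are built from synthetic pixel samples standing in for the region inside the evolving curve and the model region from the previous frame, not from cardiac MR data.

```python
# Sketch: Bhattacharyya coefficient between two normalized intensity histograms.
import numpy as np

def bhattacharyya_coefficient(p, q):
    """BC(p, q) = sum_i sqrt(p_i * q_i) for normalized histograms p, q."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(5)
bins = np.linspace(0, 255, 65)
inside = rng.normal(150, 20, 5000)   # synthetic intensities inside the curve
model = rng.normal(145, 22, 5000)    # synthetic model-region intensities (previous frame)
p, _ = np.histogram(inside, bins=bins)
q, _ = np.histogram(model, bins=bins)
p = p / p.sum()
q = q / q.sum()
print(f"Bhattacharyya coefficient = {bhattacharyya_coefficient(p, q):.3f}")
```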
Polarization of light and hopf fibration
NASA Astrophysics Data System (ADS)
Jurčo, B.
1987-09-01
A set of polarization states of quasi-monochromatic light is described geometrically in terms of the Hopf fibration. Several associated alternative polarization parametrizations are given explicitly, including the Stokes parameters.
Hu, Leland S; Ning, Shuluo; Eschbacher, Jennifer M; Gaw, Nathan; Dueck, Amylou C; Smith, Kris A; Nakaji, Peter; Plasencia, Jonathan; Ranjbar, Sara; Price, Stephen J; Tran, Nhan; Loftus, Joseph; Jenkins, Robert; O'Neill, Brian P; Elmquist, William; Baxter, Leslie C; Gao, Fei; Frakes, David; Karis, John P; Zwart, Christine; Swanson, Kristin R; Sarkaria, Jann; Wu, Teresa; Mitchell, J Ross; Li, Jing
2015-01-01
Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM. We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region of interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set. We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients). Multi-parametric MRI and texture analysis can help characterize and visualize GBM's spatial histologic heterogeneity to identify regional tumor-rich biopsy targets.
UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.
2012-01-01
UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.
Nonequilibrium Langevin approach to quantum optics in semiconductor microcavities
NASA Astrophysics Data System (ADS)
Portolan, S.; di Stefano, O.; Savasta, S.; Rossi, F.; Girlanda, R.
2008-01-01
Recently, the possibility of generating nonclassical polariton states by means of parametric scattering has been demonstrated. Excitonic polaritons propagate in a complex interacting environment and contain real electronic excitations subject to scattering events and noise affecting quantum coherence and entanglement. Here, we present a general theoretical framework for the realistic investigation of polariton quantum correlations in the presence of coherent and incoherent interaction processes. The proposed theoretical approach is based on the nonequilibrium quantum Langevin approach for open systems applied to interacting-electron complexes described within the dynamics controlled truncation scheme. It provides an easy recipe to calculate multitime correlation functions which are key quantities in quantum optics. As a first application, we analyze the buildup of polariton parametric emission in semiconductor microcavities including the influence of noise originating from phonon-induced scattering.
NASA Astrophysics Data System (ADS)
Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan
2017-10-01
This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than 'layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
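A minimal sketch of the parametrization idea: a shear-wave velocity profile VS(z) over a normalized depth t in [0, 1] is represented as a Bernstein polynomial whose coefficients are the model parameters. The coefficient values and depth range below are illustrative only, not results from the inversion.

```python
# Sketch: evaluating a velocity profile represented by a Bernstein polynomial.
import numpy as np
from scipy.special import comb

def bernstein_profile(coeffs, t):
    """Evaluate sum_k c_k * C(n,k) * t^k * (1-t)^(n-k) at points t in [0, 1]."""
    t = np.asarray(t, float)
    n = len(coeffs) - 1
    basis = np.array([comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)])
    return np.dot(coeffs, basis)

coeffs = np.array([150.0, 180.0, 260.0, 400.0, 520.0])  # m/s, example coefficients
depth = np.linspace(0.0, 60.0, 7)                       # metres
t = depth / depth.max()
print(np.round(bernstein_profile(coeffs, t), 1))        # smooth VS(z) profile
```

Because each basis function is non-negative and bounded, a small change in any coefficient perturbs the profile only smoothly, which is the stability property noted in the abstract.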
Methods for Probabilistic Fault Diagnosis: An Electrical Power System Case Study
NASA Technical Reports Server (NTRS)
Ricks, Brian W.; Mengshoel, Ole J.
2009-01-01
Health management systems that more accurately and quickly diagnose faults that may occur in different technical systems on-board a vehicle will play a key role in the success of future NASA missions. We discuss in this paper the diagnosis of abrupt continuous (or parametric) faults within the context of probabilistic graphical models, more specifically Bayesian networks that are compiled to arithmetic circuits. This paper extends our previous research, within the same probabilistic setting, on diagnosis of abrupt discrete faults. Our approach and diagnostic algorithm ProDiagnose are domain-independent; however we use an electrical power system testbed called ADAPT as a case study. In one set of ADAPT experiments, performed as part of the 2009 Diagnostic Challenge, our system turned out to have the best performance among all competitors. In a second set of experiments, we show how we have recently further significantly improved the performance of the probabilistic model of ADAPT. While these experiments are obtained for an electrical power system testbed, we believe they can easily be transitioned to real-world systems, thus promising to increase the success of future NASA missions.
Towards an Empirically Based Parametric Explosion Spectral Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, S R; Walter, W R; Ruppert, S
2009-08-31
Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
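The abstract does not give the report's actual functional form, so the following is only an illustrative stand-in: a generic single-corner-frequency source spectrum, S(f) = M0 / (1 + (f/fc)^2), of the kind often used in parametric source modeling. The moment and corner-frequency values are placeholders.

```python
# Illustrative generic source spectrum (flat below fc, falling as f^-2 above).
import numpy as np

def source_spectrum(f, moment=1e14, corner_freq=2.0):
    """Generic omega-squared-type spectrum; not the report's specific model."""
    return moment / (1.0 + (np.asarray(f, float) / corner_freq) ** 2)

freqs = np.logspace(-1, 1.5, 6)   # 0.1 Hz to ~30 Hz
print(np.column_stack([freqs, source_spectrum(freqs)]))
```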
NASA Astrophysics Data System (ADS)
Anurose, T. J.; Subrahamanyam, D. Bala
2013-06-01
We discuss the impact of the differential treatment of the roughness lengths for momentum and heat (z_{0m} and z_{0h}) in the flux parametrization scheme of the high-resolution regional model (HRM) for a heterogeneous terrain centred around Thiruvananthapuram, India (8.5°N, 76.9°E). The magnitudes of sensible heat flux (H) obtained from HRM simulations using the original parametrization scheme differed drastically from the concurrent in situ observations. With a view to improving the performance of this parametrization scheme, two distinct modifications are incorporated: (1) In the first method, a constant value of 100 is assigned to the z_{0m}/z_{0h} ratio; (2) and in the second approach, this ratio is treated as a function of time. Both these modifications in the HRM model showed significant improvements in the H simulations for Thiruvananthapuram and its adjoining regions. Results obtained from the present study provide a first-ever comparison of H simulations using the modified parametrization scheme in the HRM model with in situ observations for the Indian coastal region, and suggest a differential treatment of z_{0m} and z_{0h} in the flux parametrization scheme.
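To illustrate why the z_{0m}/z_{0h} ratio matters, the following is a minimal sketch of a neutral-stability bulk formula for sensible heat flux in which the two roughness lengths enter separately. The stability corrections used in the actual HRM parametrization scheme are omitted, and the numerical values are assumptions.

```python
# Sketch: neutral bulk formula H = rho*cp*kappa^2*U*(theta_s - theta_a)
#         / (ln(z/z0m) * ln(z/z0h)), showing sensitivity to z0m/z0h.
import numpy as np

def sensible_heat_flux(u, theta_sfc, theta_air, z, z0m, z0h,
                       rho=1.2, cp=1004.0, kappa=0.4):
    """Neutral-stability bulk sensible heat flux in W m^-2."""
    return (rho * cp * kappa**2 * u * (theta_sfc - theta_air)
            / (np.log(z / z0m) * np.log(z / z0h)))

z0m = 0.1   # m, momentum roughness length (assumed)
for ratio in (1.0, 10.0, 100.0):
    H = sensible_heat_flux(u=4.0, theta_sfc=305.0, theta_air=301.0,
                           z=10.0, z0m=z0m, z0h=z0m / ratio)
    print(f"z0m/z0h = {ratio:5.0f}  ->  H = {H:6.1f} W m^-2")
```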
Tunable electromagnetically induced transparency and absorption with dressed superconducting qubits
NASA Astrophysics Data System (ADS)
Ian, Hou; Liu, Yu-Xi; Nori, Franco
2010-06-01
Electromagnetically induced transparency and absorption (EIT and EIA) are usually demonstrated using three-level atomic systems. In contrast to the usual case, we theoretically study the EIT and EIA in an equivalent three-level system: a superconducting two-level system (qubit) dressed by a single-mode cavity field. In this equivalent system, we find that both the EIT and the EIA can be tuned by controlling the level-spacing of the superconducting qubit and hence controlling the dressed system. This tunability is due to the dressed relaxation and dephasing rates which vary parametrically with the level-spacing of the original qubit and thus affect the transition properties of the dressed qubit and the susceptibility. These dressed relaxation and dephasing rates characterize the reaction of the dressed qubit to an incident probe field. Using recent experimental data on superconducting qubits (charge, phase, and flux qubits) to demonstrate our approach, we show the possibility of experimentally realizing this proposal.
NASA Astrophysics Data System (ADS)
Vattré, A.
2017-08-01
A parametric energy-based framework is developed to describe the elastic strain relaxation of interface dislocations. By means of the Stroh sextic formalism with a Fourier series technique, the proposed approach couples the classical anisotropic elasticity theory with surface/interface stress and elasticity properties in heterogeneous interface-dominated materials. For any semicoherent interface of interest, the strain energy landscape is computed using the persistent elastic fields produced by infinitely periodic hexagonal-shaped dislocation configurations with planar three-fold nodes. A finite element based procedure combined with the conjugate gradient and nudged elastic band methods is applied to determine the minimum-energy paths for which the pre-computed energy landscapes yield to elastically favorable dislocation reactions. Several applications on the Au/Cu heterosystems are given. The simple and limiting case of a single set of infinitely periodic dislocations is introduced to determine exact closed-form expressions for stresses. The second limiting case of the pure (010) Au/Cu heterophase interfaces containing two crossing sets of straight dislocations investigates the effects due to the non-classical boundary conditions on the stress distributions, including separate and appropriate constitutive relations at semicoherent interfaces and free surfaces. Using the quantized Frank-Bilby equation, it is shown that the elastic strain landscape exhibits intrinsic dislocation configurations for which the junction formation is energetically unfavorable. On the other hand, the mismatched (111) Au/Cu system gives rise to the existence of a minimum-energy path where the fully strain-relaxed equilibrium and non-regular intrinsic hexagonal-shaped dislocation rearrangement is accompanied by a significant removal of the short-range elastic energy.
NASA Astrophysics Data System (ADS)
Shoeibi, Samira; Taghavi-Shahri, F.; Khanpour, Hamzeh; Javidan, Kurosh
2018-04-01
In recent years, several experiments at the e-p collider HERA have collected high-precision deep-inelastic scattering (DIS) data on the spectrum of leading nucleons carrying a large fraction of the proton's energy. In this paper, we have analyzed recent experimental data on the production of forward protons and neutrons in DIS at HERA in the framework of perturbative QCD. We propose a technique based on the fracture functions framework and extract the nucleon fracture functions (FFs) M_2^{(n/p)}(x, Q^2; x_L) from a global QCD analysis of DIS data measured by the ZEUS Collaboration at HERA. We have shown that an approach based on the fracture functions formalism allows us to phenomenologically parametrize the nucleon FFs. Considering both leading-neutron and leading-proton production data at HERA, we present the results for the separate parton distributions for all parton species, including the valence quark densities, the antiquark densities, the strange sea distribution, and the gluon distribution function. We propose several parametrizations for the nucleon FFs, which open the possibility of studying the associated asymmetries. The obtained optimum set of nucleon FFs is accompanied by Hessian uncertainty sets which allow one to propagate uncertainties to other observables of interest. The extracted results for the t-integrated leading-neutron F_2^{LN(3)}(x, Q^2; x_L) and leading-proton F_2^{LP(3)}(x, Q^2; x_L) structure functions are in good agreement with all data analyzed, for a wide range of the fractional momentum variable x as well as the longitudinal momentum fraction x_L.
Multilevel Latent Class Analysis: Parametric and Nonparametric Models
ERIC Educational Resources Information Center
Finch, W. Holmes; French, Brian F.
2014-01-01
Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…
Ku band low noise parametric amplifier
NASA Technical Reports Server (NTRS)
1976-01-01
A low-noise, Ku-band parametric amplifier (paramp) was developed. The unit is a spacecraft-qualifiable prototype parametric amplifier for eventual application in the shuttle orbiter. The amplifier was required to have a noise temperature of less than 150 K. A noise temperature of less than 120 K at a gain level of 17 dB was achieved. A 3-dB bandwidth in excess of 350 MHz was attained, while deviation from phase linearity of about ±1 degree over 50 MHz was achieved. The paramp operates within specification over an ambient temperature range of -5 °C to +50 °C. The performance requirements and the operation of the Ku-band parametric amplifier system are described. The final test results are also given.
Likert scales, levels of measurement and the "laws" of statistics.
Norman, Geoff
2010-12-01
Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, frequently the use of various parametric methods such as analysis of variance, regression, and correlation is faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments and show that many studies, dating back to the 1930s, consistently demonstrate that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be utilized without concern for "getting the wrong answer".
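A small simulation in the spirit of this argument: apply a parametric t-test to ordinal five-point Likert responses drawn from identical distributions and check that the empirical type-I error rate stays close to the nominal level. The response distribution and sample size are arbitrary choices for illustration, not values from the paper.

```python
# Sketch: type-I error of a t-test on ordinal Likert data under the null.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(6)
probs = [0.10, 0.20, 0.40, 0.20, 0.10]   # probabilities of responses 1..5
n, reps, alpha = 30, 5000, 0.05

false_positives = 0
for _ in range(reps):
    a = rng.choice([1, 2, 3, 4, 5], size=n, p=probs)
    b = rng.choice([1, 2, 3, 4, 5], size=n, p=probs)
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(f"empirical type-I error: {false_positives / reps:.3f} (nominal {alpha})")
```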
Kernel approach to molecular similarity based on iterative graph similarity.
Rupp, Matthias; Proschak, Ewgenij; Schneider, Gisbert
2007-01-01
Similarity measures for molecules are of basic importance in chemical, biological, and pharmaceutical applications. We introduce a molecular similarity measure defined directly on the annotated molecular graph, based on iterative graph similarity and optimal assignments. We give an iterative algorithm for the computation of the proposed molecular similarity measure, prove its convergence and the uniqueness of the solution, and provide an upper bound on the required number of iterations necessary to achieve a desired precision. Empirical evidence for the positive semidefiniteness of certain parametrizations of our function is presented. We evaluated our molecular similarity measure by using it as a kernel in support vector machine classification and regression applied to several pharmaceutical and toxicological data sets, with encouraging results.
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
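For context, a minimal sketch of the classical eigenvalue-based MDL criterion of Wax and Kailath (1985) for estimating the number of signals, i.e. the baseline that the proposed parametric method improves upon by substituting likelihood-derived quantities for the eigenvalues (that substitution is not implemented here). The synthetic array data are illustrative assumptions.

```python
# Sketch: Wax-Kailath MDL model-order selection from covariance eigenvalues.
import numpy as np

def mdl_num_signals(eigvals, n_snapshots):
    """Return argmin_k MDL(k) for eigenvalues of the sample covariance matrix."""
    lam = np.sort(np.asarray(eigvals, float))[::-1]
    p = len(lam)
    scores = []
    for k in range(p):
        tail = lam[k:]
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)   # geometric/arithmetic mean
        scores.append(-n_snapshots * (p - k) * np.log(ratio)
                      + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(scores))

rng = np.random.default_rng(7)
p, N, true_k = 8, 200, 2
A = rng.normal(size=(p, true_k))                 # synthetic steering vectors
s = rng.normal(size=(true_k, N))                 # two uncorrelated sources
x = A @ s + 0.3 * rng.normal(size=(p, N))        # array snapshots with noise
R = x @ x.T / N
print("estimated number of signals:", mdl_num_signals(np.linalg.eigvalsh(R), N))
```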
NASA Astrophysics Data System (ADS)
Noe, Frank
To efficiently simulate and generate understanding from simulations of complex macromolecular systems, the concept of slow collective coordinates or reaction coordinates is of fundamental importance. Here we will introduce variational approaches to approximate the slow coordinates and the reaction coordinates between selected end-states, given MD simulations of the macromolecular system and a (possibly large) basis set of candidate coordinates. We will then discuss how to select physically intuitive order parameters that are good surrogates of this variationally optimal result. These results can be used to construct Markov state models or other models of the stationary and kinetic properties, and to parametrize low-dimensional / coarse-grained models of the dynamics. Deutsche Forschungsgemeinschaft, European Research Council.
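A minimal sketch of the variational idea: given mean-free time series of candidate basis functions (order parameters), slow collective coordinates are estimated by solving the generalized eigenvalue problem C(tau) v = lambda C(0) v, as in the time-lagged independent component / variational approach of conformation dynamics. The two-well toy trajectory, basis functions, and lag time below are assumptions for illustration.

```python
# Sketch: estimating the slowest collective coordinate from a basis of candidates.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(8)
n, lag = 20000, 50
x = np.zeros(n)
for t in range(1, n):   # overdamped diffusion in a double-well potential
    x[t] = (x[t - 1] - 0.01 * 4 * x[t - 1] * (x[t - 1] ** 2 - 1)
            + 0.15 * rng.standard_normal())

features = np.column_stack([x, x**2, x**3])   # candidate basis functions
features -= features.mean(axis=0)

c0 = features[:-lag].T @ features[:-lag] / (n - lag)   # instantaneous covariance C(0)
ct = features[:-lag].T @ features[lag:] / (n - lag)    # time-lagged covariance C(tau)
ct = 0.5 * (ct + ct.T)                                 # symmetrize (reversibility)

evals, evecs = eigh(ct, c0)
order = np.argsort(evals)[::-1]
print("leading eigenvalue (slowest mode):", round(evals[order[0]], 3))
print("coefficients of slowest coordinate:", np.round(evecs[:, order[0]], 3))
```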
TARGETED PROSTATE BIOPSY: LESSONS LEARNED MIDST THE EVOLUTION OF A DISRUPTIVE TECHNOLOGY
Nassiri, Nima; Natarajan, Shyam; Margolis, Daniel J.; Marks, Leonard S.
2015-01-01
Lessons learned during a 6-year experience with more than 1200 patients undergoing targeted prostate biopsy via MRI/US fusion are reported: (1) The procedure is safe and efficient, requiring some 15–20 minutes in an office setting; (2) MRI is best performed by a radiologist with specialized training, employing a trans-abdominal multi-parametric approach and preferably a 3T magnet; (3) Grade of MRI suspicion is the most powerful predictor of biopsy results, e.g., Grade 5 usually represents cancer; (4) Some potentially-important cancers (15%–30%) are MRI-invisible; (5) Targeted biopsies provide >80% concordance with whole-organ pathology. Early enthusiasm notwithstanding, cost-effectiveness is yet to be resolved, and the technologies remain in evolution. PMID:26166671
Z_n Baxter model: symmetries and the Belavin parametrization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richey, M.P.; Tracy, C.A.
1986-02-01
The Z_n Baxter model is an exactly solvable lattice model in the special case of the Belavin parametrization. For this parametrization the authors calculate the partition function in an antiferromagnetic region and the order parameter in a ferromagnetic region. They find that the order parameter is expressible in terms of a modular function of level n which for n=2 is the Onsager-Yang-Baxter result. In addition they determine the symmetry group of the finite lattice partition function for the general Z_n Baxter model.
Ultrasonically Absorptive Coatings for Hypersonic
2008-05-13
UAC and TPS functions. To aid in the design of UAC with regular microstructure to be tested in the CUBRC LENS I tunnel, parametric studies of the UAC-LFC ... approaching the large-scale demonstration stage in the CUBRC LENS tunnel as well as fabrication of ceramic UAC samples integrated into TPS.
Nonlinear optical interactions in silicon waveguides
NASA Astrophysics Data System (ADS)
Kuyken, B.; Leo, F.; Clemmen, S.; Dave, U.; Van Laer, R.; Ideguchi, T.; Zhao, H.; Liu, X.; Safioui, J.; Coen, S.; Gorza, S. P.; Selvaraja, S. K.; Massar, S.; Osgood, R. M.; Verheyen, P.; Van Campenhout, J.; Baets, R.; Green, W. M. J.; Roelkens, G.
2017-03-01
The strong nonlinear response of silicon photonic nanowire waveguides allows for the integration of nonlinear optical functions on a chip. However, the detrimental nonlinear optical absorption in silicon at telecom wavelengths limits the efficiency of many such experiments. In this review, several approaches are proposed and demonstrated to overcome this fundamental issue. By using the proposed methods, we demonstrate amongst others supercontinuum generation, frequency comb generation, a parametric optical amplifier, and a parametric optical oscillator.
Parametric Modeling in the CAE Process: Creating a Family of Models
NASA Technical Reports Server (NTRS)
Brown, Christopher J.
2011-01-01
This presentation is meant as an example: it gives ideas of approaches to use and highlights the significant benefit of parametric, geometry-based modeling and the importance of planning before you build. It showcases some NX capabilities: Mesh Controls, Associativity, Divide Face, and Offset Surface. As a reminder, this only had to be done once and can be reused for any cabinet in that "family"; it saves a lot of time if pre-planned and allows re-use in the future.
Hydrogen peroxide clusters: the role of open book motif in cage and helical structures.
Elango, M; Parthasarathi, R; Subramanian, V; Ramachandran, C N; Sathyamurthy, N
2006-05-18
Hartree-Fock (HF) calculations using 6-31G*, 6-311++G(d,p), aug-cc-pVDZ, and aug-cc-pVTZ basis sets show that hydrogen peroxide molecular clusters tend to form hydrogen-bonded cyclic and cage structures along the lines expected of a molecule which can act as a proton donor as well as an acceptor. These results are reiterated by density functional theoretic (DFT) calculations with B3LYP parametrization and also by second-order Møller-Plesset perturbation (MP2) theory using 6-31G* and 6-311++G(d,p) basis sets. Trends in stabilization energies and geometrical parameters obtained at the HF level using 6-311++G(d,p), aug-cc-pVDZ, and aug-cc-pVTZ basis sets are similar to those obtained from HF/6-31G* calculation. In addition, the HF calculations suggest the formation of stable helical structures for larger clusters, provided the neighbors form an open book structure.