Sample records for parametric level set

  1. Identifying Attributes of CO2 Leakage Zones in Shallow Aquifers Using a Parametric Level Set Method

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.; Islam, A.; Wheeler, M.

    2016-12-01

    Leakage through abandoned wells and geologic faults poses the greatest risk to CO2 storage permanence. For shallow aquifers, secondary CO2 plumes emanating from the leak zones may go undetected for a sustained period of time and have the greatest potential to cause large-scale and long-term environmental impacts. Identification of the attributes of leak zones, including their shape, location, and strength, is required for proper environmental risk assessment. This study applies a parametric level set (PaLS) method to characterize the leakage zone. Level set methods are appealing for tracking topological changes and recovering unknown shapes of objects. However, level set evolution using the conventional level set methods is challenging. In PaLS, the level set function is approximated using a weighted sum of basis functions, and the level set evolution problem is replaced by an optimization problem. The efficacy of PaLS is demonstrated by recovering the source zone created by CO2 leakage into a carbonate aquifer. Our results show that PaLS is a robust source identification method that can recover the approximate source locations in the presence of measurement errors, model parameter uncertainty, and inaccurate initial guesses of source flux strengths. The PaLS inversion framework introduced in this work is generic and can be adapted for any reactive transport model by switching the pre- and post-processing routines.
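
    The representation described above replaces level set evolution with ordinary parameter estimation. A minimal sketch of the idea (the Gaussian radial basis and all names here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def pals_phi(points, centers, weights, beta=4.0):
    """Parametric level set value: a weighted sum of Gaussian radial
    basis functions evaluated at each point.
    points: (N, 2), centers: (K, 2), weights: (K,)."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-beta * d2) @ weights

def leak_zone_mask(points, centers, weights, c=0.5):
    """The leak zone is the c-superlevel set of the level set function;
    inversion would adjust `weights` (and optionally `centers`) to
    minimize a data misfit instead of evolving the front directly."""
    return pals_phi(points, centers, weights) >= c
```

    With this parametrization, shape recovery reduces to minimizing a misfit between simulated and observed concentrations with respect to the expansion weights, which is the optimization problem the abstract refers to.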

  2. Rapid calculation of accurate atomic charges for proteins via the electronegativity equalization method.

    PubMed

    Ionescu, Crina-Maria; Geidl, Stanislav; Svobodová Vařeková, Radka; Koča, Jaroslav

    2013-10-28

    We focused on the parametrization and evaluation of empirical models for fast and accurate calculation of conformationally dependent atomic charges in proteins. The models were based on the electronegativity equalization method (EEM), and the parametrization procedure was tailored to proteins. We used large protein fragments as reference structures and fitted the EEM model parameters using atomic charges computed by three population analyses (Mulliken, Natural, iterative Hirshfeld), at the Hartree-Fock level with two basis sets (6-31G*, 6-31G**) and in two environments (gas phase, implicit solvation). We parametrized and successfully validated 24 EEM models. When tested on insulin and ubiquitin, all models reproduced quantum mechanics level charges well and were consistent with respect to population analysis and basis set. Specifically, the models showed on average a correlation of 0.961, RMSD 0.097 e, and average absolute error per atom 0.072 e. The EEM models can be used with the freely available EEM implementation EEM_SOLVER.
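
    The EEM scheme above reduces charge assignment to a single linear solve: all atomic electronegativities are equalized subject to total-charge conservation. A minimal sketch (the chi/eta values and the kappa screening constant below are placeholders, not the fitted parameters from the paper):

```python
import numpy as np

def eem_charges(chi, eta, coords, total_charge=0.0, kappa=0.529):
    """Solve the EEM linear system for atomic charges.
    chi, eta : per-atom electronegativity / hardness parameters
    coords   : (N, 3) atomic positions in Angstrom
    The unknowns are the N charges plus the equalized
    electronegativity (a Lagrange multiplier)."""
    n = len(chi)
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        A[i, i] = 2.0 * eta[i]
        for j in range(n):
            if i != j:
                A[i, j] = kappa / r[i, j]
        A[i, n] = -1.0          # column for the equalized electronegativity
        b[i] = -chi[i]
    A[n, :n] = 1.0              # charge conservation: sum(q) = total_charge
    b[n] = total_charge
    return np.linalg.solve(A, b)[:n]
```

    Because the cost is one dense linear solve, conformationally dependent charges for a whole protein are orders of magnitude cheaper than the quantum-mechanical reference calculations used to fit the parameters.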

  3. DFTB Parameters for the Periodic Table, Part 2: Energies and Energy Gradients from Hydrogen to Calcium.

    PubMed

    Oliveira, Augusto F; Philipsen, Pier; Heine, Thomas

    2015-11-10

    In the first part of this series, we presented a parametrization strategy to obtain high-quality electronic band structures on the basis of density-functional-based tight-binding (DFTB) calculations and published a parameter set called QUASINANO2013.1. Here, we extend our parametrization effort to include the remaining terms that are needed to compute the total energy and its gradient, commonly referred to as repulsive potential. Instead of parametrizing these terms as a two-body potential, we calculate them explicitly from the DFTB analogues of the Kohn-Sham total energy expression. This strategy requires only two further numerical parameters per element. Thus, the atomic configuration and four real numbers per element are sufficient to define the DFTB model at this level of parametrization. The QUASINANO2015 parameter set allows the calculation of energy, structure, and electronic structure of all systems composed of elements ranging from H to Ca. Extensive benchmarks show that the overall accuracy of QUASINANO2015 is comparable to that of well-established methods, including PM7 and hand-tuned DFTB parameter sets, while coverage of a much larger range of chemical systems is available.

  4. Parameterization of DFTB3/3OB for Sulfur and Phosphorus for Chemical and Biological Applications

    PubMed Central

    2015-01-01

    We report the parametrization of the approximate density functional tight binding method, DFTB3, for sulfur and phosphorus. The parametrization is done in a framework consistent with our previous 3OB set established for O, N, C, and H; thus the resulting parameters can be used to describe a broad set of organic and biologically relevant molecules. The 3d orbitals are included in the parametrization, and the electronic parameters are chosen to minimize errors in the atomization energies. The parameters are tested using a fairly diverse set of molecules of biological relevance, focusing on the geometries, reaction energies, proton affinities, and hydrogen bonding interactions of these molecules; vibrational frequencies are also examined, although less systematically. The results of DFTB3/3OB are compared to those from DFT (B3LYP and PBE), ab initio (MP2, G3B3), and several popular semiempirical methods (PM6 and PDDG), as well as predictions of DFTB3 with the older parametrization (the MIO set). In general, DFTB3/3OB is a major improvement over the previous parametrization (DFTB3/MIO), and for the majority of cases tested here, it also outperforms PM6 and PDDG, especially for structural properties, vibrational frequencies, hydrogen bonding interactions, and proton affinities. For reaction energies, DFTB3/3OB exhibits major improvement over DFTB3/MIO, due mainly to a significant reduction of errors in atomization energies; compared to PM6 and PDDG, DFTB3/3OB also generally performs better, although the magnitude of improvement is more modest. Compared to high-level calculations, DFTB3/3OB is most successful at predicting geometries; larger errors are found in the energies, although the results can be greatly improved by computing single point energies at a high level with DFTB3 geometries.
There are several remaining issues with the DFTB3/3OB approach, most notably its difficulty in describing phosphate hydrolysis reactions involving a change in the coordination number of the phosphorus, for which a specific parametrization (3OB/OPhyd) is developed as a temporary solution; this suggests that the current DFTB3 methodology has limited transferability for complex phosphorus chemistry at the level of accuracy required for detailed mechanistic investigations. Therefore, fundamental improvements in the DFTB3 methodology are needed for a reliable method that describes phosphorus chemistry without ad hoc parameters. Nevertheless, DFTB3/3OB is expected to be a competitive QM method in QM/MM calculations for studying phosphorus/sulfur chemistry in condensed phase systems, especially as a low-level method that drives the sampling in a dual-level QM/MM framework. PMID:24803865

  5. A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*

    PubMed Central

    Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.

    2013-01-01

    This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186

  6. Free-form geometric modeling by integrating parametric and implicit PDEs.

    PubMed

    Du, Haixia; Qin, Hong

    2007-01-01

    Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.
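
    The second-order elliptic case described above is essentially a Laplace-type boundary-value problem: interior values (geometry or intensity) are filled in smoothly from boundary conditions, so editing the boundary re-sculpts the interior. A minimal finite-difference sketch on a rectangular parametric domain (Jacobi iteration; purely illustrative, not the paper's solver):

```python
import numpy as np

def solve_laplace(boundary, n_iter=2000):
    """Fill the interior of a 2D grid by iterating the 5-point Laplace
    stencil; the outermost rows/columns of `boundary` are held fixed.
    This mimics a second-order PDE surface whose interior shape is
    controlled entirely by its boundary conditions."""
    u = boundary.astype(float).copy()
    u[1:-1, 1:-1] = 0.0  # discard interior values; solve from the boundary
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u
```

    Fourth-order operators (biharmonic) extend this by allowing derivative boundary conditions as well, which is what gives the designer tangent-plane control at the boundary.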

  7. Parametric tests of a traction drive retrofitted to an automotive gas turbine

    NASA Technical Reports Server (NTRS)

    Rohn, D. A.; Lowenthal, S. H.; Anderson, N. E.

    1980-01-01

    The results of a test program to retrofit a high-performance fixed-ratio Nasvytis Multiroller Traction Drive, in place of a helical gear set, to a gas turbine engine are presented. Parametric tests up to a maximum engine power turbine speed of 45,500 rpm and to a power level of 11 kW were conducted. Comparisons were made to similar drives that were parametrically tested on a back-to-back test stand. The drive showed good compatibility with the gas turbine engine. Specific fuel consumption of the engine with the traction drive speed reducer installed was comparable to that of the original helical-gear-set-equipped engine.

  8. Experimental Characterization of Gas Turbine Emissions at Simulated Flight Altitude Conditions

    NASA Technical Reports Server (NTRS)

    Howard, R. P.; Wormhoudt, J. C.; Whitefield, P. D.

    1996-01-01

    NASA's Atmospheric Effects of Aviation Project (AEAP) is developing a scientific basis for assessment of the atmospheric impact of subsonic and supersonic aviation. A primary goal is to assist assessments of United Nations scientific organizations and, hence, consideration of emissions standards by the International Civil Aviation Organization (ICAO). Engine tests have been conducted at AEDC to fulfill the needs of the AEAP. The purpose of these tests is to obtain a comprehensive database to be used for supplying critical information to the atmospheric research community. It includes: (1) simulated sea-level-static test data as well as simulated altitude data; and (2) intrusive (extractive probe) data as well as non-intrusive (optical techniques) data. A commercial-type bypass engine with aviation fuel was used in this test series. The test matrix was set by parametrically selecting the temperature, pressure, and flow rate at sea-level-static and different altitude conditions to obtain a parametric set of data.

  9. Fixed-Order Mixed Norm Designs for Building Vibration Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.

    2000-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  10. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.

  11. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is carried out by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
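
    The flavor of the EM/ECM machinery can be shown with a deliberately simplified, fully parametric stand-in: a two-component exponential mixture of failure times fitted by plain EM. (The paper's model adds covariates through logistic and proportional-hazards links and leaves the baseline hazards unspecified; none of that is reproduced in this sketch.)

```python
import numpy as np

def em_exp_mixture(t, n_iter=200):
    """EM for a two-component exponential mixture of failure times t.
    Returns (pi, lam1, lam2, log_likelihood_trace)."""
    m = np.mean(t)
    pi, lam1, lam2 = 0.5, 2.0 / m, 0.5 / m   # asymmetric start breaks symmetry
    lls = []
    for _ in range(n_iter):
        f1 = pi * lam1 * np.exp(-lam1 * t)
        f2 = (1.0 - pi) * lam2 * np.exp(-lam2 * t)
        lls.append(np.log(f1 + f2).sum())
        g = f1 / (f1 + f2)                    # E-step: responsibilities
        pi = g.mean()                         # M-steps (conditional updates)
        lam1 = g.sum() / (g * t).sum()
        lam2 = (1.0 - g).sum() / ((1.0 - g) * t).sum()
    return pi, lam1, lam2, lls
```

    Each iteration provably cannot decrease the log-likelihood, which is the property the ECM algorithm in the paper inherits while splitting the M-step into simpler conditional maximizations.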

  12. Synthesis and Control of Flexible Systems with Component-Level Uncertainties

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Lim, Kyong B.

    2009-01-01

    An efficient and computationally robust method for synthesis of component dynamics is developed. The method defines the interface forces/moments as feasible vectors in transformed coordinates to ensure that connectivity requirements of the combined structure are met. The synthesized system is then defined in a transformed set of feasible coordinates. The simplicity of form is exploited to effectively deal with modeling parametric and non-parametric uncertainties at the substructure level. Uncertainty models of reasonable size and complexity are synthesized for the combined structure from those in the substructure models. In particular, we address frequency and damping uncertainties at the component level. The approach first considers the robustness of synthesized flexible systems. It is then extended to deal with non-synthesized dynamic models with component-level uncertainties by projecting uncertainties to the system level. A numerical example is given to demonstrate the feasibility of the proposed approach.

  13. Extended parametric representation of compressor fans and turbines. Volume 2: Part user's manual (parametric turbine)

    NASA Technical Reports Server (NTRS)

    Coverse, G. L.

    1984-01-01

    A turbine modeling technique has been developed which will enable the user to obtain consistent and rapid off-design performance from design point input. This technique is applicable to both axial and radial flow turbines with flow sizes ranging from about one pound per second to several hundred pounds per second. The axial flow turbines may or may not include variable geometry in the first stage nozzle. A user-specified option will also permit the calculation of design point cooling flow levels and corresponding changes in efficiency for the axial flow turbines. The modeling technique has been incorporated into a time-sharing program in order to facilitate its use. Because this report contains a description of the input and output data, values of typical inputs, and example cases, it is suitable as a user's manual. This report is the second of a three-volume set. The titles of the three volumes are as follows: (1) Volume 1, CMGEN User's Manual (Parametric Compressor Generator); (2) Volume 2, PART User's Manual (Parametric Turbine); (3) Volume 3, MODFAN User's Manual (Parametric Modulating Flow Fan).

  14. A note on the correlation between circular and linear variables with an application to wind direction and air temperature data in a Mediterranean climate

    NASA Astrophysics Data System (ADS)

    Lototzis, M.; Papadopoulos, G. K.; Droulia, F.; Tseliou, A.; Tsiros, I. X.

    2018-04-01

    There are several cases where a circular variable is associated with a linear one. A typical example is wind direction, which is often associated with linear quantities such as air temperature and air humidity. A statistical relationship of this kind can be tested by the use of parametric and non-parametric methods, each of which has its own advantages and drawbacks. This work deals with correlation analysis using both the parametric and the non-parametric procedure on a small set of meteorological data of air temperature and wind direction during a summer period in a Mediterranean climate. Correlations were examined between hourly, daily and maximum-prevailing values, under typical and non-typical meteorological conditions. Both tests indicated a strong correlation between mean hourly wind directions and mean hourly air temperatures, whereas mean daily wind direction and mean daily air temperature do not seem to be correlated. In some cases, however, the two procedures were found to give quite dissimilar significance levels for the rejection of the null hypothesis of no correlation. The simple statistical analysis presented in this study, appropriately extended to large sets of meteorological data, may be a useful tool for estimating effects of wind in local climate studies.
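
    The standard parametric measure for a circular-linear pair embeds the angle as (cos θ, sin θ) and combines three ordinary Pearson correlations (Mardia's circular-linear correlation coefficient). A sketch with hypothetical variable names:

```python
import numpy as np

def circular_linear_corr(theta, x):
    """Mardia's circular-linear correlation coefficient.
    theta : angles in radians (e.g. wind direction)
    x     : linear variable (e.g. air temperature)"""
    c, s = np.cos(theta), np.sin(theta)
    rxc = np.corrcoef(x, c)[0, 1]
    rxs = np.corrcoef(x, s)[0, 1]
    rcs = np.corrcoef(c, s)[0, 1]
    r2 = (rxc**2 + rxs**2 - 2.0 * rxc * rxs * rcs) / (1.0 - rcs**2)
    return np.sqrt(r2)
```

    Under independence, n·R² is asymptotically chi-squared with 2 degrees of freedom, which gives the parametric significance test; rank-based versions of the same construction give the non-parametric alternative.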

  15. A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.

    1998-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  16. An improvement of quantum parametric methods by using SGSA parameterization technique and new elementary parametric functionals

    NASA Astrophysics Data System (ADS)

    Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.

    A systematic improvement of parametric quantum methods (PQM) is performed by considering: (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out by using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and comparison with MOPAC/2007 (PM6) and MINDO/SR was performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC/2007 for a selected trial set of molecules.

  17. General analysis of group velocity effects in collinear optical parametric amplifiers and generators.

    PubMed

    Arisholm, Gunnar

    2007-05-14

    Group velocity mismatch (GVM) is a major concern in the design of optical parametric amplifiers (OPAs) and generators (OPGs) for pulses shorter than a few picoseconds. By simplifying the coupled propagation equations and exploiting their scaling properties, the number of free parameters for a collinear OPA is reduced to a level where the parameter space can be studied systematically by simulations. The resulting set of figures shows the combinations of material parameters and pulse lengths for which high performance can be achieved, and it can serve as a basis for a design.
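
    The basic quantity behind these trade-offs is the GVM walk-off (pulse-splitting) length: the propagation distance after which two pulses separate by one pulse duration. A one-line estimate using the generic textbook definition (the group velocities in the test below are made-up numbers, not values from the paper):

```python
def walkoff_length(tau, vg_signal, vg_pump):
    """Pulse-splitting length L = tau / |1/vg_signal - 1/vg_pump|
    for pulse duration tau (s) and group velocities (m/s)."""
    gvm = abs(1.0 / vg_signal - 1.0 / vg_pump)
    return tau / gvm
```

    For a 100 fs pulse and a few percent of group-velocity difference, this length is a fraction of a millimeter, which is why crystal length and pulse duration must be scaled together in the simulations.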

  18. When the Single Matters more than the Group (II): Addressing the Problem of High False Positive Rates in Single Case Voxel Based Morphometry Using Non-parametric Statistics.

    PubMed

    Scarpazza, Cristina; Nichols, Thomas E; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea

    2016-01-01

    In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that family-wise false positive error rates (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM are much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which do not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of the false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (modulated, unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set, and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results were not dependent on the level of smoothing and modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates.
The critical implication of this finding is that VBM can be used to characterize neuroanatomical alterations in individual subjects as long as non-parametric statistics are employed.
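
    The simplest non-parametric building block for a single-case comparison is an exact rank-based p-value: the patient's value is ranked against the control sample, with no normality or equal-variance assumption. A per-voxel toy sketch (illustrative only; non-parametric VBM as used in the study permutes whole images and controls the family-wise error across all voxels, which this function does not do):

```python
import numpy as np

def single_case_p(patient, controls):
    """Exact one-sided non-parametric p-value for one patient value
    against n control values: under exchangeability, the probability
    of a value at least this extreme is its rank-based proportion."""
    controls = np.asarray(controls, dtype=float)
    n = controls.size
    return (1 + np.sum(controls >= patient)) / (n + 1)
```

    With 100 controls, the smallest achievable p-value is 1/101, which illustrates why reasonably large control groups are needed for voxel-wise single-case inference.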

  19. Comparison of thawing and freezing dark energy parametrizations

    NASA Astrophysics Data System (ADS)

    Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.

    2016-05-01

    Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significant misleading features like crossing of the phantom divide line, incorrect w(z=0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance D_L(z) based on either thawing or freezing quintessence models and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best-fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important to either use both convex and concave parametrizations and select the one with the best χ2, or use principal component analysis, thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n, which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
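
    The proposed family is straightforward to evaluate; n = 1 recovers the standard CPL form, while larger n delays the departure of w(z) from w0. A sketch (the default w0 and wa values below are illustrative, not fitted):

```python
def w_de(z, w0=-1.0, wa=0.3, n=1.0):
    """Generalized CPL equation of state w(z) = w0 + wa * (z/(1+z))**n;
    n = 1 is the standard Chevallier-Polarski-Linder parametrization."""
    return w0 + wa * (z / (1.0 + z)) ** n
```

    Note that w(0) = w0 for any n, and w(z) → w0 + wa at high redshift, so the family shares the CPL endpoints while its convexity changes with n.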

  20. Frequency domain optical parametric amplification

    PubMed Central

    Schmidt, Bruno E.; Thiré, Nicolas; Boivin, Maxime; Laramée, Antoine; Poitras, François; Lebrun, Guy; Ozaki, Tsuneyuki; Ibrahim, Heide; Légaré, François

    2014-01-01

    Today’s ultrafast lasers operate at the physical limits of optical materials to reach extreme performances. Amplification of single-cycle laser pulses with their corresponding octave-spanning spectra still remains a formidable challenge, since the universal dilemma of gain narrowing sets limits for real-level pumped amplifiers as well as for parametric amplifiers. We demonstrate that employing parametric amplification in the frequency domain rather than in the time domain opens up new design opportunities for ultrafast laser science, with the potential to generate single-cycle multi-terawatt pulses. Fundamental restrictions arising from phase mismatch and the damage threshold of nonlinear laser crystals are not only circumvented but also exploited to produce a synergy between increased seed spectrum and increased pump energy. This concept was successfully demonstrated by generating carrier-envelope-phase-stable, 1.43 mJ two-cycle pulses at 1.8 μm wavelength. PMID:24805968

  1. Large-scale subject-specific cerebral arterial tree modeling using automated parametric mesh generation for blood flow simulation.

    PubMed

    Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A

    2017-12-01

    In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of the cerebral arterial tree. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features, and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using a receiver operating characteristic (ROC) curve. Geometric accuracy evaluation showed agreement between the constructed mesh and raw MRA data sets, with an area under the curve of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
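
    Kernel PLS projects the response onto a small number of orthonormal score directions built from the kernel matrix. A minimal single-response version in the NIPALS-with-deflation style (a bare sketch of plain kernel PLS, not the locally-based extension the abstract proposes; the RBF kernel and its width are assumptions):

```python
import numpy as np

def rbf_kernel(x, gamma=10.0):
    """Gaussian kernel matrix for 1D inputs x."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-gamma * d2)

def kernel_pls_fit(K, y, n_components=4):
    """Fitted values T @ T.T @ y, with orthonormal score vectors T
    extracted one at a time from the deflated kernel matrix."""
    Kd, yd = K.astype(float).copy(), y.astype(float).copy()
    scores = []
    for _ in range(n_components):
        t = Kd @ yd
        t /= np.linalg.norm(t)
        scores.append(t)
        P = np.eye(len(y)) - np.outer(t, t)   # project out direction t
        Kd = P @ Kd @ P                       # deflate kernel on both sides
        yd = yd - t * (t @ yd)                # deflate response
    T = np.column_stack(scores)
    return T @ (T.T @ y)
```

    Because the first m score directions are extracted greedily and do not change as more components are added, the training residual can only decrease with the number of components; the smoothing level is therefore controlled by how many components are retained.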

  3. Nonparametric functional data estimation applied to ozone data: prediction and extreme value analysis.

    PubMed

    Quintela-del-Río, Alejandro; Francisco-Fernández, Mario

    2011-02-01

    The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies: it consists of fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that the nonparametric estimators work satisfactorily, outperforming classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and compare the results with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
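
The classical return-level computation the abstract refers to follows directly from the GEV quantile function. A minimal sketch, with illustrative (not fitted) location, scale and shape parameters:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Return level z_T exceeded on average once every T periods,
    for a GEV(mu, sigma, xi) fitted to block maxima:
    z_T = mu - (sigma/xi) * (1 - y^(-xi)),  y = -log(1 - 1/T)."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                 # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

# hypothetical parameters for annual ozone maxima
z10 = gev_return_level(mu=90.0, sigma=10.0, xi=0.1, T=10)
z100 = gev_return_level(mu=90.0, sigma=10.0, xi=0.1, T=100)
```

With a positive shape parameter the return level grows without bound as the return period increases, which is why shape estimation is the delicate step the nonparametric alternatives aim to avoid.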

  4. Parametrization of DFTB3/3OB for Magnesium and Zinc for Chemical and Biological Applications

    PubMed Central

    2015-01-01

    We report the parametrization of the approximate density functional theory, DFTB3, for magnesium and zinc for chemical and biological applications. The parametrization strategy follows that established in previous work that parametrized several key main group elements (O, N, C, H, P, and S). This 3OB set of parameters can thus be used to study many chemical and biochemical systems. The parameters are benchmarked using both gas-phase and condensed-phase systems. The gas-phase results are compared to DFT (mostly B3LYP), ab initio (MP2 and G3B3), and PM6, as well as to a previous DFTB parametrization (MIO). The results indicate that DFTB3/3OB is particularly successful at predicting structures, including rather complex dinuclear metalloenzyme active sites, while being semiquantitative (with a typical mean absolute deviation (MAD) of ∼3–5 kcal/mol) for energetics. Single-point calculations with high-level quantum mechanics (QM) methods generally lead to very satisfying (a typical MAD of ∼1 kcal/mol) energetic properties. DFTB3/MM simulations for solution and two enzyme systems also lead to encouraging structural and energetic properties in comparison to available experimental data. The remaining limitations of DFTB3, such as the treatment of interaction between metal ions and highly charged/polarizable ligands, are also discussed. PMID:25178644

  5. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography.

    PubMed

    Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-06-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues while maintaining the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05), and the two analyses yielded similar results, although the non-parametric one was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that random field theory is a valid approach for analyzing EIT images of neural activity.
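
The non-parametric benchmark in this kind of validation is typically a permutation test whose maximum statistic over voxels controls the family-wise error rate without random field theory's smoothness assumptions. A sketch on simulated group images (the subject count matches the paper's 8 animals, but the voxel count, effect size and noise are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_vox = 8, 200
data = rng.normal(0, 1, (n_subj, n_vox))
data[:, :10] += 3.0                       # a strong effect in 10 voxels

def max_t(signs, d):
    # one-sample t over subjects after sign flips, maximized over voxels
    d = d * signs[:, None]
    t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
    return t.max()

observed = max_t(np.ones(n_subj), data)
# under H0 the subject-level images are symmetric about zero,
# so random sign flips generate the null distribution of the max statistic
null = [max_t(rng.choice([-1.0, 1.0], n_subj), data) for _ in range(500)]
p_fwe = (1 + sum(m >= observed for m in null)) / (1 + len(null))
```

Comparing such permutation-corrected p-values against random-field-theory thresholds is the essence of the validation described above.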

  7. Rank-based permutation approaches for non-parametric factorial designs.

    PubMed

    Umlauft, Maria; Konietschke, Frank; Pauly, Markus

    2017-11-01

    Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up, Wald-type statistics and ANOVA-type statistics are the current state of the art. The former is asymptotically exact but a rather liberal testing procedure for small to moderate sample sizes, while the latter is only an approximation and does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies support these theoretical findings. A real data set exemplifies the applicability of the approach. © 2017 The British Psychological Society.
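
For the simplest one-way layout, the flavor of a rank-based permutation test can be sketched as follows: compute the Kruskal-Wallis statistic on ranks, then recompute it over random relabelings of the pooled sample. This is only the classical special case, not the paper's general factorial procedure; the three simulated groups are assumptions.

```python
import numpy as np

def kruskal_wallis_H(groups):
    # H = 12/(N(N+1)) * sum n_i (Rbar_i - (N+1)/2)^2, assuming no ties
    all_vals = np.concatenate(groups)
    ranks = all_vals.argsort().argsort() + 1.0
    n = len(all_vals)
    H, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        H += len(g) * (r.mean() - (n + 1) / 2) ** 2
        start += len(g)
    return 12.0 / (n * (n + 1)) * H

def permutation_p(groups, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    sizes = [len(g) for g in groups]
    pooled = np.concatenate(groups)
    obs = kruskal_wallis_H(groups)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # relabel under H0
        splits = np.split(perm, np.cumsum(sizes)[:-1])
        count += kruskal_wallis_H(splits) >= obs
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1, 15)
b = rng.normal(2.0, 1, 15)    # shifted group
c = rng.normal(0.0, 1, 15)
p = permutation_p([a, b, c])
```

The permutation reference distribution replaces the chi-squared approximation, which is exactly the finite-sample exactness property under exchangeability mentioned above.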

  8. Biowaste home composting: experimental process monitoring and quality control.

    PubMed

    Tatàno, Fabio; Pagliaro, Giacomo; Di Giovanni, Paolo; Floriani, Enrico; Mangani, Filippo

    2015-04-01

    Because home composting is a prevention option in managing biowaste at local levels, the objective of the present study was to contribute to the knowledge of the process evolution and compost quality that can be expected and obtained, respectively, in this decentralized option. In this study, organized as the research portion of a provincial project on home composting in the territory of Pesaro-Urbino (Central Italy), four experimental composters were first initiated and temporally monitored. Second, two small sub-sets of selected provincial composters (directly operated by households involved in the project) underwent quality control on their compost products at two different temporal steps. The monitored experimental composters showed overall decreasing profiles versus composting time for moisture, organic carbon, and C/N, as well as overall increasing profiles for electrical conductivity and total nitrogen, which represented qualitative indications of progress in the process. Comparative evaluations of the monitored experimental composters also suggested some interactions in home composting, i.e., high C/N ratios limiting organic matter decomposition rates and final humification levels; high moisture contents restricting the internal temperature regime; nearly horizontal phosphorus and potassium evolutions contributing to limit the rates of increase in electrical conductivity; and prolonged biowaste additions contributing to limit the rate of decrease in moisture. The measures of parametric data variability in the two sub-sets of controlled provincial composters showed decreased variability in moisture, organic carbon, and C/N from the seventh to fifteenth month of home composting, as well as increased variability in electrical conductivity, total nitrogen, and humification rate, which could be considered compatible with the respective nature of decreasing and increasing parameters during composting. 
The modeled parametric kinetics in the monitored experimental composters, along with the evaluation of the parametric central tendencies in the sub-sets of controlled provincial composters, all indicate that 12-15 months is a suitable duration for the appropriate development of home composting in final and simultaneous compliance with typical reference limits. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Finding Rational Parametric Curves of Relative Degree One or Two

    ERIC Educational Resources Information Center

    Boyles, Dave

    2010-01-01

    A plane algebraic curve, the complete set of solutions to a polynomial equation: f(x, y) = 0, can in many cases be drawn using parametric equations: x = x(t), y = y(t). Using algebra, attempting to parametrize by means of rational functions of t, one discovers quickly that it is not the degree of f but the "relative degree," that describes how…
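
The standard textbook instance of such a rational parametrization is the unit circle, which has relative degree one: lines of rational slope through a fixed rational point meet the curve in exactly one further rational point. A small exact-arithmetic sketch:

```python
from fractions import Fraction

# Rational parametrization of x^2 + y^2 = 1: a line of slope t through
# (-1, 0) meets the circle in one more point, giving
#   x = (1 - t^2)/(1 + t^2),  y = 2t/(1 + t^2).
def circle_point(t):
    t = Fraction(t)
    return (1 - t**2) / (1 + t**2), 2 * t / (1 + t**2)

for t in (0, 1, Fraction(1, 2), -3):
    x, y = circle_point(t)
    assert x**2 + y**2 == 1    # every parameter value lands on the curve
```

Note t = 1/2 recovers the familiar 3-4-5 Pythagorean point (3/5, 4/5).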

  10. A robust semi-parametric warping estimator of the survivor function with an application to two-group comparisons

    PubMed Central

    Hutson, Alan D

    2018-01-01

    In this note, we develop a novel semi-parametric estimator of the survival curve that is comparable to the product-limit estimator under very relaxed assumptions. The estimator is based on a beta parametrization that warps the empirical distribution of the observed censored and uncensored data. The parameters are obtained using a pseudo-maximum likelihood approach that adjusts the survival curve to account for the censored observations. In the univariate setting, the new estimator tends to better extend the range of the survival estimation given a high degree of censoring. The key contribution of this paper, however, is a new two-group semi-parametric exact permutation test for comparing survival curves that is generally superior to the classic log-rank and Wilcoxon tests and provides the best global power across a variety of alternatives. The new test is readily extended to the k-group setting. PMID:26988931
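
The product-limit (Kaplan-Meier) estimator that serves as the comparison baseline above can be sketched in a few lines; the toy times and censoring indicators are illustrative.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate for distinct event times.
    times: observed times; events: 1 = death observed, 0 = censored."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, s = [], 1.0
    for t, d in pairs:
        if d:                              # step down only at observed deaths
            s *= (n_at_risk - 1) / n_at_risk
        surv.append((t, s))
        n_at_risk -= 1                     # censored subjects leave the risk set
    return surv

km = kaplan_meier([2, 3, 3.5, 5, 8], [1, 1, 0, 1, 0])
```

The estimator is a step function that is undefined beyond the largest observation when that observation is censored, which is precisely the range limitation the beta-warped estimator is designed to relax.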

  11. One-dimensional statistical parametric mapping in Python.

    PubMed

    Pataky, Todd C

    2012-01-01

    Statistical parametric mapping (SPM) is a topological methodology for detecting field changes in smooth n-dimensional continua. Many classes of biomechanical data are smooth and contained within discrete bounds and as such are well suited to SPM analyses. The current paper accompanies release of 'SPM1D', a free and open-source Python package for conducting SPM analyses on a set of registered 1D curves. Three example applications are presented: (i) kinematics, (ii) ground reaction forces and (iii) contact pressure distribution in probabilistic finite element modelling. In addition to offering a high-level interface to a variety of common statistical tests like t tests, regression and ANOVA, SPM1D also emphasises fundamental concepts of SPM theory through stand-alone example scripts. Source code and documentation are available at: www.tpataky.net/spm1d/.
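
The core object SPM1D thresholds is a 1D statistic field computed at every node of a set of registered curves. A minimal numpy sketch of a pointwise two-sample t field (this is the generic idea only, not the SPM1D API; the curve shapes, group sizes and effect are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
Q = 101                                   # nodes of the registered 1D curves
t_axis = np.linspace(0, 1, Q)
base = np.sin(2 * np.pi * t_axis)
# two groups of smooth curves; group B gets a localized bump at mid-cycle
A = base + rng.normal(0, 0.3, (10, Q))
B = base + 0.8 * np.exp(-((t_axis - 0.5) ** 2) / 0.01) + rng.normal(0, 0.3, (10, Q))

def t_field(A, B):
    # pooled-variance two-sample t statistic at every node
    na, nb = len(A), len(B)
    sp = ((na - 1) * A.var(0, ddof=1) + (nb - 1) * B.var(0, ddof=1)) / (na + nb - 2)
    return (B.mean(0) - A.mean(0)) / np.sqrt(sp * (1 / na + 1 / nb))

t = t_field(A, B)
peak_node = int(t.argmax())
```

SPM then asks how large the peaks of such a smooth field would be by chance (via random field theory), rather than correcting each of the Q nodes independently.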

  12. Parametric Shape Optimization of Lens-Focused Piezoelectric Ultrasound Transducers.

    PubMed

    Thomas, Gilles P L; Chapelon, Jean-Yves; Bera, Jean-Christophe; Lafon, Cyril

    2018-05-01

    Focused transducers composed of a flat piezoelectric ceramic coupled with an acoustic lens present an economical alternative to curved piezoelectric ceramics and are already in use in a variety of fields. Using a displacement/pressure (u/p) mixed finite element formulation combined with parametric level-set functions to implicitly define the boundaries between the materials and the fluid-structure interface, a method to optimize the shape of acoustic lenses made of either one or multiple materials is presented. With this method, two 400 kHz focused transducers using acoustic lenses were designed and built with different rapid prototyping methods, one of them made with a combination of two materials, and experimental measurements of the pressure field around the focal point are in good agreement with the presented model.
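
The parametric level-set idea used above (and in record 1 of this page) represents a shape implicitly as the positive set of a weighted sum of basis functions, so shape optimization updates a small weight vector instead of a full level-set grid. A minimal 2-D sketch with radial basis functions; the centers, width and bias are illustrative.

```python
import numpy as np

# phi(x) = sum_k w_k * exp(-|x - c_k|^2 / (2 s^2)) - bias
# The material region is {x : phi(x) > 0}.
def pals(weights, centers, width, bias, pts):
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2)) @ weights - bias

# two positive bumps -> a two-lobe, lens-like region on the unit square
centers = np.array([[0.35, 0.5], [0.65, 0.5]])
w = np.array([1.0, 1.0])
g = np.linspace(0, 1, 41)
pts = np.array([(x, y) for x in g for y in g])
phi = pals(w, centers, width=0.12, bias=0.5, pts=pts)
inside = phi > 0
frac = inside.mean()            # area fraction occupied by the "lens"
```

Because the boundary is wherever phi crosses zero, topological changes (lobes merging or splitting) happen automatically as the weights evolve, which is the appeal of level-set shape optimization.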

  13. Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes.

    PubMed

    Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D

    2016-10-01

    This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. © The Author 2016. Published by Oxford University Press.
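
The building block of such an approach is Gaussian Process regression: placing a GP prior on the unknown rate function and conditioning on observations. A minimal numpy sketch of the posterior mean with an RBF covariance (the kernel hyperparameters and the toy "decaying rate" data are assumptions, and real epidemic inference would use event-time likelihoods rather than direct observations):

```python
import numpy as np

def gp_posterior_mean(X, y, Xs, length=0.5, sigma_f=1.0, sigma_n=0.1):
    """Posterior mean of a zero-mean GP with RBF covariance,
    conditioned on noisy observations y at 1-D inputs X."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return sigma_f**2 * np.exp(-d2 / (2 * length**2))
    K = k(X, X) + sigma_n**2 * np.eye(len(X))   # noisy-data covariance
    return k(Xs, X) @ np.linalg.solve(K, y)     # k(Xs,X) K^{-1} y

X = np.linspace(0, 4, 25)
y = 2.0 * np.exp(-X)            # a decaying infection-rate style curve
Xs = np.array([0.5, 2.0])
mu = gp_posterior_mean(X, y, Xs)
```

The non-parametric element is that no functional form is assumed for the rate curve; the kernel only encodes smoothness.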

  14. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors decide their treatment from the aspect of mechanical analysis, the work built a computer assisted optimal system for treatment of femoral neck fracture oriented to clinical application. The whole system encompassed the following three parts: Preprocessing module, finite element mechanical analysis module, post processing module. Preprocessing module included parametric modeling of bone, parametric modeling of fracture face, parametric modeling of fixed screw and fixed position and input and transmission of model parameters. Finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting and batch processing operation. Post processing module included extraction and display of batch processing operation results, image generation of batch processing operation, optimal program operation and optimal result display. The system implemented the whole operations from input of fracture parameters to output of the optimal fixed plan according to specific patient real fracture parameter and optimal rules, which demonstrated the effectiveness of the system. Meanwhile, the system had a friendly interface, simple operation and could improve the system function quickly through modifying single module.

  15. Accurate segmentation framework for the left ventricle wall from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Sliman, H.; Khalifa, F.; Elnakib, A.; Soliman, A.; Beache, G. M.; Gimel'farb, G.; Emam, A.; Elmaghraby, A.; El-Baz, A.

    2013-10-01

    We propose a novel, fast, robust, bi-directional coupled parametric deformable model to segment the left ventricle (LV) wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of LV wall to track the evolution of the parametric deformable models control points. To accurately estimate the marginal density of each deformable model control point, the empirical marginal grey level distributions (first-order appearance) inside and outside the boundary of the deformable model are modeled with adaptive linear combinations of discrete Gaussians (LCDG). The second order visual appearance of the LV wall is accurately modeled with a new rotationally invariant second-order Markov-Gibbs random field (MGRF). We tested the proposed segmentation approach on 15 data sets in 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground truth and automated segmentation contours. Our approach achieves a mean DSC value of 0.926±0.022 and AD value of 2.16±0.60 compared to two other level set methods that achieve 0.904±0.033 and 0.885±0.02 for DSC; and 2.86±1.35 and 5.72±4.70 for AD, respectively.

  16. Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.

    2016-01-01

    Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, there is not a built-in method for defining the high-lift components in OpenVSP in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper seeks instead to utilize OpenVSP's existing capabilities, and establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model for the Energy-Efficient Transport (EET) AR12 wind-tunnel model has been created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.
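
The hinge-axis motion described above amounts to rotating the component's points about a hinge line. A generic sketch using Rodrigues' rotation formula (the hinge geometry and deflection below are illustrative, not OpenVSP's parameter-linking syntax):

```python
import numpy as np

def rotate_about_hinge(points, hinge_origin, hinge_axis, deflection_deg):
    """Rotate points about a hinge line via Rodrigues' formula:
    v' = v cos(th) + (k x v) sin(th) + k (k.v)(1 - cos(th))."""
    k = np.asarray(hinge_axis, float)
    k /= np.linalg.norm(k)
    th = np.radians(deflection_deg)
    p = np.asarray(points, float) - hinge_origin   # to hinge-local frame
    rot = (p * np.cos(th)
           + np.cross(k, p) * np.sin(th)
           + k * (p @ k)[:, None] * (1 - np.cos(th)))
    return rot + hinge_origin                      # back to model frame

# deflect a point 1 unit behind a spanwise (y-axis) hinge by 30 degrees
pt = rotate_about_hinge([[1.0, 0.0, 0.0]], [0, 0, 0], [0, 1, 0], 30.0)
```

Defining the motion by a single deflection angle in the hinge-axis frame, then mapping it to translations/rotations in the global frame, is the pattern the paper implements with Advanced Parameter Linking.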

  17. Model-free estimation of the psychometric function

    PubMed Central

    Żychaluk, Kamila; Foster, David H.

    2009-01-01

    A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
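
A local linear fit with a cross-validated bandwidth can be sketched briefly; the simulated psychometric data and the candidate bandwidth grid are assumptions, and the original work uses a more careful bandwidth search.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate at x0 with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    A = X.T @ (w[:, None] * X)
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)[0]     # intercept = fitted value at x0

def loocv_score(h, x, y):
    # leave-one-out cross-validation error for bandwidth h
    errs = []
    for i in range(len(x)):
        m = np.ones(len(x), bool); m[i] = False
        errs.append((y[i] - local_linear(x[i], x[m], y[m], h)) ** 2)
    return np.mean(errs)

rng = np.random.default_rng(7)
x = np.linspace(-2, 2, 40)                  # stimulus levels
p_true = 1 / (1 + np.exp(-2 * x))           # true (unknown) psychometric curve
y = p_true + rng.normal(0, 0.05, 40)        # noisy proportions correct

hs = [0.1, 0.2, 0.4, 0.8]
best_h = min(hs, key=lambda h: loocv_score(h, x, y))
fit_mid = local_linear(0.0, x, y, best_h)   # estimate at mid stimulus level
```

No Gaussian or Weibull shape is assumed anywhere; only the bandwidth controls smoothness, which is why its cross-validated choice is described as critical.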

  18. Minimization of Basis Risk in Parametric Earthquake Cat Bonds

    NASA Astrophysics Data System (ADS)

    Franco, G.

    2009-12-01

    A catastrophe -cat- bond is an instrument used by insurance and reinsurance companies, by governments or by groups of nations to cede catastrophic risk to the financial markets, which are capable of supplying cover for highly destructive events, surpassing the typical capacity of traditional reinsurance contracts. Parametric cat bonds, a specific type of cat bonds, use trigger mechanisms or indices that depend on physical event parameters published by respected third parties in order to determine whether a part or the entire bond principal is to be paid for a certain event. First generation cat bonds, or cat-in-a-box bonds, display a trigger mechanism that consists of a set of geographic zones in which certain conditions need to be met by an earthquake’s magnitude and depth in order to trigger payment of the bond principal. Second generation cat bonds use an index formulation that typically consists of a sum of products of a set of weights by a polynomial function of the ground motion variables reported by a geographically distributed seismic network. These instruments are especially appealing to developing countries with incipient insurance industries wishing to cede catastrophic losses to the financial markets because the payment trigger mechanism is transparent and does not involve the parties ceding or accepting the risk, significantly reducing moral hazard. In order to be successful in the market, however, parametric cat bonds have typically been required to specify relatively simple trigger conditions. The consequence of such simplifications is the increase of basis risk. This risk represents the possibility that the trigger mechanism fails to accurately capture the actual losses of a catastrophic event, namely that it does not trigger for a highly destructive event or vice versa, that a payment of the bond principal is caused by an event that produced insignificant losses. 
The first case disfavors the sponsor, who was seeking cover for its losses, while the second disfavors the investor, who loses part of the investment without a reasonable cause. A streamlined and fairly automated methodology has been developed to design parametric triggers that minimize the basis risk while still maintaining their relative simplicity. Basis risk is minimized in both first- and second-generation parametric cat bonds through an optimization procedure that aims to find the most appropriate magnitude thresholds, geographic zones, and weight index values. Sensitivity analyses of different design assumptions show that first-generation cat bonds are typically affected by a large negative basis risk, namely the risk that the bond will not trigger for events within the risk level transferred, unless a sufficiently small geographic resolution is selected to define the trigger zones. Second-generation cat bonds, in contrast, display a bias towards negative or positive basis risk depending on the degree of the polynomial used as well as on other design parameters. Two examples are presented: the construction of a first-generation parametric trigger mechanism for Costa Rica and the design of a second-generation parametric index for Japan.
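
A first-generation "cat-in-a-box" trigger reduces to a simple table lookup over predefined zones. The zone coordinates, thresholds and payout fractions below are purely illustrative, not any actual bond's terms:

```python
# Each zone: (lat_min, lat_max, lon_min, lon_max,
#             min_magnitude, max_depth_km, payout_fraction)
ZONES = [
    (9.0, 11.0, -86.0, -83.0, 7.0, 30.0, 1.0),
    (8.0,  9.0, -84.0, -82.0, 7.5, 30.0, 0.5),
]

def payout(lat, lon, magnitude, depth_km):
    """Fraction of principal paid for a reported event."""
    for la0, la1, lo0, lo1, m_min, d_max, frac in ZONES:
        if (la0 <= lat <= la1 and lo0 <= lon <= lo1
                and magnitude >= m_min and depth_km <= d_max):
            return frac
    return 0.0

full = payout(10.0, -84.5, 7.3, 15.0)   # inside zone 1, strong, shallow
none = payout(10.0, -84.5, 6.5, 15.0)   # just below threshold: no payout
```

The second call illustrates basis risk directly: a magnitude-6.5 event can still be destructive, yet the box conditions leave the sponsor uncovered. Optimizing the box boundaries and thresholds against a loss model is what shrinks this gap.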

  19. Sensitivity analysis for parametric generalized implicit quasi-variational-like inclusions involving P-[eta]-accretive mappings

    NASA Astrophysics Data System (ADS)

    Kazmi, K. R.; Khan, F. A.

    2008-01-01

    In this paper, using the proximal-point mapping technique of P-[eta]-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving a P-[eta]-accretive mapping in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as an extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasivariational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].

  20. Investigation of tool wear and surface roughness on machining of titanium alloy with MT-CVD cutting tool

    NASA Astrophysics Data System (ADS)

    Maity, Kalipada; Pradhan, Swastik

    2018-04-01

    In this study, machining of titanium alloy (grade 5) is carried out using an MT-CVD coated cutting tool. Titanium alloys possess a superior strength-to-weight ratio with good corrosion resistance, and many industries use them for manufacturing various types of lightweight components. Parts made from Ti-6Al-4V are widely used in the aerospace, biomedical, automotive and marine sectors. Conventional machining of this material is very difficult due to its low thermal conductivity and high chemical reactivity. To achieve a good surface finish with minimum tool wear, the machining is carried out using an MT-CVD coated cutting tool. The experiment follows a Taguchi L27 array layout with three cutting variables and levels. To find the optimum parametric setting, the desirability function analysis (DFA) approach is used. Analysis of variance is performed to determine the percentage contribution of each cutting variable. The optimum parametric setting obtained from DFA was validated through a confirmation test.
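
Desirability function analysis maps each smaller-the-better response (tool wear, surface roughness) to a desirability d in [0, 1] and ranks runs by the geometric mean of the d values. A sketch with invented response values and limits, purely to show the mechanics:

```python
def d_smaller_is_better(y, y_min, y_max, weight=1.0):
    """Desirability for a smaller-the-better response:
    1 at or below y_min, 0 at or above y_max, linear (weighted) between."""
    if y <= y_min:
        return 1.0
    if y >= y_max:
        return 0.0
    return ((y_max - y) / (y_max - y_min)) ** weight

# hypothetical measured responses for three experimental runs
runs = {
    "run1": {"wear_mm": 0.12, "Ra_um": 1.8},
    "run2": {"wear_mm": 0.08, "Ra_um": 1.2},
    "run3": {"wear_mm": 0.20, "Ra_um": 0.9},
}
limits = {"wear_mm": (0.05, 0.25), "Ra_um": (0.8, 2.0)}

def composite(r):
    # composite desirability = geometric mean of individual desirabilities
    ds = [d_smaller_is_better(r[k], *limits[k]) for k in limits]
    return (ds[0] * ds[1]) ** 0.5

best = max(runs, key=lambda name: composite(runs[name]))
```

The run with the highest composite desirability is declared the optimum parametric setting, which is then checked by a confirmation experiment.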

  1. Waveform inversion for orthorhombic anisotropy with P waves: feasibility and resolution

    NASA Astrophysics Data System (ADS)

    Kazei, Vladimir; Alkhalifah, Tariq

    2018-05-01

    Various parametrizations have been suggested to simplify inversions of first arrivals, or P waves, in orthorhombic anisotropic media, but the number and type of retrievable parameters have not been decisively determined. We show that only six parameters can be retrieved from the dynamic linearized inversion of P waves. These parameters are different from the six parameters needed to describe the kinematics of P waves. Reflection-based radiation patterns from the P-P scattered waves are remapped into the spectral domain to allow for our resolution analysis based on the effective angle of illumination concept. Singular value decomposition of the spectral sensitivities from various azimuths, offset coverage scenarios and data bandwidths allows us to quantify the resolution of different parametrizations, taking into account the signal-to-noise ratio in a given experiment. According to our singular value analysis, when the primary goal of inversion is determining the velocity of the P waves, gradually adding anisotropy of lower orders (isotropic, vertically transversally isotropic and orthorhombic) in hierarchical parametrization is the best choice. Hierarchical parametrization reduces the trade-off between the parameters and makes gradual introduction of lower anisotropy orders straightforward. When all the anisotropic parameters affecting P-wave propagation need to be retrieved simultaneously, the classic parametrization of orthorhombic medium with elastic stiffness matrix coefficients and density is a better choice for inversion. We provide estimates of the number and set of parameters that can be retrieved from surface seismic data in different acquisition scenarios. To set up an inversion process, the singular values determine the number of parameters that can be inverted and the resolution matrices from the parametrizations can be used to ascertain the set of parameters that can be resolved.
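
The singular-value analysis above can be illustrated on a toy sensitivity matrix: the number of singular values above a noise-dependent cutoff estimates how many parameter combinations the data resolve. The matrices below are random stand-ins for real radiation-pattern sensitivities.

```python
import numpy as np

rng = np.random.default_rng(5)
# rows = measurements, columns = model parameters
J_well = rng.normal(size=(50, 6))              # 6 independent sensitivities
mix = rng.normal(size=(50, 3))
J_poor = np.hstack([mix, mix @ rng.normal(size=(3, 3))])  # 6 columns, rank 3

def n_resolved(J, rel_cutoff=1e-6):
    # count singular values above a relative cutoff (noise floor proxy)
    s = np.linalg.svd(J, compute_uv=False)
    return int((s > rel_cutoff * s[0]).sum())

r_well, r_poor = n_resolved(J_well), n_resolved(J_poor)
```

In the second matrix, three parameter directions are exact linear combinations of the others, mimicking the trade-offs between anisotropy parameters that make only six of the kinematic parameters retrievable.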

  2. A Parametric k-Means Algorithm

    PubMed Central

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
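
The algorithm in the abstract can be sketched end to end for a univariate normal: estimate the parameters by maximum likelihood, simulate a very large sample from the fitted distribution, and run k-means on it. The data are simulated, and for N(mu, sd) the two principal points are known in closed form (mu ± sd·sqrt(2/π)), which gives a check.

```python
import numpy as np

def kmeans_1d(data, k, iters=100, seed=0):
    # plain Lloyd's algorithm in one dimension
    rng = np.random.default_rng(seed)
    centers = rng.choice(data, k, replace=False)
    for _ in range(iters):
        labels = np.abs(data[:, None] - centers[None, :]).argmin(1)
        centers = np.array([data[labels == j].mean() for j in range(k)])
    return np.sort(centers)

rng = np.random.default_rng(1)
observed = rng.normal(10.0, 2.0, 500)
mu_hat, sd_hat = observed.mean(), observed.std()   # ML estimates for a normal
big = rng.normal(mu_hat, sd_hat, 100_000)          # large simulated sample
centers = kmeans_1d(big, k=2)
expected = mu_hat + np.array([-1, 1]) * sd_hat * np.sqrt(2 / np.pi)
```

Running k-means on the fitted parametric distribution rather than on the 500 raw observations is what reduces the variance of the principal-point estimates.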

  3. Parametric symmetries in exactly solvable real and PT symmetric complex potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, Rajesh Kumar, E-mail: rajeshastrophysics@gmail.com; Khare, Avinash, E-mail: khare@physics.unipune.ac.in; Bagchi, Bijan, E-mail: bbagchi123@gmail.com

    In this paper, we discuss the parametric symmetries in different exactly solvable systems characterized by real or complex PT symmetric potentials. We focus our attention on the conventional potentials such as the generalized Pöschl-Teller (GPT), Scarf-I, and PT symmetric Scarf-II, which are invariant under certain parametric transformations. The resulting set of potentials is shown to yield a completely different behavior of the bound state solutions. Further, the supersymmetric partner potentials acquire different forms under such parametric transformations, leading to new sets of exactly solvable real and PT symmetric complex potentials. These potentials are also observed to be shape invariant (SI) in nature. We subsequently take up a study of the newly discovered rationally extended SI potentials, corresponding to the above mentioned conventional potentials, whose bound state solutions are associated with the exceptional orthogonal polynomials (EOPs). We discuss the transformations of the corresponding Casimir operator employing the properties of the so(2, 1) algebra.

  4. Model risk for European-style stock index options.

    PubMed

    Gençay, Ramazan; Gibson, Rajna

    2007-01-01

    In empirical modeling, there have been two strands for pricing in the options literature, namely the parametric and nonparametric models. Often, the support for the nonparametric methods is based on a benchmark such as the Black-Scholes (BS) model with constant volatility. In this paper, we study the stochastic volatility (SV) and stochastic volatility random jump (SVJ) models as parametric benchmarks against feedforward neural network (FNN) models, a class of neural network models. Our choice of FNN models is due to their well-studied universal approximation properties for an unknown function and its partial derivatives. Since the partial derivatives of an option pricing formula are risk pricing tools, an accurate estimation of the unknown option pricing function is essential for pricing and hedging. Our findings indicate that FNN models are robust option pricing tools that outperform their sophisticated parametric counterparts in predictive settings. Two factors explain the superiority of FNN models over the parametric models in forecast settings: the nonnormality of return distributions and adaptive learning.
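
    For reference, the constant-volatility Black-Scholes benchmark mentioned above can be written directly from the standard formula (a minimal sketch; parameter values are illustrative only):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call under constant volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money example: spot 100, strike 100, 1 year, r = 5%, sigma = 20%.
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 4))  # ~10.4506
```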

  5. Definition of NASTRAN sets by use of parametric geometry

    NASA Technical Reports Server (NTRS)

    Baughn, Terry V.; Tiv, Mehran

    1989-01-01

    Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics, which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow-on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, a complex model may not lend itself to easy identification of the group of grid points desired for data recovery or application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point IDs from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.

  6. Bayesian Dose-Response Modeling in Sparse Data

    NASA Astrophysics Data System (ADS)

    Kim, Steven B.

    This book discusses Bayesian dose-response modeling in small samples applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they have disagreeing prior opinions. In this regard, we consider compromising the disagreement and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics, which reflects the perspective of trial participants. The second level is population-level ethics, which reflects the perspective of future patients. We extensively compare two existing statistical methods which focus on each perspective and propose a new method which balances the two conflicting perspectives. In toxicology studies, experimental units are living animals. Here we focus on a potential non-monotonic dose-response relationship which is known as hormesis. Briefly, hormesis is a phenomenon which can be characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessments, the estimation of a parameter, which is known as a benchmark dose, can be highly sensitive to a class of assumptions, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as a possibility. In addition, we discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory.
Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a wrong parametric assumption. In this regard, we consider a robust experimental design which does not require any parametric assumption.

  7. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Summary: An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
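
    The isotonic-regression step underlying this kind of BMD estimation can be sketched with the classic pool-adjacent-violators algorithm (PAVA). The code below is a simplified illustration, not the paper's estimator: it monotonizes observed quantal response rates and then inverts the fitted curve by linear interpolation to find the dose reaching a benchmark extra risk. The dose levels, rates, and the `bmd_extra_risk` helper are invented for the example.

```python
def pava(y, w):
    """Pool-adjacent-violators: weighted isotonic (non-decreasing) fit."""
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            wn = wts[-2] + wts[-1]
            v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wn
            cnts[-2:] = [cnts[-2] + cnts[-1]]
            vals[-2:] = [v]
            wts[-2:] = [wn]
    out = []
    for v, c in zip(vals, cnts):
        out.extend([v] * c)
    return out

def bmd_extra_risk(doses, risks, bmr=0.1):
    """Smallest dose at which the fitted extra risk reaches `bmr`,
    by linear interpolation between design doses (illustrative)."""
    p0 = risks[0]
    target = p0 + bmr * (1.0 - p0)
    for (d0, d1), (r0, r1) in zip(zip(doses, doses[1:]), zip(risks, risks[1:])):
        if r0 < target <= r1:
            return d0 + (target - r0) / (r1 - r0) * (d1 - d0)
    return None  # benchmark response not reached on the tested range

doses = [0.0, 1.0, 2.0, 4.0]
obs = [0.10, 0.30, 0.20, 0.50]        # observed quantal response rates
fit = pava(obs, [50, 50, 50, 50])     # -> [0.10, 0.25, 0.25, 0.50]
print(fit, bmd_extra_risk(doses, fit, bmr=0.1))
```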

  8. A level-set method for two-phase flows with moving contact line and insoluble surfactant

    NASA Astrophysics Data System (ADS)

    Xu, Jian-Jun; Ren, Weiqing

    2014-04-01

    A level-set method for two-phase flows with moving contact line and insoluble surfactant is presented. The mathematical model consists of the Navier-Stokes equation for the flow field, a convection-diffusion equation for the surfactant concentration, together with the Navier boundary condition and a condition for the dynamic contact angle derived by Ren et al. (2010) [37]. The numerical method is based on the level-set continuum surface force method for two-phase flows with surfactant developed by Xu et al. (2012) [54] with careful treatment of the boundary conditions. The numerical method consists of three components: a flow solver for the velocity field, a solver for the surfactant concentration, and a solver for the level-set function. In the flow solver, the surface force is dealt with using the continuum surface force model. The unbalanced Young stress at the moving contact line is incorporated into the Navier boundary condition. A convergence study of the numerical method and a parametric study are presented. The influence of surfactant on the dynamics of the moving contact line is illustrated using examples. The capability of the level-set method to handle complex geometries is demonstrated by simulating a pendant drop detaching from a wall under gravity.
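
    The core level-set idea — representing the interface implicitly as the zero set of a function and evolving that function with the flow — can be illustrated far more simply than the paper's coupled solver. The sketch below (grid size, velocity, and time step are arbitrary choices, not taken from the paper) advects a signed-distance function for a circle with a uniform velocity using first-order upwinding and recovers the translated interface:

```python
import numpy as np

# Grid and initial signed-distance function for a circle of radius 0.3.
n = 101
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")   # X varies along axis 0
phi = np.sqrt(X**2 + Y**2) - 0.3          # phi < 0 inside the interface

# Advect phi with uniform velocity u = (1, 0) by first-order upwinding:
# phi_t + u * phi_x = 0, backward difference in x since u > 0.
u, dt, steps = 1.0, 0.005, 40             # total travel: u * dt * steps = 0.2
for _ in range(steps):
    phi[1:, :] -= dt * u * (phi[1:, :] - phi[:-1, :]) / dx

# The zero level set (the interface) should have translated by ~0.2 in x.
inside = phi < 0
print(X[inside].mean())                   # centroid x-coordinate, ~0.2
```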

  9. Development and Validation of a Statistical Shape Modeling-Based Finite Element Model of the Cervical Spine Under Low-Level Multiple Direction Loading Conditions

    PubMed Central

    Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.

    2014-01-01

    Cervical spinal injuries are a significant concern in all trauma injuries. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data for low-loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry, and soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading conditions in multiple loading directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, along with the ability to statistically validate the overall model performance, provide innovative and important steps toward describing the differences in vertebral morphology, spinal curvature, and variation in material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will lead to understanding and mitigating the risks of injury in the spine and other musculoskeletal structures. PMID:25506051

  10. Parametric Method Performance for Dynamic 3'-Deoxy-3'-18F-Fluorothymidine PET/CT in Epidermal Growth Factor Receptor-Mutated Non-Small Cell Lung Carcinoma Patients Before and During Therapy.

    PubMed

    Kramer, Gerbrand Maria; Frings, Virginie; Heijtel, Dennis; Smit, E F; Hoekstra, Otto S; Boellaard, Ronald

    2017-06-01

    The objective of this study was to validate several parametric methods for quantification of 3'-deoxy-3'-18F-fluorothymidine (18F-FLT) PET in advanced-stage non-small cell lung carcinoma (NSCLC) patients with an activating epidermal growth factor receptor mutation who were treated with gefitinib or erlotinib. Furthermore, we evaluated the impact of noise on the accuracy and precision of the parametric analyses of dynamic 18F-FLT PET/CT to assess the robustness of these methods. Methods: Ten NSCLC patients underwent dynamic 18F-FLT PET/CT at baseline and 7 and 28 d after the start of treatment. Parametric images were generated using plasma input Logan graphic analysis and 2 basis function-based methods: a 2-tissue-compartment basis function model (BFM) and spectral analysis (SA). Whole-tumor-averaged parametric pharmacokinetic parameters were compared with those obtained by nonlinear regression of the tumor time-activity curve using a reversible 2-tissue-compartment model with blood volume fraction. In addition, 2 statistically equivalent datasets were generated by countwise splitting the original list-mode data, each containing 50% of the total counts. Both new datasets were reconstructed, and parametric pharmacokinetic parameters were compared between the 2 replicates and the original data. Results: After the settings of each parametric method were optimized, distribution volumes (VT) obtained with Logan graphic analysis, BFM, and SA all correlated well with those derived using nonlinear regression at baseline and during therapy (R2 ≥ 0.94; intraclass correlation coefficient > 0.97). SA-based VT images were most robust to increased noise on a voxel level (repeatability coefficient, 16% vs. >26%). Yet BFM generated the most accurate K1 values (R2 = 0.94; intraclass correlation coefficient, 0.96). 
Parametric K1 data showed a larger variability in general; however, no differences were found in robustness between methods (repeatability coefficient, 80%-84%). Conclusion: Both BFM and SA can generate quantitatively accurate parametric 18F-FLT VT images in NSCLC patients before and during therapy. SA was more robust to noise, yet BFM provided more accurate parametric K1 data. We therefore recommend BFM as the preferred parametric method for analysis of dynamic 18F-FLT PET/CT studies; however, SA can also be used. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
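
    The plasma-input Logan graphic analysis used above can be illustrated on synthetic data. The sketch below is not the study's implementation; the rate constants and input function are invented. It simulates a reversible one-tissue compartment model and recovers the distribution volume VT = K1/k2 as the slope of the late linear segment of the Logan plot:

```python
import numpy as np

# Synthetic reversible one-tissue compartment model:
#   dCt/dt = K1*Cp - k2*Ct,  distribution volume VT = K1/k2.
K1, k2 = 0.1, 0.05                        # 1/min, so VT = 2.0
dt = 0.01
t = np.arange(0.0, 200.0 + dt, dt)        # minutes
Cp = np.exp(-0.05 * t)                    # decaying plasma input (arb. units)

Ct = np.zeros_like(t)                     # Euler integration of tissue curve
for i in range(1, t.size):
    Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])

def cumtrapz(y):
    """Cumulative trapezoidal integral on the uniform time grid."""
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * dt / 2.0)))

# Logan transform: int(Ct)/Ct versus int(Cp)/Ct becomes linear after t*;
# the slope of the late linear segment estimates VT.
late = t > 60.0
xL = cumtrapz(Cp)[late] / Ct[late]
yL = cumtrapz(Ct)[late] / Ct[late]
slope, intercept = np.polyfit(xL, yL, 1)
print(slope)                              # ~2.0
```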

  11. Nonlinear Adjustment with or without Constraints, Applicable to Geodetic Models

    DTIC Science & Technology

    1989-03-01

    corrections are neglected, resulting in the familiar (linearized) observation equations. In matrix notation, the latter are expressed by V = AX + L ...where A is the design matrix, X = Xa - X0 is the column-vector of parametric corrections, V = La - Lb is the column-vector of residuals, and L = L0 - Lb is the...X0 corresponds to the set ua of model-surface coordinates describing the initial point P0. The final set of parametric corrections, X, then

  12. Resolution of a Rank-Deficient Adjustment Model Via an Isomorphic Geometrical Setup with Tensor Structure.

    DTIC Science & Technology

    1987-03-01

    would be transcribed as L = AX - V, where L, X, and V are the vectors of constant terms, parametric corrections, and residuals, respectively. The...tensor. Just as du' represents the parametric corrections in tensor notation, the necessary associated metric tensor a' corresponds to the variance...observations, n residuals, and n parametric corrections to X0 (an initial set of parameters), respectively. The vector L is formed as...L where

  13. Performance characterization of a low power hydrazine arcjet

    NASA Technical Reports Server (NTRS)

    Knowles, S. C.; Smith, W. W.; Curran, F. M.; Haag, T. W.

    1987-01-01

    Hydrazine arcjets, which offer substantial performance advantages over alternatives in geosynchronous satellite stationkeeping applications, have been studied with attention to startup, materials compatibility, lifetime, and power conditioning unit design issues. Devices in the 1000-3000 W output range have been characterized for several different electrode configurations. Constrictor length and diameter, electrode gap setting, and vortex strength have been parametrically studied in order to ascertain the influence of each on specific impulse and efficiency; specific impulse levels greater than 700 sec have been achieved.

  14. A unified framework for weighted parametric multiple test procedures.

    PubMed

    Xi, Dong; Glimm, Ekkehard; Maurer, Willi; Bretz, Frank

    2017-09-01

    We describe a general framework for weighted parametric multiple test procedures based on the closure principle. We utilize general weighting strategies that can reflect complex study objectives and include many procedures in the literature as special cases. The proposed weighted parametric tests bridge the gap between rejection rules using either adjusted significance levels or adjusted p-values. This connection is made by allowing intersection hypotheses of the underlying closed test procedure to be tested at a level smaller than α. This may also be necessary to take certain study situations into account. For such cases we introduce a subclass of exact α-level parametric tests that satisfy the consonance property. When the correlation is known only for certain subsets of the test statistics, a new procedure is proposed to fully utilize this knowledge within each subset. We illustrate the proposed weighted parametric tests using a clinical trial example and conduct a simulation study to investigate their operating characteristics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
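
    The flavor of such a parametric adjustment can be sketched for the simplest case: two correlated one-sided z-tests sharing a common critical value, found by bisection on the bivariate normal CDF. This is an illustration only (equal weights, two hypotheses; the function name is invented, and it is not the paper's procedure). At ρ = 0 it reduces to the Šidák value, while positive correlation yields a smaller critical value:

```python
import numpy as np
from scipy.stats import multivariate_normal

def critical_value(rho, alpha=0.05, lo=0.0, hi=6.0, tol=1e-6):
    """Common critical value c with P(Z1 > c or Z2 > c) = alpha for a
    bivariate standard normal (Z1, Z2) with correlation rho."""
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    while hi - lo > tol:
        c = 0.5 * (lo + hi)
        # Familywise error <= alpha  <=>  P(both <= c) >= 1 - alpha.
        if mvn.cdf([c, c]) >= 1.0 - alpha:
            hi = c
        else:
            lo = c
    return 0.5 * (lo + hi)

print(critical_value(0.0))   # Sidak value for two independent tests, ~1.9545
print(critical_value(0.8))   # smaller: the known correlation is exploited
```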

  15. Reconfigurable fuzzy cell

    NASA Technical Reports Server (NTRS)

    Salazar, George A. (Inventor)

    1993-01-01

    This invention relates to a reconfigurable fuzzy cell comprising a digitally controlled programmable-gain operational amplifier, an analog-to-digital converter, an electrically erasable PROM, an 8-bit counter and comparator, and supporting logic configured to achieve, in real-time fuzzy systems, high-throughput grade-of-membership or membership-value conversion of multi-input sensor data. The invention provides a flexible multiplexing-capable configuration, implemented entirely in hardware, for effectuating S-, Z-, and PI-membership functions or combinations thereof, based upon fuzzy logic level-set theory. A membership value table storing 'knowledge data' for each of the S-, Z-, and PI-functions is contained within a nonvolatile memory for storing bits of membership and parametric information in a plurality of address spaces. Based upon parametric and control signals, analog sensor data is digitized and converted into grade-of-membership data. In situ learn and recognition modes of operation are also provided.

  16. Results of the JIMO Follow-on Destinations Parametric Studies

    NASA Technical Reports Server (NTRS)

    Noca, Muriel A.; Hack, Kurt J.

    2005-01-01

    NASA's proposed Jupiter Icy Moons Orbiter (JIMO) mission, currently in conceptual development, is to be the first of a series of highly capable Nuclear Electric Propulsion (NEP) science-driven missions. To understand the implications of a multi-mission capability requirement on the JIMO vehicle and mission, the NASA Prometheus Program initiated a set of parametric high-level studies to be followed by a series of more in-depth studies. The JIMO potential follow-on destinations identified include a Saturn system tour, a Neptune system tour, a Kuiper Belt Objects rendezvous, an Interstellar Precursor mission, a Multiple Asteroid Sample Return and a Comet Sample Return. This paper shows that the baseline JIMO reactor and design envelope can satisfy five out of six of the follow-on destinations. Flight time to these destinations can be significantly reduced by increasing the launch energy and/or by inserting gravity assists into the heliocentric phase.

  17. Do Students Expect Compensation for Wage Risk?

    ERIC Educational Resources Information Center

    Schweri, Juerg; Hartog, Joop; Wolter, Stefan C.

    2011-01-01

    We use a unique data set about the wage distribution that Swiss students expect for themselves ex ante, deriving parametric and non-parametric measures to capture expected wage risk. These wage risk measures are unfettered by heterogeneity which handicapped the use of actual market wage dispersion as risk measure in earlier studies. Students in…

  18. Model Adaptation in Parametric Space for POD-Galerkin Models

    NASA Astrophysics Data System (ADS)

    Gao, Haotian; Wei, Mingjun

    2017-11-01

    The development of low-order POD-Galerkin models is largely motivated by the expectation to use a model developed with a set of parameters at their native values to predict the dynamic behaviors of the same system under different parametric values; in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g. a steady state, the mean value of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For the simple case of flow past a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For the more complicated case of flow past an oscillating cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries; then more information (e.g. more POD modes) is required to predict the flow under different oscillation frequencies. Supported by ARL.
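
    The POD basis at the heart of such Galerkin models is readily computed from a snapshot matrix via the SVD. The sketch below uses a synthetic traveling-wave dataset (all sizes and the 99.9% energy cutoff are arbitrary choices for illustration): it extracts the dominant modes and verifies that projecting onto them reconstructs the snapshots:

```python
import numpy as np

# Synthetic snapshot matrix: each column is the 'flow state' at one time.
nx, nt = 200, 80
x = np.linspace(0.0, 2.0 * np.pi, nx)
t = np.linspace(0.0, 2.0 * np.pi, nt)
snapshots = np.array([np.sin(x - ti) + 0.3 * np.sin(2.0 * (x - ti))
                      for ti in t]).T

# POD: subtract the mean flow and take the SVD of the fluctuations.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% of the energy
modes = U[:, :r]                              # orthonormal POD basis

# Galerkin coefficients: project the snapshots onto the reduced basis.
a = modes.T @ (snapshots - mean_flow)
recon = mean_flow + modes @ a
print(r, np.max(np.abs(recon - snapshots)))
```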

  19. Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics

    PubMed Central

    Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven

    2011-01-01

    Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
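
    A simplified version of this workflow — a signed-rank test per biomarker followed by Benjamini-Hochberg control of the false discovery rate — can be sketched as follows. This is an illustration on synthetic data, not the paper's multivariate area-under-the-curve procedure; the sample sizes and effect sizes are invented:

```python
import numpy as np
from scipy.stats import wilcoxon

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean rejection vector controlling the false discovery rate."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= i*alpha/m
        reject[order[:k + 1]] = True
    return reject

# Paired baseline / induced-gingivitis levels for several 'biomarkers'.
rng = np.random.default_rng(0)
n_subjects, n_markers = 22, 8
baseline = rng.lognormal(0.0, 0.5, (n_subjects, n_markers))
induced = baseline * rng.lognormal(0.0, 0.3, (n_subjects, n_markers))
induced[:, :3] *= 1.8                      # the first 3 markers truly change

pvals = [wilcoxon(induced[:, j], baseline[:, j]).pvalue
         for j in range(n_markers)]
print(benjamini_hochberg(pvals, alpha=0.05))
```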

  20. Hybrid-Wing-Body Vehicle Composite Fuselage Analysis and Case Study

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    2014-01-01

    Recent progress in the structural analysis of a Hybrid Wing-Body (HWB) fuselage concept is presented with the objective of structural weight reduction under a set of critical design loads. This pressurized efficient HWB fuselage design is presently being investigated by the NASA Environmentally Responsible Aviation (ERA) project in collaboration with the Boeing Company, Huntington Beach. The Pultruded Rod-Stiffened Efficient Unitized Structure (PRSEUS) composite concept, developed at the Boeing Company, is approximately modeled for an analytical study and finite element analysis. Stiffened plate linear theories are employed for a parametric case study. Maximum deflection and stress levels are obtained with appropriate assumptions for a set of feasible stiffened panel configurations. An analytical parametric case study is presented to examine the effects of discrete stiffener spacing and skin thickness on structural weight, deflection and stress. A finite-element model (FEM) of an integrated fuselage section with bulkhead is developed for an independent assessment. Stress analysis and scenario-based case studies are conducted for design improvement. The specific weight of the improved fuselage concept FEM is computed and compared to previous studies, in order to assess the relative weight/strength advantages of this advanced composite airframe technology.

  1. Dynamic balancing of super-critical rotating structures using slow-speed data via parametric excitation

    NASA Astrophysics Data System (ADS)

    Tresser, Shachar; Dolev, Amit; Bucher, Izhak

    2018-02-01

    High-speed machinery is often designed to pass several "critical speeds", where vibration levels can be very high. To reduce vibrations, rotors usually undergo a mass balancing process, where the machine is rotated at its full speed range, during which the dynamic response near critical speeds can be measured. High sensitivity, which is required for a successful balancing process, is achieved near the critical speeds, where a single deflection mode shape becomes dominant and is excited by the projection of the imbalance on it. The requirement to rotate the machine at high speeds is an obstacle in many cases where it is impossible to perform measurements at high speeds, due to harsh conditions such as high temperatures and inaccessibility (e.g., jet engines). This paper proposes a novel balancing method for flexible rotors which does not require the machine to be rotated at high speeds. With this method, the rotor is spun at low speeds, while subjecting it to a set of externally controlled forces. The external forces comprise a set of tuned, response-dependent, parametric excitations and nonlinear stiffness terms. The parametric excitation can isolate any desired mode, while keeping the response directly linked to the imbalance. A software-controlled nonlinear stiffness term limits the response, preventing the rotor from becoming unstable. These forces warrant the sensitivity required to detect the projection of the imbalance on any desired mode without rotating the machine at high speeds. Analytical, numerical and experimental results are shown to validate and demonstrate the method.

  2. Parametric and Non-Parametric Vibration-Based Structural Identification Under Earthquake Excitation

    NASA Astrophysics Data System (ADS)

    Pentaris, Fragkiskos P.; Fouskitakis, George N.

    2014-05-01

    The problem of modal identification in civil structures is of crucial importance, and thus has been receiving increasing attention in recent years. Vibration-based methods are quite promising as they are capable of identifying the structure's global characteristics, they are relatively easy to implement and they tend to be time-effective and less expensive than most alternatives [1]. This paper focuses on the off-line structural/modal identification of civil (concrete) structures subjected to low-level earthquake excitations, under which they remain within their linear operating regime. Earthquakes and their details are recorded and provided by the seismological network of Crete [2], which 'monitors' the broad region of the south Hellenic arc, an active seismic region which functions as a natural laboratory for earthquake engineering of this kind. A sufficient number of seismic events are analyzed in order to reveal the modal characteristics of the structures under study, which consist of the two concrete buildings of the School of Applied Sciences, Technological Education Institute of Crete, located in Chania, Crete, Hellas. Both buildings are equipped with high-sensitivity and accuracy seismographs - providing acceleration measurements - established at the basement (structure's foundation), presently considered as the ground's acceleration (excitation), and at all levels (ground floor, 1st floor, 2nd floor and terrace). Further details regarding the instrumentation setup and data acquisition may be found in [3]. The present study invokes stochastic, both non-parametric (frequency-based) and parametric methods for structural/modal identification (natural frequencies and/or damping ratios). Non-parametric methods include Welch-based spectrum and Frequency Response Function (FRF) estimation, while parametric methods include AutoRegressive (AR), AutoRegressive with eXogenous input (ARX) and AutoRegressive Moving-Average with eXogenous input (ARMAX) models [4, 5]. 
Preliminary results indicate that parametric methods are capable of sufficiently providing the structural/modal characteristics such as natural frequencies and damping ratios. The study also aims - at a further level of investigation - to provide a reliable statistically-based methodology for structural health monitoring after major seismic events which potentially cause harmful consequences in structures. Acknowledgments This work was supported by the State Scholarships Foundation of Hellas. References [1] J. S. Sakellariou and S. D. Fassois, "Stochastic output error vibration-based damage detection and assessment in structures under earthquake excitation," Journal of Sound and Vibration, vol. 297, pp. 1048-1067, 2006. [2] G. Hloupis, I. Papadopoulos, J. P. Makris, and F. Vallianatos, "The South Aegean seismological network - HSNC," Adv. Geosci., vol. 34, pp. 15-21, 2013. [3] F. P. Pentaris, J. Stonham, and J. P. Makris, "A review of the state-of-the-art of wireless SHM systems and an experimental set-up towards an improved design," presented at the EUROCON, 2013 IEEE, Zagreb, 2013. [4] S. D. Fassois, "Parametric Identification of Vibrating Structures," in Encyclopedia of Vibration, S. G. Braun, D. J. Ewins, and S. S. Rao, Eds., ed London: Academic Press, London, 2001. [5] S. D. Fassois and J. S. Sakellariou, "Time-series methods for fault detection and identification in vibrating structures," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 365, pp. 411-448, February 15 2007.
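
    The parametric (AR-type) identification route can be illustrated in its simplest form: fit an AR(2) model to a noise-driven single-mode response and read the natural frequency and damping ratio off the model poles. The sketch below uses synthetic data with invented modal parameters, far simpler than the ARX/ARMAX machinery of the study:

```python
import numpy as np

# Simulate a noise-driven single-mode structure as an AR(2) process whose
# poles encode the natural frequency and damping ratio.
fs = 100.0                             # sampling rate, Hz
fn, zeta = 5.0, 0.02                   # true modal parameters
wn = 2.0 * np.pi * fn
r = np.exp(-zeta * wn / fs)            # discrete pole radius
wd = wn * np.sqrt(1.0 - zeta**2) / fs  # discrete pole angle, rad/sample
a1, a2 = 2.0 * r * np.cos(wd), -r**2

rng = np.random.default_rng(0)
e = rng.standard_normal(20_000)
xr = np.zeros(e.size)
for n in range(2, e.size):
    xr[n] = a1 * xr[n - 1] + a2 * xr[n - 2] + e[n]

# Least-squares AR(2) fit, then model poles -> modal parameters.
A = np.column_stack([xr[1:-1], xr[:-2]])
coef, *_ = np.linalg.lstsq(A, xr[2:], rcond=None)
poles = np.roots([1.0, -coef[0], -coef[1]])
p = poles[np.argmax(poles.imag)]       # upper-half-plane pole
f_est = np.angle(p) * fs / (2.0 * np.pi)
zeta_est = -np.log(np.abs(p)) * fs / (2.0 * np.pi * f_est)
print(f_est, zeta_est)
```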

  3. A Parametric Analysis of the Techniques Used for the Recovery and Evacuation of Battle Damaged Tracked Vehicles.

    DTIC Science & Technology

    1980-06-01

    problems, a parametric model was built which uses the TI-59 programmable calculator as its vehicle. Although the calculator has many disadvantages for...previous experience using the TI-59 programmable calculator. For example, explicit instructions for reading cards into the memory set will not be given

  4. The parametric resonance—from LEGO Mindstorms to cold atoms

    NASA Astrophysics Data System (ADS)

    Kawalec, Tomasz; Sierant, Aleksandra

    2017-07-01

    We show an experimental setup based on a popular LEGO Mindstorms set, allowing us to both observe and investigate the parametric resonance phenomenon. The presented method is simple but covers a variety of student activities like embedded software development, conducting measurements, data collection and analysis. It may be used during science shows, as part of student projects and to illustrate the parametric resonance in mechanics or even quantum physics, during lectures or classes. The parametrically driven LEGO pendulum gains energy in a spectacular way, increasing its amplitude from 10° to about 100° within a few tens of seconds. We also provide a short description of a wireless absolute orientation sensor that may be used in quantitative analysis of driven or free pendulum movement.
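
    The growth the authors observe can be reproduced numerically with the standard model of a pendulum whose pivot oscillates vertically, driven near twice the natural frequency. The sketch below uses invented parameters (a 1 m pendulum, 10 cm pivot amplitude, light damping), not the LEGO setup's values; the amplitude grows from a small seed and saturates as the nonlinearity detunes the resonance:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0                  # gravity, pendulum length (m)
w0 = np.sqrt(g / L)               # natural frequency, ~3.13 rad/s
a, wd = 0.10, 2.0 * w0            # pivot amplitude 10 cm, drive at 2*w0
gamma = 0.05                      # light viscous damping

def rhs(t, y):
    theta, omega = y
    # A vertically oscillating pivot modulates the effective gravity.
    g_eff = g + a * wd**2 * np.cos(wd * t)
    return [omega, -gamma * omega - (g_eff / L) * np.sin(theta)]

sol = solve_ivp(rhs, (0.0, 30.0), [0.01, 0.0], max_step=0.01, rtol=1e-8)
theta = sol.y[0]
print(np.rad2deg(np.abs(theta[sol.t < 5.0]).max()),  # still small early on
      np.rad2deg(np.abs(theta).max()))               # grows to tens of degrees
```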

  5. SOFIA: a flexible source finder for 3D spectral line data

    NASA Astrophysics Data System (ADS)

    Serra, Paolo; Westmeier, Tobias; Giese, Nadine; Jurek, Russell; Flöer, Lars; Popping, Attila; Winkel, Benjamin; van der Hulst, Thijs; Meyer, Martin; Koribalski, Bärbel S.; Staveley-Smith, Lister; Courtois, Hélène

    2015-04-01

    We introduce SOFIA, a flexible software application for the detection and parametrization of sources in 3D spectral line data sets. SOFIA combines for the first time in a single piece of software a set of new source-finding and parametrization algorithms developed on the way to future H I surveys with ASKAP (WALLABY, DINGO) and APERTIF. It is designed to enable the general use of these new algorithms by the community on a broad range of data sets. The key advantages of SOFIA are the ability to: search for line emission on multiple scales to detect 3D sources in a complete and reliable way, taking into account noise level variations and the presence of artefacts in a data cube; estimate the reliability of individual detections; look for signal in arbitrarily large data cubes using a catalogue of 3D coordinates as a prior; provide a wide range of source parameters and output products which facilitate further analysis by the user. We highlight the modularity of SOFIA, which makes it a flexible package allowing users to select and apply only the algorithms useful for their data and science questions. This modularity makes it also possible to easily expand SOFIA in order to include additional methods as they become available. The full SOFIA distribution, including a dedicated graphical user interface, is publicly available for download.

  6. Parametric modelling of cost data in medical studies.

    PubMed

    Nixon, R M; Thompson, S G

    2004-04-30

    The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
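
    As a minimal illustration of such parametric modelling (not the paper's analysis; the data here are synthetic), candidate skewed distributions can be fitted by maximum likelihood with scipy and compared through their implied mean costs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic right-skewed 'cost' data; true mean = exp(0.125) ~ 1.133.
costs = rng.lognormal(mean=0.0, sigma=0.5, size=2000)

# Fit candidate parametric models by maximum likelihood (location fixed at 0).
s, _, scale_ln = stats.lognorm.fit(costs, floc=0)
shape_g, _, scale_g = stats.gamma.fit(costs, floc=0)

mean_ln = scale_ln * np.exp(s**2 / 2.0)  # lognormal mean: exp(mu + sigma^2/2)
mean_g = shape_g * scale_g               # gamma mean: shape * scale
print(mean_ln, mean_g, costs.mean())
```

A fuller analysis would compare the fits (e.g. by likelihood or AIC) and attach bootstrap or MCMC intervals to the mean, as the paper describes.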

  7. Practical statistics in pain research.

    PubMed

    Kim, Tae Kyun

    2017-10-01

Pain is subjective, while statistics related to pain research are objective. This review was written to help researchers involved in pain research make statistical decisions. The main issues relate to the levels of the scales that are often used in pain research, the choice between parametric and non-parametric statistical methods, and problems which arise from repeated measurements. In the field of pain research, parametric statistics have often been applied erroneously. This is closely related to the scales of the data and to repeated measurements. The levels of scales include nominal, ordinal, interval, and ratio scales, and the level of a scale affects the choice between parametric and non-parametric methods. In pain research, the most frequently used pain assessment scale is the ordinal scale, which includes the visual analogue scale (VAS). Another view, however, considers the VAS to be an interval or ratio scale, so that the use of parametric statistics can be accepted in practice in some cases. Repeated measurements of the same subjects always complicate the statistics: the measurements inevitably have correlations with each other, which precludes the application of one-way ANOVA, in which independence between the measurements is necessary. Repeated-measures ANOVA (RM ANOVA), however, permits comparison between the correlated measurements as long as the sphericity assumption is satisfied. In conclusion, parametric statistical methods should be used only when the assumptions of parametric statistics, such as normality and sphericity, are established.
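
The parametric/non-parametric choice can be illustrated with a toy paired design. The snippet below runs a paired t-test and its non-parametric counterpart, the Wilcoxon signed-rank test, on simulated VAS-like scores, along with a Shapiro-Wilk check of the normality assumption behind the parametric choice. All numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical VAS pain scores (0-100) before and after an intervention.
before = np.clip(rng.normal(65, 12, size=30), 0, 100)
after = np.clip(before - rng.normal(15, 10, size=30), 0, 100)

# Parametric route: paired t-test (assumes the differences are ~normal).
t_stat, p_t = stats.ttest_rel(before, after)

# Non-parametric route: Wilcoxon signed-rank test (safe for ordinal scales).
w_stat, p_w = stats.wilcoxon(before, after)

# Check the normality assumption behind the parametric choice.
_, p_norm = stats.shapiro(before - after)
```

When `p_norm` is small (normality doubtful), the Wilcoxon p-value is the defensible one to report; when the assumption holds, the t-test typically has more power.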

  8. Prediction of forest fires occurrences with area-level Poisson mixed models.

    PubMed

    Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

    2015-05-01

The number of fires in forest areas of Galicia (north-west Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest area. This work employs area-level Poisson mixed models for analysing real data on forest-area fires. A parametric bootstrap method is applied to estimate the mean squared errors of the fire predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia. Copyright © 2015 Elsevier Ltd. All rights reserved.
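
A parametric bootstrap for the mean squared error of a count predictor can be sketched with nothing more than the standard library. The toy below treats a single "forest area" with a plug-in Poisson mean predictor; it illustrates the bootstrap idea only, not the area-level mixed model of the paper, and the counts are simulated.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm for Poisson sampling (fine for modest lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(7)
# Hypothetical fire counts for one forest area over 20 summers.
counts = [poisson(4.0, rng) for _ in range(20)]
lam_hat = sum(counts) / len(counts)   # plug-in predictor of the mean count

# Parametric bootstrap estimate of the predictor's MSE:
# resample counts from Poisson(lam_hat) and track the squared error.
B = 2000
sq_errs = []
for _ in range(B):
    boot = [poisson(lam_hat, rng) for _ in range(len(counts))]
    lam_boot = sum(boot) / len(boot)
    sq_errs.append((lam_boot - lam_hat) ** 2)
mse_hat = sum(sq_errs) / B
```

For this simple estimator the bootstrap MSE should land near the analytic value lam_hat/n, which provides a sanity check on the procedure.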

  9. Parameter identifiability of linear dynamical systems

    NASA Technical Reports Server (NTRS)

    Glover, K.; Willems, J. C.

    1974-01-01

It is assumed that the system matrices of a stationary linear dynamical system are parametrized by a set of unknown parameters. The question considered here is: when can such a set of unknown parameters be identified from the observed data? Conditions for the local identifiability of a parametrization are derived in three situations: (1) when input/output observations are made, (2) when there exists an unknown feedback matrix in the system, and (3) when the system is assumed to be driven by white noise and only output observations are made. A sufficient condition for global identifiability is also derived.
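
A numerical check of local identifiability can be sketched by testing whether the Jacobian of the input/output map (here represented by the first few Markov parameters C A^k B) with respect to the parameters has full column rank. The second-order system and its parametrization below are invented for illustration and are not from the paper.

```python
import numpy as np

def markov_params(theta, n_terms=4):
    """Markov parameters C A^k B of a 2nd-order system in controllable
    canonical form, parametrized by theta = (a0, a1) (an illustrative
    parametrization, not the paper's)."""
    a0, a1 = theta
    A = np.array([[0.0, 1.0], [-a0, -a1]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    out, Ak = [], np.eye(2)
    for _ in range(n_terms):
        out.append(float(C @ Ak @ B))
        Ak = Ak @ A
    return np.array(out)

def locally_identifiable(theta, eps=1e-6):
    """Locally identifiable (to first order) iff the Jacobian of the
    Markov parameters w.r.t. theta has full column rank."""
    theta = np.asarray(theta, float)
    base = markov_params(theta)
    J = np.empty((base.size, theta.size))
    for j in range(theta.size):
        pert = theta.copy()
        pert[j] += eps
        J[:, j] = (markov_params(pert) - base) / eps
    return np.linalg.matrix_rank(J) == theta.size

ok = locally_identifiable([2.0, 3.0])
```

A rank-deficient Jacobian would signal a direction in parameter space along which the observed input/output behaviour does not change, i.e. a locally unidentifiable parametrization.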

  10. Sgr A* Emission Parametrizations from GRMHD Simulations

    NASA Astrophysics Data System (ADS)

    Anantua, Richard; Ressler, Sean; Quataert, Eliot

    2018-06-01

    Galactic Center emission near the vicinity of the central black hole, Sagittarius (Sgr) A*, is modeled using parametrizations involving the electron temperature, which is found from general relativistic magnetohydrodynamic (GRMHD) simulations to be highest in the disk-outflow corona. Jet-motivated prescriptions generalizing equipartition of particle and magnetic energies, e.g., by scaling relativistic electron energy density to powers of the magnetic field strength, are also introduced. GRMHD jet (or outflow)/accretion disk/black hole (JAB) simulation postprocessing codes IBOTHROS and GRMONTY are employed in the calculation of images and spectra. Various parametric models reproduce spectral and morphological features, such as the sub-mm spectral bump in electron temperature models and asymmetric photon rings in equipartition-based models. The Event Horizon Telescope (EHT) will provide unprecedentedly high-resolution 230+ GHz observations of the "shadow" around Sgr A*'s supermassive black hole, which the synthetic models presented here will reverse-engineer. Both electron temperature and equipartition-based models can be constructed to be compatible with EHT size constraints for the emitting region of Sgr A*. This program sets the groundwork for devising a unified emission parametrization flexible enough to model disk, corona and outflow/jet regions with a small set of parameters including electron heating fraction and plasma beta.

  11. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  12. Parametrically excited helicopter ground resonance dynamics with high blade asymmetries

    NASA Astrophysics Data System (ADS)

    Sanches, L.; Michon, G.; Berlioz, A.; Alazard, D.

    2012-07-01

The present work is aimed at verifying the influence of high asymmetries in the variation of the in-plane lead-lag stiffness of one blade on the ground resonance phenomenon in helicopters. The periodic equations of motion are analyzed using Floquet theory and the boundaries of the instability regions are predicted. The stability chart obtained as a function of the asymmetry parameters and rotor speed reveals a complex evolution of critical zones and the existence of bifurcation points at low rotor speed values. Additionally, it is known that, when treated as parametric excitations, periodic terms may cause parametric resonances in dynamic systems, some of which can become unstable. Therefore, the helicopter is later considered as a parametrically excited system and the equations are treated analytically by applying the method of multiple scales (MMS). A stability analysis is used to verify the existence of unstable parametric resonances with first- and second-order sets of equations. The results are compared and validated against those obtained by Floquet theory. Moreover, an explanation is given for the presence of unstable motion at low rotor speeds due to parametric instabilities of the second order.
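
Floquet analysis of a periodic system amounts to computing the monodromy matrix over one period and checking its eigenvalues (the Floquet multipliers). The sketch below does this for the Mathieu equation, a standard toy stand-in for parametrically excited rotor equations; it is not the helicopter model of the paper.

```python
import numpy as np

def monodromy(delta, eps, n_steps=2000):
    """Monodromy matrix of the Mathieu equation x'' + (delta + eps*cos t) x = 0
    over one period T = 2*pi, via fixed-step RK4 on the two unit initial
    conditions."""
    T = 2.0 * np.pi
    h = T / n_steps

    def f(t, y):
        return np.array([y[1], -(delta + eps * np.cos(t)) * y[0]])

    M = np.eye(2)
    for col in range(2):
        y = np.eye(2)[:, col].copy()
        t = 0.0
        for _ in range(n_steps):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        M[:, col] = y
    return M

def is_stable(delta, eps):
    # Stable iff every Floquet multiplier lies on or inside the unit circle.
    mults = np.linalg.eigvals(monodromy(delta, eps))
    return bool(np.max(np.abs(mults)) <= 1.0 + 1e-6)

stable_free = is_stable(1.0, 0.0)     # no parametric excitation: stable
stable_tongue = is_stable(0.25, 0.2)  # inside the principal resonance tongue
```

Sweeping `delta` and `eps` over a grid and recording `is_stable` reproduces the familiar instability tongues of the stability chart mentioned in the abstract.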

  13. Parametric pendulum based wave energy converter

    NASA Astrophysics Data System (ADS)

    Yurchenko, Daniil; Alevras, Panagiotis

    2018-01-01

The paper investigates the dynamics of a novel wave energy converter based on the parametrically excited pendulum. The concept of the parametric pendulum developed here allows the influence of the gravity force to be reduced, thereby significantly improving the device performance in a regular sea state, which could not be achieved in the earlier proposed original point-absorber design. The suggested design of the wave energy converter achieves dominant rotational motion without any additional mechanisms, such as a gearbox, or any active control involvement. The presented numerical results of deterministic and stochastic modeling clearly reflect the advantage of the proposed design. A set of experimental results confirms the numerical findings and validates the new design of the parametric pendulum based wave energy converter. The power harvesting potential of the novel device is also presented.

  14. Localization and identification of structural nonlinearities using cascaded optimization and neural networks

    NASA Astrophysics Data System (ADS)

    Koyuncu, A.; Cigeroglu, E.; Özgüven, H. N.

    2017-10-01

In this study, a new approach is proposed for the identification of structural nonlinearities by employing cascaded optimization and neural networks. A linear finite element model of the system and frequency response functions measured at arbitrary locations of the system are used in this approach. Using the finite element model, a training data set is created which appropriately spans the possible nonlinear configuration space of the system. A classification neural network trained on these data sets then localizes and determines the types of all nonlinearities associated with the nonlinear degrees of freedom in the system. A new training data set spanning the parametric space associated with the determined nonlinearities is created to facilitate parametric identification. Utilizing this data set, a feed-forward regression neural network is first trained, which parametrically identifies the classified nonlinearities. The results obtained are then further improved by carrying out an optimization which uses the network-identified values as starting points. Unlike identification methods available in the literature, the proposed approach does not require data collection from the degrees of freedom to which nonlinear elements are attached, and furthermore, it is sufficiently accurate even in the presence of measurement noise. The application of the proposed approach is demonstrated on an example system with nonlinear elements and on a real-life experimental setup with a local nonlinearity.

  15. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.

    PubMed

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-07

Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves (TACs)) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans, each containing 1/8th of the total number of events, were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprising the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min-1 · ml-1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that, at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
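
The indirect route (fit a kinetic model to a reconstructed TAC) can be sketched for the one-tissue compartment model. The toy below generates a noisy tissue TAC from an assumed input function and recovers K1 and k2 by least squares. The input function, frame timing, parameter values and noise level are invented, and the spillover terms used in the paper are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5.0, 16)          # 16 frames over ~5 min
dt = t[1] - t[0]
ca = 10.0 * t * np.exp(-2.0 * t)       # toy arterial input function

def one_tissue(t, K1, k2):
    """Tissue TAC of a one-tissue compartment model:
    C_T = K1 * exp(-k2 t) convolved with the input C_a
    (blood-pool spillover terms omitted for brevity)."""
    irf = np.exp(-k2 * t)
    return K1 * np.convolve(ca, irf)[: t.size] * dt

rng = np.random.default_rng(3)
true_K1, true_k2 = 0.8, 0.4
tac = one_tissue(t, true_K1, true_k2)
noisy = tac + rng.normal(0.0, 0.02, size=t.size)

# Indirect route: voxel-wise least-squares fit of the kinetic model.
(K1_hat, k2_hat), _ = curve_fit(one_tissue, t, noisy, p0=(0.5, 0.5))
```

The direct method of the paper replaces this post-reconstruction fit by folding the same kinetic model into the sinogram-domain objective function, which is why it achieves lower-variance K1 maps.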

  16. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in-vivo studies

    PubMed Central

Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-01-01

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves -TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in-vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans - each containing 1/8th of the total number of events - were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard Ordered Subset Expectation Maximization (OSEM) reconstruction algorithm on one side, and the One-Step Late Maximum a Posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprising the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (mL · min−1 · mL−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that, at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in-vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843

  17. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies

    NASA Astrophysics Data System (ADS)

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-01

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans—each containing 1/8th of the total number of events—were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprising the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min-1 · ml-1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that, at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.

  18. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    NASA Technical Reports Server (NTRS)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
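
The ISLOCE idea, replacing expensive subsystem models with fast approximations and letting a system-level optimizer query only the approximations, can be sketched as follows. A polynomial surrogate stands in for the paper's neural-network approximations, and the "expensive" subsystem is a made-up one-variable function.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_subsystem(x):
    """Stand-in for a costly subsystem simulation (hypothetical)."""
    return np.sin(3.0 * x) + x**2

# Offline phase: sample the subsystem and fit a cheap surrogate.
# (ISLOCE uses neural networks; a polynomial fit is a minimal stand-in.)
xs = np.linspace(-1.0, 1.0, 25)
ys = expensive_subsystem(xs)
surrogate = np.poly1d(np.polyfit(xs, ys, deg=6))

# Online phase: the system-level optimizer queries only the fast surrogate,
# so it can run in the background of a concurrent engineering session.
res = minimize(lambda x: surrogate(x[0]), x0=[0.5], bounds=[(-1.0, 1.0)])
x_opt = res.x[0]
```

The design trade here is the same one the paper evaluates: the surrogate makes each optimizer call cheap at the cost of approximation error, so surrogate accuracy should be checked against fresh subsystem evaluations before the optimum is trusted.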

  19. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The various kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (e.g. PAR, PARMA), and disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) non-parametric models (e.g. bootstrap/kernel-based methods such as k-nearest neighbour (k-NN) and the matched block bootstrap (MABB), and non-parametric disaggregation models), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws; and (iii) hybrid models, which blend parametric and non-parametric models advantageously to model streamflows effectively. Despite these developments over the last four decades, accurate prediction of the storage and critical drought characteristics has remained a persistent challenge for the stochastic modeller. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated on the accuracy with which the water-use characteristics are predicted, which requires a large number of trial simulations and the inspection of many plots and tables. Even so, accurate prediction of the storage and critical drought characteristics may not be ensured. 
In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, the PAR(1) model, and the matched block bootstrap (MABB)) based on the explicit objectives of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components). This is achieved using an efficient evolutionary search based optimization tool, the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps reduce the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and the River Weber in the USA. For both rivers, the proposed GA-based hybrid model (in which both parametric and non-parametric components are explored simultaneously) yields a much better prediction of the storage capacity than the MLE-based hybrid models (in which the hybrid model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales.
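
The parametric component of such a hybrid model can be sketched as a PAR(1)-type generator in which each season's standardized flow depends on the previous season's. The seasonal statistics below are invented, and the non-parametric (MABB) component and the NSGA-II search are omitted; the sketch only checks that the generator reproduces the target seasonal means.

```python
import random
import statistics

def generate_par1(seasonal_mean, seasonal_sd, phi, n_years, rng):
    """Generate synthetic multi-season flows from a PAR(1)-type model:
    the standardized flow z carries lag-1 dependence across seasons."""
    flows, z = [], 0.0
    for _ in range(n_years):
        year = []
        for s in range(len(seasonal_mean)):
            # Innovation variance keeps z stationary with unit variance.
            z = phi[s] * z + (1.0 - phi[s] ** 2) ** 0.5 * rng.gauss(0.0, 1.0)
            year.append(seasonal_mean[s] + seasonal_sd[s] * z)
        flows.append(year)
    return flows

rng = random.Random(11)
mu = [100.0, 300.0, 250.0, 80.0]   # hypothetical seasonal mean flows
sd = [20.0, 60.0, 50.0, 15.0]      # hypothetical seasonal std. deviations
phi = [0.3, 0.5, 0.6, 0.4]         # hypothetical season-to-season correlations
synth = generate_par1(mu, sd, phi, n_years=5000, rng=rng)

# The synthetic series should reproduce the target seasonal means.
season_means = [statistics.mean(y[s] for y in synth) for s in range(4)]
```

In the paper's framework, parameters like `phi` would not be fixed by moment matching alone but searched jointly with the MABB component under the bias and RMSE objectives.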

  20. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates the field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on the predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
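
The bundling idea can be illustrated with a toy one-parameter inversion: prior draws are grouped into bundles of similar parametrizations, and forward-model runs are shared within each bundle, so the likelihood is evaluated per bundle rather than per draw. The forward model, tolerance and bundle count below are invented for illustration and are much simpler than the paper's 3-D flow and transport setting.

```python
import random
import statistics

def forward_model(theta, rng):
    """Hypothetical 'expensive' forward model: a noisy observable of theta."""
    return theta ** 2 + rng.gauss(0.0, 0.1)

rng = random.Random(2)
obs = 0.25                                               # field measurement (toy)
samples = [rng.uniform(0.0, 1.0) for _ in range(2000)]   # prior draws

# Bundling: group similar parametrizations; FM runs are shared per bundle
# instead of being repeated for each of the 2000 individual draws.
n_bundles = 20
bundles = {}
for th in samples:
    key = min(int(th * n_bundles), n_bundles - 1)
    bundles.setdefault(key, []).append(th)

likelihood = {}
for key, members in bundles.items():
    rep = statistics.mean(members)          # bundle representative
    sims = [forward_model(rep, rng) for _ in range(50)]
    # Bundle likelihood: fraction of runs replicating the field datum.
    likelihood[key] = sum(abs(s - obs) < 0.1 for s in sims) / 50

best_bundle = max(likelihood, key=likelihood.get)
```

The saving comes from running the FM per bundle (20 x 50 runs here) instead of per draw; the price is the within-bundle approximation error the paper analyses.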

  1. GRACILE: a comprehensive climatology of atmospheric gravity wave parameters based on satellite limb soundings

    NASA Astrophysics Data System (ADS)

    Ern, Manfred; Trinh, Quang Thai; Preusse, Peter; Gille, John C.; Mlynczak, Martin G.; Russell, James M., III; Riese, Martin

    2018-04-01

    Gravity waves are one of the main drivers of atmospheric dynamics. The spatial resolution of most global atmospheric models, however, is too coarse to properly resolve the small scales of gravity waves, which range from tens to a few thousand kilometers horizontally, and from below 1 km to tens of kilometers vertically. Gravity wave source processes involve even smaller scales. Therefore, general circulation models (GCMs) and chemistry climate models (CCMs) usually parametrize the effect of gravity waves on the global circulation. These parametrizations are very simplified. For this reason, comparisons with global observations of gravity waves are needed for an improvement of parametrizations and an alleviation of model biases. We present a gravity wave climatology based on atmospheric infrared limb emissions observed by satellite (GRACILE). GRACILE is a global data set of gravity wave distributions observed in the stratosphere and the mesosphere by the infrared limb sounding satellite instruments High Resolution Dynamics Limb Sounder (HIRDLS) and Sounding of the Atmosphere using Broadband Emission Radiometry (SABER). Typical distributions (zonal averages and global maps) of gravity wave vertical wavelengths and along-track horizontal wavenumbers are provided, as well as gravity wave temperature variances, potential energies and absolute momentum fluxes. This global data set captures the typical seasonal variations of these parameters, as well as their spatial variations. The GRACILE data set is suitable for scientific studies, and it can serve for comparison with other instruments (ground-based, airborne, or other satellite instruments) and for comparison with gravity wave distributions, both resolved and parametrized, in GCMs and CCMs. The GRACILE data set is available as supplementary data at https://doi.org/10.1594/PANGAEA.879658.

  2. Uncertainty in determining extreme precipitation thresholds

    NASA Astrophysics Data System (ADS)

    Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili

    2013-10-01

Extreme precipitation events are rare and occur mostly on a relatively small, local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study assessed the applicability of non-parametric, parametric, and detrended fluctuation analysis (DFA) methods for determining extreme precipitation thresholds (EPTs), and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial distribution of rainfall. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and to the choice of percentile, making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the aptest description of extreme precipitation by fitting probability distribution functions to extreme precipitation data; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which cannot provide EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of the DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of the daily precipitation further shows that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
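The percentile method's sensitivity to the chosen percentile, noted above, is easy to see in a short sketch (the 1 mm wet-day cutoff and the percentile values here are illustrative assumptions, not choices from the study):

```python
import numpy as np

def percentile_threshold(daily_precip, q=95.0, wet_day_min=1.0):
    """Extreme precipitation threshold as the q-th percentile of wet days.

    A hypothetical helper illustrating the non-parametric percentile
    method: the threshold moves with both the percentile q and the length
    of the record, which is exactly the sensitivity the abstract notes.
    """
    wet = daily_precip[daily_precip >= wet_day_min]
    return float(np.percentile(wet, q))
```

Running this on the same station record with q = 90 versus q = 99 yields substantially different thresholds, so basin-wide consistency requires an external criterion such as the DFA method.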

  3. Quantitative representations of an exaggerated anxiety response in the brain of female spider phobics-a parametric fMRI study.

    PubMed

    Zilverstand, Anna; Sorger, Bettina; Kaemingk, Anita; Goebel, Rainer

    2017-06-01

We employed a novel parametric spider picture set in the context of a parametric fMRI anxiety provocation study, designed to tease apart brain regions involved in threat monitoring from regions representing an exaggerated anxiety response in spider phobics. For the stimulus set, we systematically manipulated perceived proximity of threat by varying a depicted spider's context, size, and posture. All stimuli were validated in a behavioral rating study (phobics n = 20; controls n = 20; all female). An independent group participated in a subsequent fMRI anxiety provocation study (phobics n = 7; controls n = 7; all female), in which we compared a whole-brain categorical to a whole-brain parametric analysis. Results demonstrated that the parametric analysis provided a richer characterization of the functional role of the involved brain networks. In three brain regions-the mid insula, the dorsal anterior cingulate, and the ventrolateral prefrontal cortex-activation was linearly modulated by perceived proximity specifically in the spider phobia group, indicating a quantitative representation of an exaggerated anxiety response. In other regions (e.g., the amygdala), activation was linearly modulated in both groups, suggesting a functional role in threat monitoring. Prefrontal regions, such as dorsolateral prefrontal cortex, were activated during anxiety provocation but did not show a stimulus-dependent linear modulation in either group. The results confirm that brain regions involved in anxiety processing hold a quantitative representation of a pathological anxiety response and more generally suggest that parametric fMRI designs may be a very powerful tool for clinical research in the future, particularly when developing novel brain-based interventions (e.g., neurofeedback training). Hum Brain Mapp 38:3025-3038, 2017. © 2017 Wiley Periodicals, Inc.

  4. Fitting the constitution type Ia supernova data with the redshift-binned parametrization method

    NASA Astrophysics Data System (ADS)

    Huang, Qing-Guo; Li, Miao; Li, Xiao-Dong; Wang, Shuang

    2009-10-01

    In this work, we explore the cosmological consequences of the recently released Constitution sample of 397 Type Ia supernovae (SNIa). By revisiting the Chevallier-Polarski-Linder (CPL) parametrization, we find that, for fitting the Constitution set alone, the behavior of dark energy (DE) significantly deviates from the cosmological constant Λ, where the equation of state (EOS) w and the energy density ρΛ of DE will rapidly decrease along with the increase of redshift z. Inspired by this clue, we separate the redshifts into different bins, and discuss the models of a constant w or a constant ρΛ in each bin, respectively. It is found that for fitting the Constitution set alone, w and ρΛ will also rapidly decrease along with the increase of z, which is consistent with the result of CPL model. Moreover, a step function model in which ρΛ rapidly decreases at redshift z˜0.331 presents a significant improvement (Δχ2=-4.361) over the CPL parametrization, and performs better than other DE models. We also plot the error bars of DE density of this model, and find that this model deviates from the cosmological constant Λ at 68.3% confidence level (CL); this may arise from some biasing systematic errors in the handling of SNIa data, or more interestingly from the nature of DE itself. In addition, for models with same number of redshift bins, a piecewise constant ρΛ model always performs better than a piecewise constant w model; this shows the advantage of using ρΛ, instead of w, to probe the variation of DE.
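The step-function dark-energy model described above can be sketched numerically: for a flat universe, a piecewise-constant dark-energy density changes H(z) beyond the cut redshift and hence the luminosity distances that enter the SNIa χ². The parameter values below (Ω_m, the density step, and z_cut) are illustrative placeholders, not the paper's best-fit numbers:

```python
import numpy as np

def hubble_e(z, omega_m=0.28, rho_step=1.0, z_cut=0.331):
    """Dimensionless H(z)/H0 for a flat universe whose dark-energy density
    is piecewise constant: rho_DE = rho_0 for z < z_cut and rho_0*rho_step
    beyond, mimicking the step-function model discussed above."""
    rho = np.where(z < z_cut, 1.0, rho_step)
    return np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m) * rho)

def lum_dist(z, **kw):
    """Luminosity distance in units of c/H0, by trapezoidal integration
    of the comoving distance integral."""
    zs = np.linspace(0.0, z, 2001)
    f = 1.0 / hubble_e(zs, **kw)
    comoving = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs)))
    return (1.0 + z) * comoving
```

The distance modulus μ = 5 log₁₀(d_L) + const built from `lum_dist` is what would be compared against the Constitution sample in a χ² fit; rho_step = 1 recovers ΛCDM.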

  5. Fitting the constitution type Ia supernova data with the redshift-binned parametrization method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang Qingguo; Kavli Institute for Theoretical Physics China, Chinese Academy of Sciences, Beijing 100190; Li Miao

    2009-10-15

In this work, we explore the cosmological consequences of the recently released Constitution sample of 397 Type Ia supernovae (SNIa). By revisiting the Chevallier-Polarski-Linder (CPL) parametrization, we find that, for fitting the Constitution set alone, the behavior of dark energy (DE) significantly deviates from the cosmological constant Λ, where the equation of state (EOS) w and the energy density ρ_Λ of DE will rapidly decrease along with the increase of redshift z. Inspired by this clue, we separate the redshifts into different bins, and discuss the models of a constant w or a constant ρ_Λ in each bin, respectively. It is found that for fitting the Constitution set alone, w and ρ_Λ will also rapidly decrease along with the increase of z, which is consistent with the result of CPL model. Moreover, a step function model in which ρ_Λ rapidly decreases at redshift z≈0.331 presents a significant improvement (Δχ²=-4.361) over the CPL parametrization, and performs better than other DE models. We also plot the error bars of DE density of this model, and find that this model deviates from the cosmological constant Λ at 68.3% confidence level (CL); this may arise from some biasing systematic errors in the handling of SNIa data, or more interestingly from the nature of DE itself. In addition, for models with same number of redshift bins, a piecewise constant ρ_Λ model always performs better than a piecewise constant w model; this shows the advantage of using ρ_Λ, instead of w, to probe the variation of DE.

  6. Tropical cyclone induced asymmetry of sea level surge and fall and its presentation in a storm surge model with parametric wind fields

    NASA Astrophysics Data System (ADS)

    Peng, Machuan; Xie, Lian; Pietrafesa, Leonard J.

The asymmetry of tropical cyclone induced maximum coastal sea level rise (positive surge) and fall (negative surge) is studied using a three-dimensional storm surge model. It is found that the negative surge induced by offshore winds is more sensitive to changes in wind speed and direction than the positive surge induced by onshore winds. As a result, negative surge is inherently more difficult to forecast than positive surge, given the uncertainty in tropical storm wind forecasts. The asymmetry of negative and positive surge under parametric wind forcing is more apparent in shallow water regions. For tropical cyclones with fixed central pressure, the surge asymmetry increases with decreasing storm translation speed. For those with the same translation speed, a weaker tropical cyclone is expected to have a higher AI (asymmetry index) value, though the maximum surge and fall it induces are smaller. With fixed RMW (radius of maximum wind), the relationship between central pressure and AI is heterogeneous and depends on the value of RMW. A tropical cyclone's wind inflow angle can also affect surge asymmetry. A set of idealized cases as well as two historic tropical cyclones are used to illustrate the surge asymmetry.

  7. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was impacted only by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
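In their simplest form, the parametric and non-parametric limits compared above reduce to the following sketch (the robust method and the outlier handling of real laboratory guidelines are omitted):

```python
import numpy as np

def reference_interval(x, method="parametric"):
    """Central 95% reference interval.

    'parametric' assumes Gaussian data (mean +/- 1.96 SD);
    'non-parametric' uses the 2.5th and 97.5th empirical percentiles.
    A minimal illustration of the two approaches, not a full guideline
    implementation.
    """
    x = np.asarray(x, dtype=float)
    if method == "parametric":
        m, s = x.mean(), x.std(ddof=1)
        return m - 1.96 * s, m + 1.96 * s
    lo, hi = np.percentile(x, [2.5, 97.5])
    return float(lo), float(hi)
```

On Gaussian data the two agree closely, which is why the study recommends transforming to a Gaussian distribution before applying the parametric approach.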

  8. Parametric amplification in MoS2 drum resonator.

    PubMed

    Prasad, Parmeshwar; Arora, Nishta; Naik, A K

    2017-11-30

Parametric amplification is widely used in diverse areas, from optics to electronic circuits, to enhance low-level signals by varying relevant system parameters. Parametric amplification has also been performed in several micro- and nano-resonators, including nano-electromechanical system (NEMS) resonators based on a two-dimensional (2D) material. Here, we report the enhancement of the mechanical response of a MoS2 drum resonator using degenerate parametric amplification. We use parametric pumping to modulate the spring constant of the MoS2 resonator and achieve a 10 dB amplitude gain. We also demonstrate quality factor enhancement in the resonator with parametric amplification. We investigate the effect of cubic nonlinearity on parametric amplification and show that it limits the gain of the mechanical resonator. Amplifying ultra-small displacements at room temperature and understanding the limits of amplification in these devices are key to their practical application.
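The phase-sensitive gain of degenerate parametric amplification can be illustrated with the standard linear-response result for spring-constant pumping at twice the resonance frequency (a textbook sketch, not the authors' model; the phase convention placing the amplified quadrature at φ = π/2 is an assumption, and the cubic nonlinearity that the abstract finds gain-limiting is not modeled):

```python
import numpy as np

def parametric_gain(pump_ratio, phase):
    """Small-signal amplitude gain of an ideal degenerate parametric
    amplifier below threshold. pump_ratio is the pump strength normalized
    to the instability threshold (0 <= p < 1); phase is the signal-pump
    phase. One quadrature is amplified (1/(1-p)), the other deamplified
    (1/(1+p))."""
    p = pump_ratio
    return np.sqrt(np.cos(phase) ** 2 / (1.0 + p) ** 2
                   + np.sin(phase) ** 2 / (1.0 - p) ** 2)
```

In this idealized picture a 10 dB amplitude gain (a factor of about 3.2) would correspond to pumping at roughly 68% of threshold; in the real device the cubic nonlinearity caps the achievable gain below the ideal divergence at threshold.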

  9. Wind Plant Power Optimization through Yaw Control using a Parametric Model for Wake Effects -- A CFD Simulation Study

    DOE PAGES

    Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; ...

    2016-01-01

This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking wake effects into account. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations, the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated from turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.

  10. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test.

    PubMed

    Kerschbamer, Rudolf

    2015-05-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.

  11. A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes

    NASA Astrophysics Data System (ADS)

    Chakraborty, Shankar; Mitra, Ankan

    2018-05-01

    Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.

  12. Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Guinard, S.; Landrieu, L.

    2017-05-01

We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field (CRF) classifier in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly supervised classifier to produce a higher-confidence data term. We demonstrate the improvement provided by our method on two publicly available large-scale data sets.

  13. Incorporating parametric uncertainty into population viability analysis models

    USGS Publications Warehouse

    McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.

    2011-01-01

Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored or discarded in model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process in which parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop over time steps. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulations that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
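The two-loop structure described above (parameter draws in the replication loop, environmental noise in the yearly loop) can be sketched as follows. All demographic numbers here are illustrative placeholders, not the piping-plover estimates, and the simple multiplicative growth model is an assumption:

```python
import numpy as np

def pva_extinction_risk(lam_hat, lam_se, sigma_env, n0=50, years=50,
                        n_rep=2000, include_param_uncertainty=True, seed=1):
    """Two-step PVA simulation: one growth-rate draw per replicate
    (parametric uncertainty), lognormal environmental noise per year
    (temporal variance). Returns the fraction of replicates that fall
    below one individual."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_rep):
        # replication loop: sample the uncertain mean growth rate once
        lam = rng.normal(lam_hat, lam_se) if include_param_uncertainty else lam_hat
        n = float(n0)
        for _ in range(years):
            # time-step loop: temporal (environmental) variance
            n *= lam * np.exp(rng.normal(0.0, sigma_env))
            if n < 1.0:
                extinct += 1
                break
    return extinct / n_rep
```

Comparing runs with and without the parameter draw reproduces the qualitative finding above: including parametric uncertainty inflates the estimated extinction risk.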

  14. A Non-Parametric Approach for the Activation Detection of Block Design fMRI Simulated Data Using Self-Organizing Maps and Support Vector Machine.

    PubMed

    Bahrami, Sheyda; Shamsi, Mousa

    2017-01-01

Functional magnetic resonance imaging (fMRI) is a popular method for probing the functional organization of the brain using hemodynamic responses. In this method, volume images of the entire brain are obtained with very good spatial resolution but low temporal resolution; the resulting data sets are high-dimensional, which poses a challenge for classification algorithms. In this work, we combine a support vector machine (SVM) with a self-organizing map (SOM) to obtain a feature-based classification: the SOM is used for feature extraction and for labeling the data sets, and a linear-kernel SVM is then used for detecting the active areas. The SOM has two major advantages: (i) it reduces the dimensionality of the data sets, lowering the computational complexity, and (ii) it is useful for identifying brain regions with small onset differences in their hemodynamic responses. Our non-parametric model is compared with parametric and non-parametric methods. We use simulated fMRI data sets with block design inputs and consider a contrast-to-noise ratio (CNR) of 0.6 for the simulated data sets; the simulated fMRI data set has 1-4% contrast in active areas. The accuracy of our proposed method is 93.63% and the error rate is 6.37%.

  15. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed learning matrix of continuous spectra. The parametric method, in turn, uses an ANN to filter out the baseline; previous studies have demonstrated that this is one of the most effective methods for baseline removal. Both methods were evaluated using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics, such as the correlation coefficient, the chi-square value, and the goodness-of-fit coefficient, were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the ANN-based one in terms of both performance and simplicity. © The Author(s) 2016.
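The PCA-based step can be sketched as follows: the leading principal components of a library of baseline-only spectra form a basis, and the baseline of a new spectrum is its least-squares combination of that basis. This is a minimal sketch under the assumption of a baseline-only learning matrix; the paper's full pipeline (and fitting restricted to baseline-dominated regions) is not reproduced:

```python
import numpy as np

def pca_baseline(spectrum, baseline_library, n_components=3):
    """Estimate a continuous baseline as a least-squares combination of a
    constant plus the leading principal components of a library of
    baseline-only spectra (rows of baseline_library)."""
    lib = baseline_library - baseline_library.mean(axis=0)
    # principal directions of the baseline library via SVD
    _, _, vt = np.linalg.svd(lib, full_matrices=False)
    basis = np.vstack([np.ones(spectrum.size), vt[:n_components]])
    coef, *_ = np.linalg.lstsq(basis.T, spectrum, rcond=None)
    return basis.T @ coef  # estimated baseline
```

Subtracting the returned estimate leaves narrow spectral features largely intact because they project weakly onto the smooth baseline components.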

  16. Relativistic force field: parametric computations of proton-proton coupling constants in (1)H NMR spectra.

    PubMed

    Kutateladze, Andrei G; Mukhina, Olga A

    2014-09-05

    Spin-spin coupling constants in (1)H NMR carry a wealth of structural information and offer a powerful tool for deciphering molecular structures. However, accurate ab initio or DFT calculations of spin-spin coupling constants have been very challenging and expensive. Scaling of (easy) Fermi contacts, fc, especially in the context of recent findings by Bally and Rablen (Bally, T.; Rablen, P. R. J. Org. Chem. 2011, 76, 4818), offers a framework for achieving practical evaluation of spin-spin coupling constants. We report a faster and more precise parametrization approach utilizing a new basis set for hydrogen atoms optimized in conjunction with (i) inexpensive B3LYP/6-31G(d) molecular geometries, (ii) inexpensive 4-31G basis set for carbon atoms in fc calculations, and (iii) individual parametrization for different atom types/hybridizations, not unlike a force field in molecular mechanics, but designed for the fc's. With the training set of 608 experimental constants we achieved rmsd <0.19 Hz. The methodology performs very well as we illustrate with a set of complex organic natural products, including strychnine (rmsd 0.19 Hz), morphine (rmsd 0.24 Hz), etc. This precision is achieved with much shorter computational times: accurate spin-spin coupling constants for the two conformers of strychnine were computed in parallel on two 16-core nodes of a Linux cluster within 10 min.
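The core of the scaled-Fermi-contact idea above is an empirical linear map from cheap computed fc values to experimental J couplings. A minimal sketch with a single global slope and intercept (the actual method parametrizes per atom type/hybridization; all data below are synthetic):

```python
import numpy as np

def fit_fc_scaling(fc_computed, j_experimental):
    """Least-squares slope and intercept mapping computed Fermi contacts
    to experimental J couplings, plus the resulting rmsd of the fit."""
    a = np.vstack([fc_computed, np.ones_like(fc_computed)]).T
    (slope, intercept), *_ = np.linalg.lstsq(a, j_experimental, rcond=None)
    fitted = a @ np.array([slope, intercept])
    rmsd = float(np.sqrt(np.mean((fitted - j_experimental) ** 2)))
    return float(slope), float(intercept), rmsd
```

In the paper's setting the fit is performed over a 608-constant training set, with the rmsd (< 0.19 Hz there) serving as the quality metric.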

  17. Sensitivity enhancement in swept-source optical coherence tomography by parametric balanced detector and amplifier

    PubMed Central

    Kang, Jiqiang; Wei, Xiaoming; Li, Bowen; Wang, Xie; Yu, Luoqin; Tan, Sisi; Jinata, Chandra; Wong, Kenneth K. Y.

    2016-01-01

We proposed a sensitivity-enhancement method for interference-based signal detection and applied it to a swept-source optical coherence tomography (SS-OCT) system through an all-fiber optical parametric amplifier (FOPA) and a parametric balanced detector (BD). The parametric BD was realized by combining the signal and the phase-conjugated idler band newly generated through the FOPA, specifically by superimposing these two bands at a photodetector. The sensitivity enhancement by the FOPA and the parametric BD in SS-OCT was demonstrated experimentally. The results show that SS-OCT with FOPA and SS-OCT with parametric BD can provide more than 9 dB and 12 dB sensitivity improvement, respectively, when compared with conventional SS-OCT over a spectral bandwidth spanning 76 nm. To further verify their sensitivity enhancement, a bio-sample imaging experiment was conducted on loach eyes with the conventional SS-OCT setup and with SS-OCT with FOPA and parametric BD at different illumination power levels. All these results proved that using FOPA and parametric BD can significantly improve the sensitivity of SS-OCT systems. PMID:27446655

  18. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  19. Improved estimation of parametric images of cerebral glucose metabolic rate from dynamic FDG-PET using volume-wise principal component analysis

    NASA Astrophysics Data System (ADS)

    Dai, Xiaoqian; Tian, Jie; Chen, Zhe

    2010-03-01

Parametric images can represent both the spatial distribution and the quantification of the biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models, with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs), which propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, the dynamic PET data are first pre-transformed to standardize the noise variance, since PCA is a data-driven technique and cannot itself separate signal from noise. Second, volume-wise PCA is applied to the PET data: the signal is mostly represented by the first few principal components (PCs), while the noise is left in the subsequent PCs. Noise-reduced data are then obtained from the first few PCs by applying 'inverse PCA', and are transformed back according to the pre-transformation used in the first step to maintain the scale of the original data set. Finally, the new data set is used to generate parametric images using the LLS estimation method. Compared with other noise-removal methods, the proposed method achieves high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG-PET study.
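The truncate-and-invert PCA step described above can be sketched as follows (a minimal version that assumes the data are already noise-normalized, i.e. the abstract's pre-transformation has been applied; frame/voxel sizes are arbitrary):

```python
import numpy as np

def pca_denoise(dyn_data, n_keep=3):
    """Keep the first n_keep principal components of dynamic frames and
    apply 'inverse PCA'. dyn_data has shape (n_frames, n_voxels)."""
    mean = dyn_data.mean(axis=0)
    u, s, vt = np.linalg.svd(dyn_data - mean, full_matrices=False)
    s = s.copy()
    s[n_keep:] = 0.0  # drop the noise-dominated components
    return u @ np.diag(s) @ vt + mean
```

The denoised TTACs would then be fed to the LLS compartment-model fit to produce the parametric images.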

  20. Unemployment and subsequent depression: A mediation analysis using the parametric G-formula.

    PubMed

    Bijlsma, Maarten J; Tarkiainen, Lasse; Myrskylä, Mikko; Martikainen, Pekka

    2017-12-01

The effects of unemployment on depression are difficult to establish because of confounding and limited understanding of the mechanisms at the population level. In particular, due to longitudinal interdependencies between exposures, mediators and outcomes, intermediate confounding is an obstacle for mediation analyses. Using longitudinal Finnish register data on socio-economic characteristics and medication purchases, we extracted individuals who entered the labor market between ages 16 and 25 in the period 1996 to 2001 and followed them until the year 2007 (n = 42,172). With the parametric G-formula we estimated the population-averaged effect on first antidepressant purchase of a simulated intervention which set all unemployed person-years to employed. In the data, 74% of person-years were employed and 8% unemployed, the rest belonging to studying or other status. In the intervention scenario, employment rose to 85% and the hazard of first antidepressant purchase decreased by 7.6%. Of this reduction 61% was mediated, operating primarily through changes in income and household status, while mediation through other health conditions was negligible. These effects were negligible for women and particularly prominent among less educated men. By taking complex interdependencies into account in a framework of observed repeated measures data, we found that eradicating unemployment raises income levels, promotes family formation, and thereby reduces antidepressant consumption at the population level. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Parametric modeling studies of turbulent non-premixed jet flames with thin reaction zones

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng

    2013-11-01

The Sydney piloted jet flame series (Flames L, B, and M) features thinner reaction zones and hence imposes greater challenges to modeling than the Sandia piloted jet flames (Flames D, E, and F). Recently, the Sydney flames have received renewed interest due to these challenges, and several new modeling efforts have emerged. However, no systematic parametric modeling studies have been reported for the Sydney flames. A large set of modeling computations of the Sydney flames is presented here, using the coupled large eddy simulation (LES)/probability density function (PDF) method. Parametric studies are performed to gain insight into the model performance, its sensitivity, and the effect of numerics.

  2. Yadage and Packtivity - analysis preservation using parametrized workflows

    NASA Astrophysics Data System (ADS)

    Cranmer, Kyle; Heinrich, Lukas

    2017-10-01

    Preserving data analyses produced by the collaborations at LHC in a parametrized fashion is crucial in order to maintain reproducibility and re-usability. We argue for a declarative description in terms of individual processing steps - “packtivities” - linked through a dynamic directed acyclic graph (DAG) and present an initial set of JSON schemas for such a description and an implementation - “yadage” - capable of executing workflows of analysis preserved via Linux containers.

  3. SEC sensor parametric test and evaluation system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    This system provides the necessary automated hardware required to carry out, in conjunction with the existing 70 mm SEC television camera, the sensor evaluation tests which are described in detail. The Parametric Test Set (PTS) was completed and is used in a semiautomatic data acquisition and control mode to test the development of the 70 mm SEC sensor, WX 32193. Data analysis of raw data is performed on the Princeton IBM 360-91 computer.

  4. 20 mJ, 1 ps Yb:YAG Thin-disk Regenerative Amplifier

    PubMed Central

    Alismail, Ayman; Wang, Haochuan; Brons, Jonathan; Fattahi, Hanieh

    2017-01-01

    This is a report on a 100 W, 20 mJ, 1 ps Yb:YAG thin-disk regenerative amplifier. A homemade Yb:YAG thin-disk, Kerr-lens mode-locked oscillator with turn-key performance and microjoule-level pulse energy is used to seed the regenerative chirped-pulse amplifier. The amplifier is placed in airtight housing. It operates at room temperature and exhibits stable operation at a 5 kHz repetition rate, with a pulse-to-pulse stability less than 1%. By employing a 1.5 mm-thick beta barium borate crystal, the frequency of the laser output is doubled to 515 nm, with an average power of 70 W, which corresponds to an optical-to-optical efficiency of 70%. This superior performance makes the system an attractive pump source for optical parametric chirped-pulse amplifiers in the near-infrared and mid-infrared spectral range. Combining the turn-key performance and the superior stability of the regenerative amplifier, the system facilitates the generation of a broadband, CEP-stable seed. Providing the seed and pump of the optical parametric chirped-pulse amplification (OPCPA) from one laser source eliminates the demand of active temporal synchronization between these pulses. This work presents a detailed guide to set up and operate a Yb:YAG thin-disk regenerative amplifier, based on chirped-pulse amplification (CPA), as a pump source for an optical parametric chirped-pulse amplifier. PMID:28745636

  5. Summary of the Fourth AIAA CFD Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Rider, Ben; Zickuhr, Tom; Levy, David W.; Brodersen, Olaf P.; Eisfeld, Bernhard; Crippa, Simone; Wahls, Richard A.; hide

    2010-01-01

    Results from the Fourth AIAA Drag Prediction Workshop (DPW-IV) are summarized. The workshop focused on the prediction of both absolute and differential drag levels for wing-body and wing-body-horizontal-tail configurations that are representative of transonic transport aircraft. Numerical calculations are performed using industry-relevant test cases that include lift-specific flight conditions, trimmed drag polars, downwash variations, drag rises and Reynolds-number effects. Drag, lift and pitching moment predictions from numerous Reynolds-Averaged Navier-Stokes computational fluid dynamics methods are presented. Solutions are performed on structured, unstructured and hybrid grid systems. The structured-grid sets include point-matched multi-block meshes and overset grid systems. The unstructured and hybrid grid sets are comprised of tetrahedral, pyramid, prismatic, and hexahedral elements. Effort is made to provide a high-quality and parametrically consistent family of grids for each grid type about each configuration under study. The wing-body-horizontal families are comprised of a coarse, medium and fine grid; an optional extra-fine grid augments several of the grid families. These mesh sequences are utilized to determine asymptotic grid-convergence characteristics of the solution sets, and to estimate grid-converged absolute drag levels of the wing-body-horizontal configuration using Richardson extrapolation.
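
    The Richardson-extrapolation step on a three-grid sequence can be sketched as follows (a minimal illustration; the drag values and refinement ratio below are hypothetical, not workshop data):

```python
import math

def richardson_extrapolate(f_coarse, f_medium, f_fine, r):
    """Estimate the grid-converged value from solutions on a coarse/medium/
    fine grid family with a constant grid refinement ratio r > 1."""
    # Observed order of convergence from the three solutions.
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    # Extrapolated (zero-spacing) estimate from the two finest grids.
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return f_exact, p

# Hypothetical drag counts on coarse/medium/fine grids, refinement ratio 1.5:
f_inf, p = richardson_extrapolate(272.0, 268.0, 266.0, 1.5)
```

    In this example the grid-to-grid differences halve, so the extrapolated drag is 264 counts at an observed order of roughly 1.71.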

  6. Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.

    PubMed

    Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M

    2006-01-01

    The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05), with ligands chosen to be representative of all samarium complexes in the Cambridge Crystallographic Database 2004, CSD, with nitrogen or oxygen directly bonded to the samarium ion, was used as a training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). On the other hand, promethium is an artificial radioactive element with no stable isotope. So far, there are no promethium complex crystallographic structures in CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first sphere of coordination, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present day ab initio/ECP geometries, while being hundreds of times faster.

  7. BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs

    PubMed Central

    Eklund, Anders; Dufort, Paul; Villani, Mattias; LaConte, Stephen

    2014-01-01

    Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm³ brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/). PMID:24672471
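
    The kind of second-level non-parametric test mentioned above can be sketched serially for a single voxel (a toy sign-flipping permutation test; BROCCOLI's GPU implementation is far more elaborate, and the contrast values below are invented):

```python
import random
import statistics

def permutation_test(contrasts, n_perm=10000, seed=0):
    """One-sample permutation test by random sign flipping.

    contrasts: per-subject contrast values at one voxel. Under the
    null hypothesis the contrasts are symmetric about zero, so random
    sign flips generate the null distribution of the mean.
    """
    rng = random.Random(seed)
    observed = abs(statistics.fmean(contrasts))
    exceed = 0
    for _ in range(n_perm):
        flipped = [c if rng.random() < 0.5 else -c for c in contrasts]
        if abs(statistics.fmean(flipped)) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)  # smoothed p-value

p_val = permutation_test([0.8, 1.1, 0.4, 0.9, 1.3, 0.7], n_perm=2000)
```

    A GPU implementation parallelizes this loop over voxels and permutations; correction for multiple comparisons typically uses the maximum statistic across voxels within each permutation.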

  8. Task-based detectability comparison of exponential transformation of free-response operating characteristic (EFROC) curve and channelized Hotelling observer (CHO)

    NASA Astrophysics Data System (ADS)

    Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly

    2016-03-01

    This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems, and it is typically applied to cases where the signal location is known. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low-contrast objects, 3 mm (14 HU), 5 mm (7 HU), 7 mm (5 HU) and 10 mm (3 HU), was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for CHO (AUC) and EFROC (AFE) were plotted with respect to different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.
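
    The non-parametric area-under-the-curve estimation underlying such comparisons can be illustrated in its simplest, location-known form (the Wilcoxon estimator from observer confidence scores; the full CHO and EFROC estimators involve considerably more machinery, and the scores below are invented):

```python
def empirical_auc(signal_scores, noise_scores):
    """Non-parametric (Wilcoxon) AUC estimate: the fraction of
    signal-present/signal-absent score pairs that are ranked
    correctly, counting ties as one half."""
    wins = 0.0
    for s in signal_scores:
        for t in noise_scores:
            if s > t:
                wins += 1.0
            elif s == t:
                wins += 0.5
    return wins / (len(signal_scores) * len(noise_scores))

# Invented confidence scores from a model observer:
auc = empirical_auc([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```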

  9. Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-01-01

    This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2 second intervals sampled 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01, parametric Z score cross-validation false positives averaged 0.26% and ranged from 6.65% to 0%. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives.
The nonparametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives of the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, an adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
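
    The leave-one-out Z-score classification described above can be sketched for a single gray matter pixel (an illustrative reconstruction; the data values, function name and the 1.96 two-tailed critical value are ours, not the Key Institute's):

```python
import math
import statistics

def loo_false_positive_rate(values, z_crit=1.96):
    """Leave-one-out cross-validation of a parametric Z-score test.

    values: current-density values at one pixel across normal subjects.
    Each value is log10-transformed (to approximate Gaussianity),
    withdrawn, Z-scored against the remaining subjects, and counted
    as a false positive when |Z| exceeds the critical value.
    """
    logged = [math.log10(v) for v in values]
    false_pos = 0
    for i, x in enumerate(logged):
        rest = logged[:i] + logged[i + 1:]
        mu = statistics.fmean(rest)
        sd = statistics.stdev(rest)
        if abs((x - mu) / sd) > z_crit:
            false_pos += 1
    return false_pos / len(values)

# Nine unremarkable values plus one gross outlier (10 ** 2.0):
pixel = [10.0 ** e for e in (0.0, 0.1, -0.1, 0.05, -0.05,
                             0.02, -0.02, 0.08, -0.08, 2.0)]
rate = loo_false_positive_rate(pixel)
```

    Only the outlier is flagged in this example, giving a rate of 0.1.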

  10. Application of Group-Level Item Response Models in the Evaluation of Consumer Reports about Health Plan Quality

    ERIC Educational Resources Information Center

    Reise, Steven P.; Meijer, Rob R.; Ainsworth, Andrew T.; Morales, Leo S.; Hays, Ron D.

    2006-01-01

    Group-level parametric and non-parametric item response theory models were applied to the Consumer Assessment of Healthcare Providers and Systems (CAHPS[R]) 2.0 core items in a sample of 35,572 Medicaid recipients nested within 131 health plans. Results indicated that CAHPS responses are dominated by within health plan variation, and only weakly…

  11. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data

    PubMed Central

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2012-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, there is often prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while keeping the asymptotic variance the same as that of the unguided estimator. We assess the performance of our method via a simulation study and demonstrate it on a real data set on mergers and acquisitions. PMID:23645976
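
    The guide-then-smooth procedure can be illustrated for a single covariate (a minimal sketch assuming a linear parametric guide and a Nadaraya-Watson kernel smoother for the residuals; the paper's method covers several additive components and general link functions):

```python
import math

def guided_fit(x, y, bandwidth=0.5):
    """Parametrically guided nonparametric regression, one covariate.

    1. Fit a linear parametric guide by ordinary least squares.
    2. Smooth the residuals with a Nadaraya-Watson kernel estimator.
    3. Final estimate = parametric trend + smoothed residual correction.
    Returns a function that evaluates the guided estimate at a point.
    """
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

    def estimate(x0):
        w = [math.exp(-0.5 * ((x0 - xi) / bandwidth) ** 2) for xi in x]
        correction = sum(wi * ri for wi, ri in zip(w, resid)) / sum(w)
        return intercept + slope * x0 + correction

    return estimate

# When the guide is exactly right, the correction vanishes:
est = guided_fit([0.0, 1.0, 2.0, 3.0, 4.0], [1.0, 3.0, 5.0, 7.0, 9.0])
```

    A well-chosen guide absorbs most of the trend, so the smoother only has to estimate a small, slowly varying remainder.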

  13. Daylight exposure and the other predictors of burnout among nurses in a University Hospital.

    PubMed

    Alimoglu, Mustafa Kemal; Donmez, Levent

    2005-07-01

    The purpose of the study was to investigate whether daylight exposure in the work setting could be placed among the predictors of job burnout. The sample was composed of 141 nurses who work in Akdeniz University Hospital in Antalya, Turkey. All participants were asked to complete a personal data collection form, the Maslach Burnout Inventory, the Work Related Strain Inventory and the Work Satisfaction Questionnaire to collect data about their burnout, work-related stress (WRS) and job satisfaction (JS) levels in addition to personal characteristics. Descriptive statistics, parametric and non-parametric tests and correlation analysis were used in the statistical analyses. Daylight exposure showed no direct effect on burnout but had an indirect effect via WRS and JS. Exposure to daylight for at least 3 h a day was found to cause less stress and higher satisfaction at work. Suffering from sleep disorders, younger age, job-related health problems and educational level were found to have total or partial direct effects on burnout. Night shifts may lead to burnout via work-related strain, and working in inpatient services and dissatisfaction with annual income may act via job dissatisfaction. This study confirmed some established predictors of burnout and provided data on an unexplored area. Daylight exposure may influence job burnout.

  14. A climatology of gravity wave parameters based on satellite limb soundings

    NASA Astrophysics Data System (ADS)

    Ern, Manfred; Trinh, Quang Thai; Preusse, Peter; Riese, Martin

    2017-04-01

    Gravity waves are one of the main drivers of atmospheric dynamics. The resolution of most global circulation models (GCMs) and chemistry climate models (CCMs), however, is too coarse to properly resolve the small scales of gravity waves. Horizontal scales of gravity waves are in the range of tens to a few thousand kilometers. Gravity wave source processes involve even smaller scales. Therefore GCMs/CCMs usually parametrize the effect of gravity waves on the global circulation. These parametrizations are very simplified, and comparisons with global observations of gravity waves are needed for an improvement of parametrizations and an alleviation of model biases. In our study, we present a global data set of gravity wave distributions observed in the stratosphere and the mesosphere by the infrared limb sounding satellite instruments High Resolution Dynamics Limb Sounder (HIRDLS) and Sounding of the Atmosphere using Broadband Emission Radiometry (SABER). We provide various gravity wave parameters (for example, gravity wave variances, potential energies and absolute momentum fluxes). This comprehensive climatological data set can serve for comparison with other instruments (ground based, airborne, or other satellite instruments), as well as for comparison with gravity wave distributions, both resolved and parametrized, in GCMs and CCMs. The purpose of providing various different parameters is to make our data set useful for a large number of potential users and to overcome limitations of other observation techniques, or of models, that may be able to provide only one of those parameters. We present a climatology of typical average global distributions and of zonal averages, as well as their natural range of variations. In addition, we discuss seasonal variations of the global distribution of gravity waves, as well as limitations of our method of deriving gravity wave parameters from satellite data.

  15. Model-independent fit to Planck and BICEP2 data

    NASA Astrophysics Data System (ADS)

    Barranco, Laura; Boubekeur, Lotfi; Mena, Olga

    2014-09-01

    Inflation is the leading theory to describe elegantly the initial conditions that led to structure formation in our Universe. In this paper, we present a novel phenomenological fit to the Planck, WMAP polarization (WP) and the BICEP2 data sets using an alternative parametrization. Instead of starting from inflationary potentials and computing the inflationary observables, we use a phenomenological parametrization due to Mukhanov, describing inflation by an effective equation of state, in terms of the number of e-folds and two phenomenological parameters α and β. Within such a parametrization, which captures the different inflationary models in a model-independent way, the values of the scalar spectral index ns, its running and the tensor-to-scalar ratio r are predicted, given a set of parameters (α, β). We perform a Markov Chain Monte Carlo analysis of these parameters, and we show that the combined analysis of Planck and WP data favors the Starobinsky and Higgs inflation scenarios. Assuming that the BICEP2 signal is not entirely due to foregrounds, the addition of this last data set prefers instead the ϕ² chaotic models. The constraint we get from Planck and WP data alone on the derived tensor-to-scalar ratio is r < 0.18 at 95% C.L., a value consistent with the one quoted from the BICEP2 Collaboration analysis, r = 0.16 (+0.06, -0.05), after foreground subtraction. This is not necessarily at odds with the 2σ tension found between Planck and BICEP2 measurements when analyzing data in terms of the usual ns and r parameters, given that the parametrization used here, for the preferred value ns ≃ 0.96, allows only for a restricted parameter space in the usual (ns, r) plane.

  16. Relative Critical Points

    NASA Astrophysics Data System (ADS)

    Lewis, Debra

    2013-05-01

    Relative equilibria of Lagrangian and Hamiltonian systems with symmetry are critical points of appropriate scalar functions parametrized by the Lie algebra (or its dual) of the symmetry group. Setting aside the structures - symplectic, Poisson, or variational - generating dynamical systems from such functions highlights the common features of their construction and analysis, and supports the construction of analogous functions in non-Hamiltonian settings. If the symmetry group is nonabelian, the functions are invariant only with respect to the isotropy subgroup of the given parameter value. Replacing the parametrized family of functions with a single function on the product manifold and extending the action using the (co)adjoint action on the algebra or its dual yields a fully invariant function. An invariant map can be used to reverse the usual perspective: rather than selecting a parametrized family of functions and finding their critical points, conditions under which functions will be critical on specific orbits, typically distinguished by isotropy class, can be derived. This strategy is illustrated using several well-known mechanical systems - the Lagrange top, the double spherical pendulum, the free rigid body, and the Riemann ellipsoids - and generalizations of these systems.

  17. A design study for the addition of higher order parametric discrete elements to NASTRAN

    NASA Technical Reports Server (NTRS)

    Stanton, E. L.

    1972-01-01

    The addition of discrete elements to NASTRAN poses significant interface problems with the level 15.1 assembly modules and geometry modules. Potential problems in designing new modules for higher-order parametric discrete elements are reviewed in both areas. An assembly procedure is suggested that separates grid point degrees of freedom on the basis of admissibility. New geometric input data are described that facilitate the definition of surfaces in parametric space.

  18. From 2D to 3D: Construction of a 3D Parametric Model for Detection of Dental Roots Shape and Position from a Panoramic Radiograph—A Preliminary Report

    PubMed Central

    Mazzotta, Laura; Cozzani, Mauro; Mutinelli, Sabrina; Castaldo, Attilio; Silvestrini-Biavati, Armando

    2013-01-01

    Objectives. To build a 3D parametric model to detect shape and volume of dental roots from a panoramic radiograph (PAN) of the patient. Materials and Methods. A PAN and a cone beam computed tomography (CBCT) of a patient were acquired. For each tooth, various parameters were considered (coronal and root lengths and widths): these were measured from the CBCT and from the PAN. Measures were compared to evaluate the accuracy level of PAN measurements. Using CAD software, parametric models of an incisor and of a molar were constructed employing B-spline curves and free-form surfaces. PAN measures of teeth 2.1 and 3.6 were assigned to the parametric models; the same two teeth were segmented from CBCT. The two models were superimposed to assess the accuracy of the parametric model. Results. PAN measures proved to be accurate and comparable with all other measurements. From the model superimposition, the maximum error was 1.1 mm on the incisor crown and 2 mm on the molar furcation. Conclusion. This study shows that it is possible to build a 3D parametric model starting from 2D information with a clinically valid accuracy level. This can ultimately lead to a crown-root movement simulation. PMID:23554814

  19. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    PubMed

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-1999. The highest correlations between efficiency scores are found for the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one-output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at the hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.

  20. Parametrization study of the land multiparameter VTI elastic waveform inversion

    NASA Astrophysics Data System (ADS)

    He, W.; Plessix, R.-É.; Singh, S.

    2018-06-01

    Multiparameter inversion of seismic data remains challenging due to the trade-off between the different elastic parameters and the non-uniqueness of the solution. The sensitivity of the seismic data to a given subsurface elastic parameter depends on the source and receiver ray/wave path orientations at the subsurface point. In a high-frequency approximation, this is commonly analysed through the study of the radiation patterns that indicate the sensitivity of each parameter versus the incoming (from the source) and outgoing (to the receiver) angles. In practice, this means that the inversion result becomes sensitive to the choice of parametrization, notably because the null-space of the inversion depends on this choice. We can use a least-overlapping parametrization that minimizes the overlaps between the radiation patterns, in which case each parameter is only sensitive in a restricted angle domain, or an overlapping parametrization that contains a parameter sensitive to all angles, in which case overlaps between the radiation patterns occur. Considering a multiparameter inversion in an elastic vertically transverse isotropic medium and a complex land geological setting, we show that the inversion with the least-overlapping parametrization gives less satisfactory results than with the overlapping parametrization. The difficulties come from the complex wave paths, which make it difficult to predict the areas of sensitivity of each parameter. This shows that the parametrization choice should be based not only on the radiation pattern analysis but also on the angular coverage at each subsurface point, which depends on geology and the acquisition layout.

  1. Non-parametric directionality analysis - Extension for removal of a single common predictor and application to time series.

    PubMed

    Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob

    2016-08-01

    The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions. It is applied to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.
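
    The conditioning-on-a-single-predictor step can be illustrated for scalar series (a sketch of ordinary partial correlation via regression residuals; the paper's contribution, decomposing such quantities summatively by direction, is not shown here, and the data values are invented):

```python
import math

def partial_correlation(x, y, z):
    """Partial correlation of x and y given a single common predictor z,
    computed by correlating the residuals of x regressed on z with the
    residuals of y regressed on z."""
    def residuals(a, b):
        # Residuals of a after least-squares regression on b.
        n = len(a)
        abar, bbar = sum(a) / n, sum(b) / n
        slope = (sum((ai - abar) * (bi - bbar) for ai, bi in zip(a, b))
                 / sum((bi - bbar) ** 2 for bi in b))
        return [ai - abar - slope * (bi - bbar) for ai, bi in zip(a, b)]

    rx, ry = residuals(x, z), residuals(y, z)
    num = sum(a * b for a, b in zip(rx, ry))
    den = math.sqrt(sum(a * a for a in rx) * sum(b * b for b in ry))
    return num / den

# Invented short series: x and y both driven by the common predictor z.
z = [0.0, 1.0, 2.0, 3.0, 4.0]
x = [0.0, 1.2, 1.9, 3.1, 4.0]
y = [0.1, 0.8, 2.2, 2.9, 4.1]
pc = partial_correlation(x, y, z)
```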

  2. A Parametric Approach to Numerical Modeling of TKR Contact Forces

    PubMed Central

    Lundberg, Hannah J.; Foucher, Kharma C.; Wimmer, Markus A.

    2009-01-01

    In vivo knee contact forces are difficult to determine using numerical methods because there are more unknown forces than equilibrium equations available. We developed parametric methods for computing contact forces across the knee joint during the stance phase of level walking. Three-dimensional contact forces were calculated at two points of contact between the tibia and the femur, one on the lateral aspect of the tibial plateau, and one on the medial side. Muscle activations were parametrically varied over their physiologic range resulting in a solution space of contact forces. The obtained solution space was reasonably small and the resulting force pattern compared well to a previous model from the literature for kinematics and external kinetics from the same patient. Peak forces of the parametric model and the previous model were similar for the first half of the stance phase, but differed for the second half. The previous model did not take into account the transverse external moment about the knee and could not calculate muscle activation levels. Ultimately, the parametric model will result in more accurate contact force inputs for total knee simulators, as current inputs are not generally based on kinematics and kinetics inputs from TKR patients. PMID:19155015
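
    The parametric strategy, sweeping muscle activation over its physiologic range and solving the equilibrium equations at each level, can be shown in a deliberately simplified planar form (two contact points, one lumped extensor muscle; all geometry and load values below are hypothetical, and the authors' model is three-dimensional with multiple muscles):

```python
def contact_force_envelope(ext_force, ext_moment, d_med, d_lat,
                           muscle_max, lever, steps=50):
    """Sweep muscle activation over [0, 1]; at each level solve the two
    planar equilibrium equations (vertical force balance and frontal-plane
    moment balance about the joint centre) for the medial contact force.
    Returns the (min, max) medial force over the sweep: the solution space.
    """
    medial = []
    for i in range(steps + 1):
        activation = i / steps
        fm = activation * muscle_max   # muscle force at this activation
        total = ext_force + fm         # vertical balance: F_med + F_lat
        # Moment balance: F_med*d_med - F_lat*d_lat + fm*lever = ext_moment.
        # Substituting F_lat = total - F_med gives one linear equation:
        f_med = (ext_moment - fm * lever + total * d_lat) / (d_med + d_lat)
        medial.append(f_med)
    return min(medial), max(medial)

# Hypothetical stance-phase instant: 1000 N external load, symmetric
# condyles 25 mm from the joint centre, up to 2000 N of muscle force:
lo, hi = contact_force_envelope(1000.0, 0.0, 0.025, 0.025, 2000.0, 0.01)
```

    In this toy case, co-contraction widens the medial-force solution space from 500 N (no muscle force) up to 1100 N (full activation).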

  3. Model-based approach for design verification and co-optimization of catastrophic and parametric-related defects due to systematic manufacturing variations

    NASA Astrophysics Data System (ADS)

    Perry, Dan; Nakamoto, Mark; Verghese, Nishath; Hurat, Philippe; Rouse, Rich

    2007-03-01

    Model-based hotspot detection and silicon-aware parametric analysis help designers optimize their chips for yield, area and performance without the high cost of applying foundries' recommended design rules. This set of DFM/recommended rules is primarily litho-driven, but cannot guarantee a manufacturable design without imposing overly restrictive design requirements. This rule-based methodology of making design decisions based on idealized polygons that no longer represent what is on silicon needs to be replaced. Using model-based simulation of the lithography, OPC, RET and etch effects, followed by electrical evaluation of the resulting shapes, leads to a more realistic and accurate analysis. This analysis can be used to evaluate intelligent design trade-offs and identify potential failures due to systematic manufacturing defects during the design phase. The successful DFM design methodology consists of three parts: 1. Achieve a more aggressive layout through limited usage of litho-related recommended design rules. A 10% to 15% area reduction is achieved by using more aggressive design rules. DFM/recommended design rules are used only if there is no impact on cell size. 2. Identify and fix hotspots using a model-based layout printability checker. Model-based litho and etch simulation are done at the cell level to identify hotspots. Violations of recommended rules may cause additional hotspots, which are then fixed. The resulting design is ready for step 3. 3. Improve timing accuracy with a process-aware parametric analysis tool for transistors and interconnect. Contours of diffusion, poly and metal layers are used for parametric analysis. In this paper, we show the results of this physical and electrical DFM methodology at Qualcomm. We describe how Qualcomm was able to develop more aggressive cell designs that yielded a 10% to 15% area reduction using this methodology.
Model-based shape simulation was employed during library development to validate architecture choices and to optimize cell layout. At the physical verification stage, the shape simulator was run at full-chip level to identify and fix residual hotspots on interconnect layers, on poly or metal 1 due to interaction between adjacent cells, or on metal 1 due to interaction between routing (via and via cover) and cell geometry. To determine an appropriate electrical DFM solution, Qualcomm developed an experiment to examine various electrical effects. After reporting the silicon results of this experiment, which showed sizeable delay variations due to lithography-related systematic effects, we also explain how contours of diffusion, poly and metal can be used for silicon-aware parametric analysis of transistors and interconnect at the cell-, block- and chip-level.

  4. Beam patterns in an optical parametric oscillator set-up employing walk-off compensating beta barium borate crystals

    NASA Astrophysics Data System (ADS)

    Kaucikas, M.; Warren, M.; Michailovas, A.; Antanavicius, R.; van Thor, J. J.

    2013-02-01

    This paper describes the investigation of an optical parametric oscillator (OPO) set-up based on two beta barium borate (BBO) crystals, where the interplay between the crystal orientations, cut angles and air dispersion substantially influenced the OPO performance, and especially the angular spectrum of the output beam. Theory suggests that if two BBO crystals are used in this type of design, they should be of different cuts. This paper provides an experimental demonstration of this fact. Furthermore, it is shown that air dispersion produces similar effects and should be taken into account. An X-ray crystallographic indexing of the crystals was performed as an independent test of the above conclusions.

  5. The Pariser-Parr-Pople model for trans-polyenes. I. Ab initio and semiempirical study of the bond alternation in trans-butadiene

    NASA Astrophysics Data System (ADS)

    Förner, Wolfgang

    1992-03-01

    Ab initio investigations of the bond alternation in butadiene are presented. The atomic basis sets applied range from minimal to split-valence-plus-polarization quality; with the latter, the Hartree-Fock limit for the bond alternation is reached. Correlation is considered at the levels of second-order Møller-Plesset many-body perturbation theory (MP2), linear coupled cluster doubles (L-CCD) and coupled cluster doubles (CCD). For the smaller basis sets it is shown that π-π correlations are essential for the bond alternation, while the effects of σ-σ and σ-π correlations, though large, are nearly independent of bond alternation. At the MP2 level the variation of σ-π correlation with bond alternation is surprisingly large; this is discussed as an artefact of MP2. Comparative Su-Schrieffer-Heeger (SSH) and Pariser-Parr-Pople (PPP) calculations show that these models in their usual parametrizations cannot reproduce the ab initio results.
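    For orientation, the SSH model named above couples the π electrons to the lattice displacements that produce bond alternation. In its conventional textbook form (a sketch of the standard parametrization, not necessarily the exact one compared in the paper):

```latex
H_{\mathrm{SSH}} \;=\; \sum_{n,\sigma} \bigl[\, -t_0 + \alpha \,(u_{n+1} - u_n) \,\bigr]
  \left( c_{n+1,\sigma}^{\dagger}\, c_{n,\sigma} + \mathrm{h.c.} \right)
  \;+\; \frac{K}{2} \sum_{n} \left( u_{n+1} - u_n \right)^{2}
```

    Here u_n are the carbon displacements, t_0 the uniform hopping integral, α the electron-phonon coupling, and K the σ-bond spring constant. The PPP model augments the π-electron Hamiltonian with on-site and long-range electron-electron repulsion terms, which is what allows the π-π correlation effects discussed above to be represented at all.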

  6. Human discomfort response to noise combined with vertical vibration

    NASA Technical Reports Server (NTRS)

    Leatherwood, J. D.

    1979-01-01

    An experimental investigation was conducted (1) to determine the effects of combined environmental noise and vertical vibration upon human subjective discomfort response, (2) to develop a model for the prediction of passenger discomfort response to the combined environment, and (3) to develop a set of noise-vibration curves for use as criteria in ride quality design. Subjects were exposed to parametric combinations of noise and vibrations through the use of a realistic laboratory simulator. Results indicated that accurate prediction of passenger ride comfort requires knowledge of both the level and frequency content of the noise and vibration components of a ride environment as well as knowledge of the interactive effects of combined noise and vibration. A design tool in the form of an empirical model of passenger discomfort response to combined noise and vertical vibration was developed and illustrated by several computational examples. Finally, a set of noise-vibration criteria curves were generated to illustrate the fundamental design trade-off possible between passenger discomfort and the noise-vibration levels that produce the discomfort.

  7. Parametric fMRI of paced motor responses uncovers novel whole-brain imaging biomarkers in spinocerebellar ataxia type 3.

    PubMed

    Duarte, João Valente; Faustino, Ricardo; Lobo, Mercês; Cunha, Gil; Nunes, César; Ferreira, Carlos; Januário, Cristina; Castelo-Branco, Miguel

    2016-10-01

    Machado-Joseph disease, the inherited spinocerebellar ataxia type 3 (SCA3), is the most common form of spinocerebellar ataxia worldwide. Neuroimaging and neuropathology have consistently demonstrated cerebellar alterations. Here we aimed to discover whole-brain functional biomarkers, based on parametric performance-level-dependent signals. We assessed 13 patients with early SCA3 and 14 healthy participants. We used a combined parametric behavioral/functional neuroimaging design to investigate disease fingerprints, as a function of performance levels, coupled with structural MRI and voxel-based morphometry. Functional magnetic resonance imaging (fMRI) was designed to parametrically analyze behavior and neural responses to audio-paced bilateral thumb movements at temporal frequencies of 1, 3, and 5 Hz. Our performance-level-based design probing neuronal correlates of motor coordination enabled the discovery that neural activation and behavior show critical loss of parametric modulation specifically in SCA3, associated with frequency-dependent cortico/subcortical activation/deactivation patterns. Cerebellar/cortical rate-dependent dissociation patterns could clearly differentiate between groups irrespective of grey matter loss. Our findings suggest functional reorganization of the motor network and indicate a possible role of fMRI as a tool to monitor disease progression in SCA3. Accordingly, fMRI patterns proved to be potential biomarkers in early SCA3, as tested by receiver operating characteristic analysis of both behavior and neural activation at different frequencies. Discrimination analysis based on the BOLD signal in response to the applied parametric finger-tapping task often reached >80% sensitivity and specificity in single regions-of-interest. Functional fingerprints based on cerebellar and cortical BOLD performance-dependent signal modulation can thus be combined as diagnostic and/or therapeutic targets in hereditary ataxia. Hum Brain Mapp 37:3656-3668, 2016.
© 2016 Wiley Periodicals, Inc.

  8. Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives.

    PubMed

    Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan

    2014-01-01

    The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of the higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and its oriented object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context.

  9. Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives

    PubMed Central

    Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan

    2014-01-01

    The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of the higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and its oriented object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context. PMID:24550798

  10. Nonparametric Regression and the Parametric Bootstrap for Local Dependence Assessment.

    ERIC Educational Resources Information Center

    Habing, Brian

    2001-01-01

    Discusses ideas underlying nonparametric regression and the parametric bootstrap with an overview of their application to item response theory and the assessment of local dependence. Illustrates the use of the method in assessing local dependence that varies with examinee trait levels. (SLD)
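    The parametric bootstrap idea behind this approach can be sketched generically: fit a null model of local independence, simulate the statistic of interest under it, and compare with the observed value. The sketch below uses independent Bernoulli items and an inter-item correlation statistic as stand-ins; Habing's IRT-specific models and statistics are not reproduced here.

```python
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation (returns 0.0 for a degenerate vector)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def parametric_bootstrap_p(item1, item2, n_boot=2000, seed=1):
    """Parametric bootstrap test of local independence for two 0/1 items.

    Fit the null model (independent Bernoulli items), simulate the inter-item
    correlation under it, and compare with the observed correlation.
    """
    rng = random.Random(seed)
    n = len(item1)
    p1, p2 = sum(item1) / n, sum(item2) / n          # fitted null parameters
    observed = abs(pearson(item1, item2))
    hits = 0
    for _ in range(n_boot):
        sim1 = [1 if rng.random() < p1 else 0 for _ in range(n)]
        sim2 = [1 if rng.random() < p2 else 0 for _ in range(n)]
        if abs(pearson(sim1, sim2)) >= observed:
            hits += 1
    return (hits + 1) / (n_boot + 1)                  # bootstrap p-value

# Two strongly dependent items should yield a small p-value.
a = [1, 1, 1, 1, 0, 0, 0, 0] * 5
b = [1, 1, 1, 0, 0, 0, 0, 1] * 5   # agrees with a on 6 of every 8 responses
p_value = parametric_bootstrap_p(a, b)
```

    The same skeleton applies with any fitted null model in place of the independent Bernoulli items: only the simulation step changes.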

  11. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes

    PubMed Central

    Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.

    2012-01-01

    Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds. PMID:22582038

  12. Parametric-Studies and Data-Plotting Modules for the SOAP

    NASA Technical Reports Server (NTRS)

    2008-01-01

    "Parametric Studies" and "Data Table Plot View" are the names of software modules in the Satellite Orbit Analysis Program (SOAP). Parametric Studies enables parameterization of as many as three satellite or ground-station attributes across a range of values and computes the average, minimum, and maximum of a specified metric, the revisit time, or 21 other functions at each point in the parameter space. This computation produces a one-, two-, or three-dimensional table of data representing statistical results across the parameter space. Inasmuch as the output of a parametric study in three dimensions can be a very large data set, visualization is a paramount means of discovering trends in the data (see figure). Data Table Plot View enables visualization of the data table created by Parametric Studies or by another data source: this module quickly generates a display of the data in the form of a rotatable three-dimensional-appearing plot, making it unnecessary to load the SOAP output data into a separate plotting program. The rotatable three-dimensional-appearing plot makes it easy to determine which points in the parameter space are most desirable. Both modules provide intuitive user interfaces for ease of use.
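    The sweep-then-summarize behavior described above can be sketched in a few lines. The metric and attribute names below are hypothetical placeholders; SOAP's real metrics and interfaces are not described in this record.

```python
import itertools
import statistics

def parametric_study(metric, axes):
    """Sweep a metric over a 1- to 3-dimensional parameter grid.

    `axes` maps an attribute name to a list of values; `metric` returns a
    list of samples (e.g., revisit times) for one parameter combination.
    The result maps each grid point to (min, mean, max), mirroring the
    statistics the Parametric Studies module tabulates.
    """
    names = list(axes)
    table = {}
    for combo in itertools.product(*(axes[n] for n in names)):
        samples = metric(dict(zip(names, combo)))
        table[combo] = (min(samples), statistics.fmean(samples), max(samples))
    return table

# Toy "revisit time" metric for a hypothetical satellite: higher altitude and
# more ground stations shorten the gap (illustrative numbers only).
def toy_revisit(params):
    base = 120.0 / params["num_stations"]
    return [base - params["altitude_km"] / 100.0 + k for k in (0.0, 1.0, 2.0)]

grid = parametric_study(toy_revisit,
                        {"altitude_km": [400, 800], "num_stations": [1, 2]})
```

    The resulting `grid` is exactly the kind of multi-dimensional statistics table that Data Table Plot View would then render as a rotatable plot.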

  13. Parametric Model of an Aerospike Rocket Engine

    NASA Technical Reports Server (NTRS)

    Korte, J. J.

    2000-01-01

    A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.

  15. Learning from Friends: Measuring Influence in a Dyadic Computer Instructional Setting

    ERIC Educational Resources Information Center

    DeLay, Dawn; Hartl, Amy C.; Laursen, Brett; Denner, Jill; Werner, Linda; Campe, Shannon; Ortiz, Eloy

    2014-01-01

    Data collected from partners in a dyadic instructional setting are, by definition, not statistically independent. As a consequence, conventional parametric statistical analyses of change and influence carry considerable risk of bias. In this article, we illustrate a strategy to overcome this obstacle: the longitudinal actor-partner interdependence…

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hentschke, Clemens M., E-mail: clemens.hentschke@gmail.com; Tönnies, Klaus D.; Beuing, Oliver

    Purpose: The early detection of cerebral aneurysms plays a major role in preventing subarachnoid hemorrhage. The authors present a system to automatically detect cerebral aneurysms in multimodal 3D angiographic data sets. The authors’ system is parametrizable for contrast-enhanced magnetic resonance angiography (CE-MRA), time-of-flight magnetic resonance angiography (TOF-MRA), and computed tomography angiography (CTA). Methods: Initial volumes of interest are found by applying a multiscale sphere-enhancing filter. Several features are combined in a linear discriminant function (LDF) to distinguish between true aneurysms and false positives. The features include shape information, spatial information, and probability information. The LDF can either be parametrized by domain experts or automatically by training. Vessel segmentation is avoided as it could heavily influence the detection algorithm. Results: The authors tested their method with 151 clinical angiographic data sets containing 112 aneurysms. The authors reach a sensitivity of 95% with CE-MRA data sets at an average false positive rate per data set (FP_DS) of 8.2. For TOF-MRA, the authors achieve 95% sensitivity at 11.3 FP_DS. For CTA, the authors reach a sensitivity of 95% at 22.8 FP_DS. For all modalities, the expert parametrization led to similar or better results than the trained parametrization, eliminating the need for training. 93% of aneurysms that were smaller than 5 mm were found. The authors also showed that their algorithm is capable of detecting aneurysms that were previously overlooked by radiologists. Conclusions: The authors present an automatic system to detect cerebral aneurysms in multimodal angiographic data sets. The system proved to be a suitable computer-aided detection tool to help radiologists find cerebral aneurysms.
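    The linear discriminant function at the core of this pipeline reduces to a weighted sum of per-candidate features compared against a threshold. The feature names, weights, and threshold below are hypothetical placeholders; the paper's actual features and expert parametrization are not given in this record.

```python
def ldf_score(features, weights, bias=0.0):
    """Linear discriminant function over candidate features.

    Combines per-candidate features (e.g., a sphericity score from the
    sphere-enhancing filter, a spatial prior, a probability term) into one
    score; candidates above a threshold are kept as detections.
    """
    return bias + sum(weights[name] * value for name, value in features.items())

# Hypothetical weights, as might be set by a domain expert or by training.
weights = {"sphericity": 2.0, "vessel_proximity": 1.0, "intensity_prob": 1.5}

candidate = {"sphericity": 0.9, "vessel_proximity": 0.5, "intensity_prob": 0.8}
score = ldf_score(candidate, weights, bias=-2.0)
is_detection = score > 0.0   # threshold trades sensitivity against FP_DS
```

    Moving the threshold is what sweeps the operating point along the sensitivity / false-positives-per-data-set curve reported in the Results.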

  17. Thresholding functional connectomes by means of mixture modeling.

    PubMed

    Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F

    2018-05-01

    Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations, however, remains an open research problem. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose to use an alternative thresholding strategy based on the model fit, using pseudo-False Discovery Rates derived on the basis of the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project.
The sparse connectomes obtained from mixture modeling are further discussed in the light of the previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that with use of our method, we are able to extract similar information on the group level as can be achieved with permutation testing even though these two methods are not equivalent. We demonstrate that with both of these methods, we obtain functional decoupling between the two hemispheres in the higher order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in the visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
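    The core idea of thresholding by mixture modeling can be illustrated with a minimal, self-contained sketch: fit a two-component Gaussian mixture to the edge weights by EM, treat the low-mean component as the empirical null, and keep edges whose posterior favors the other component. This is a plain two-Gaussian stand-in with synthetic data, not the authors' exact mixture or pseudo-FDR machinery.

```python
import math
import random
import statistics

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def fit_two_gaussians(data, iters=100):
    """EM for a two-component Gaussian mixture: a 'null' component of weak,
    unreliable edges near zero and a component of reliable connections."""
    mu = [statistics.median(data), max(data)]           # crude initialization
    s0 = max(statistics.pstdev(data) / 3.0, 1e-3)
    sd, w = [s0, s0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * norm_pdf(x, mu[k], sd[k]) for k in range(2)]
            tot = sum(p) or 1e-300
            resp.append([pk / tot for pk in p])
        # M-step: re-estimate weights, means and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp) or 1e-12
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = max(math.sqrt(var), 1e-3)
    return w, mu, sd

def threshold_edges(partial_corrs, keep_posterior=0.5):
    """Keep edges whose posterior under the higher-mean (signal) component
    exceeds `keep_posterior`; the rest is treated as the empirical null."""
    w, mu, sd = fit_two_gaussians(partial_corrs)
    hi = 0 if mu[0] > mu[1] else 1
    kept = []
    for x in partial_corrs:
        p = [w[k] * norm_pdf(x, mu[k], sd[k]) for k in range(2)]
        if p[hi] / (sum(p) or 1e-300) > keep_posterior:
            kept.append(x)
    return kept

# Synthetic connectome edges: 300 null partial correlations near zero
# plus 30 genuine connections around 0.6.
rng = random.Random(0)
edges = [rng.gauss(0.0, 0.05) for _ in range(300)] + \
        [rng.gauss(0.6, 0.05) for _ in range(30)]
sparse = threshold_edges(edges)
```

    Because the threshold is derived from the fitted mixture itself, each subject gets an individually calibrated sparse connectome, which is the property the abstract emphasizes over cohort-level permutation testing.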

  18. Evaluating effects of developmental education for college students using a regression discontinuity design.

    PubMed

    Moss, Brian G; Yeaton, William H

    2013-10-01

    Annually, American colleges and universities provide developmental education (DE) to millions of underprepared students; however, evaluation estimates of DE benefits have been mixed. Using a prototypic exemplar of DE, our primary objective was to investigate the utility of a replicative evaluative framework for assessing program effectiveness. Within the context of the regression discontinuity (RD) design, this research examined the effectiveness of a DE program for five sequential cohorts of first-time college students. Discontinuity estimates were generated for individual terms and cumulatively, across terms. Participants were 3,589 first-time community college students. DE program effects were measured by contrasting both college-level English grades and a dichotomous measure of pass/fail, for DE and non-DE students. Parametric and nonparametric estimates of overall effect were positive for continuous and dichotomous measures of achievement (grade and pass/fail). The variability of program effects over time was determined by tracking results within individual terms and cumulatively, across terms. Applying this replication strategy, DE's overall impact was modest (an effect size of approximately .20) but quite consistent, based on parametric and nonparametric estimation approaches. A meta-analysis of five RD results yielded virtually the same estimate as the overall, parametric findings. Subset analysis, though tentative, suggested that males benefited more than females, while academic gains were comparable for different ethnicities. The cumulative, within-study comparison, replication approach offers considerable potential for the evaluation of new and existing policies, particularly when effects are relatively small, as is often the case in applied settings.
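    The sharp regression-discontinuity estimate underlying this design can be sketched in a few lines: fit a line to the outcome on each side of the assignment cutoff and take the jump between the two fits at the cutoff. This is a textbook sketch on synthetic data, not the authors' exact specification (which also used nonparametric estimates and cohort replication).

```python
def ols_line(xs, ys):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rd_effect(scores, outcomes, cutoff):
    """Sharp RD estimate: the discontinuity in fitted outcomes at the cutoff."""
    below = [(s, y) for s, y in zip(scores, outcomes) if s < cutoff]
    above = [(s, y) for s, y in zip(scores, outcomes) if s >= cutoff]
    a0, b0 = ols_line([s for s, _ in below], [y for _, y in below])
    a1, b1 = ols_line([s for s, _ in above], [y for _, y in above])
    return (a1 + b1 * cutoff) - (a0 + b0 * cutoff)

# Synthetic example: a placement score assigns students below a cutoff of 50
# to DE; outcomes rise with score, plus a jump of 0.5 grade points at 50.
scores = list(range(20, 80))
outcomes = [1.0 + 0.02 * s + (0.5 if s >= 50 else 0.0) for s in scores]
effect = rd_effect(scores, outcomes, cutoff=50)
```

    Because assignment near the cutoff is as-good-as-random, the estimated jump can be read as the local causal effect of the program.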

  19. Range compensation for backscattering measurements in the difference-frequency nearfield of a parametric sonar.

    PubMed

    Foote, Kenneth G

    2012-05-01

    Measurement of acoustic backscattering properties of targets requires removal of the range dependence of echoes. This process is called range compensation. For conventional sonars making measurements in the transducer farfield, the compensation removes effects of geometrical spreading and absorption. For parametric sonars consisting of a parametric acoustic transmitter and a conventional-sonar receiver, two additional range dependences require compensation when making measurements in the nonlinearly generated difference-frequency nearfield: an apparently increasing source level and a changing beamwidth. General expressions are derived for range compensation functions in the difference-frequency nearfield of parametric sonars. These are evaluated numerically for a parametric sonar whose difference-frequency band, effectively 1-6 kHz, is being used to observe Atlantic herring (Clupea harengus) in situ. Range compensation functions for this sonar are compared with corresponding functions for conventional sonars for the cases of single and multiple scatterers. Dependences of these range compensation functions on the parametric sonar transducer shape, size, acoustic power density, and hydrography are investigated. Parametric range compensation functions, when applied with calibration data, will enable difference-frequency echoes to be expressed in physical units of volume backscattering, and backscattering spectra, including fish-swimbladder-resonances, to be analyzed.
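    For the conventional-sonar baseline mentioned above, range compensation is the familiar time-varied gain that removes geometrical spreading and absorption. The sketch below implements only that standard farfield form; the parametric-sonar nearfield terms (apparent source-level growth and changing beamwidth with range) are derived numerically in the paper and are not reproduced here.

```python
import math

def tvg_db(r_m, alpha_db_per_m, single_target=True):
    """Conventional farfield time-varied-gain range compensation in dB.

    40*log10(R) + 2*alpha*R for a single scatterer (two-way spherical
    spreading), 20*log10(R) + 2*alpha*R for volume backscattering from
    many scatterers; alpha is the absorption coefficient in dB/m.
    """
    spreading = (40.0 if single_target else 20.0) * math.log10(r_m)
    return spreading + 2.0 * alpha_db_per_m * r_m

# 100 m range with 0.001 dB/m absorption:
gain_single = tvg_db(100.0, 0.001)                       # single target
gain_volume = tvg_db(100.0, 0.001, single_target=False)  # volume scattering
```

    Applied together with calibration data, such a gain converts raw echo levels into range-independent backscattering quantities, which is the role the nearfield compensation functions play for the parametric sonar.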

  20. Design of a terahertz parametric oscillator based on a resonant cavity in a terahertz waveguide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, K., E-mail: k-saito@material.tohoku.ac.jp; Oyama, Y.; Tanabe, T.

    We demonstrate ns-pulsed pumping of terahertz (THz) parametric oscillations in a quasi-triply resonant cavity in a THz waveguide. The THz waves, down converted through parametric interactions between the pump and signal waves at telecom frequencies, are confined to a GaP single mode ridge waveguide. By combining the THz waveguide with a quasi-triply resonant cavity, the nonlinear interactions can be enhanced. A low threshold pump intensity for parametric oscillations can be achieved in the cavity waveguide. The THz output power can be maximized by optimizing the quality factors of the cavity so that an optical to THz photon conversion efficiency, η{submore » p}, of 0.35, which is near the quantum-limit level, can be attained. The proposed THz optical parametric oscillator can be utilized as an efficient and monochromatic THz source.« less

  1. Acoustic attenuation design requirements established through EPNL parametric trades

    NASA Technical Reports Server (NTRS)

    Veldman, H. F.

    1972-01-01

    An optimization procedure was established for designing an acoustic lining configuration that balances engine performance losses against lining attenuation characteristics. Acoustic attenuation design requirements were determined through parametric trade studies using the subjective noise unit of effective perceived noise level (EPNL).

  2. Polarization of light and hopf fibration

    NASA Astrophysics Data System (ADS)

    Jurčo, B.

    1987-09-01

    A set of polarization states of quasi-monochromatic light is described geometrically in terms of the Hopf fibration. Several associated alternative polarization parametrizations are given explicitly, including the Stokes parameters.
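    The Hopf-fibration picture can be made concrete with the Stokes parameters computed from a complex Jones vector: for fully polarized light, the normalized map from unit Jones vectors (the 3-sphere) onto (S1, S2, S3) on the Poincaré sphere is exactly the Hopf map. A small sketch (note that sign conventions for S3 vary between texts):

```python
def stokes(ex, ey):
    """Stokes parameters from a complex Jones vector (Ex, Ey)."""
    s0 = abs(ex) ** 2 + abs(ey) ** 2          # total intensity
    s1 = abs(ex) ** 2 - abs(ey) ** 2          # horizontal vs vertical
    s2 = 2.0 * (ex * ey.conjugate()).real     # +45 deg vs -45 deg
    s3 = -2.0 * (ex * ey.conjugate()).imag    # circular (sign convention!)
    return s0, s1, s2, s3

# Circular polarization: Ex = 1/sqrt(2), Ey = i/sqrt(2).
s0, s1, s2, s3 = stokes(1 / 2 ** 0.5, 1j / 2 ** 0.5)
```

    For any fully polarized state, S1^2 + S2^2 + S3^2 = S0^2, so the normalized Stokes vector lies on the unit sphere; the overall phase of the Jones vector, which the map forgets, is the fiber of the fibration.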

  3. Multi-Parametric MRI and Texture Analysis to Visualize Spatial Histologic Heterogeneity and Tumor Extent in Glioblastoma.

    PubMed

    Hu, Leland S; Ning, Shuluo; Eschbacher, Jennifer M; Gaw, Nathan; Dueck, Amylou C; Smith, Kris A; Nakaji, Peter; Plasencia, Jonathan; Ranjbar, Sara; Price, Stephen J; Tran, Nhan; Loftus, Joseph; Jenkins, Robert; O'Neill, Brian P; Elmquist, William; Baxter, Leslie C; Gao, Fei; Frakes, David; Karis, John P; Zwart, Christine; Swanson, Kristin R; Sarkaria, Jann; Wu, Teresa; Mitchell, J Ross; Li, Jing

    2015-01-01

    Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM. We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region of interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set. We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients). Multi-parametric MRI and texture analysis can help characterize and visualize GBM's spatial histologic heterogeneity to identify regional tumor-rich biopsy targets.

  4. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.

  5. Application of grey-fuzzy approach in parametric optimization of EDM process in machining of MDN 300 steel

    NASA Astrophysics Data System (ADS)

    Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.

    2018-01-01

    Maraging steel (MDN 300) finds application in many industries, as it exhibits high hardness and is a very difficult-to-machine material. Electro-discharge machining (EDM) is a widely used machining process suitable for such materials, and optimization of its response parameters is essential for effective machining. Past researchers have used the Taguchi method to obtain the optimal responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR), considering discharge current, pulse-on time, pulse-off time, arc gap, and duty cycle as process parameters. In this paper, grey relational analysis (GRA) with fuzzy logic is applied to this multi-objective optimization problem, and the responses are checked by implementing the derived parametric setting. The parametric setting derived by the proposed method results in better responses than those reported by past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS), whose predictions likewise show a significant improvement over the results of past researchers.
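    The grey relational analysis step can be sketched compactly: normalize each response (larger-the-better or smaller-the-better), compute grey relational coefficients against the ideal sequence, and average them into a grade per experimental run. This is plain GRA with toy EDM numbers; the paper's fuzzy-logic layer on top of it is omitted.

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    """Grey relational analysis over experimental runs.

    `responses[i][j]` is response j for run i (e.g., MRR larger-the-better;
    TWR, SR smaller-the-better). Returns one grey relational grade per run;
    the run with the highest grade is the preferred parametric setting.
    `zeta` is the usual distinguishing coefficient.
    """
    n_runs, n_resp = len(responses), len(responses[0])
    # Normalize each response column to [0, 1], flipping smaller-the-better.
    norm = [[0.0] * n_resp for _ in range(n_runs)]
    for j in range(n_resp):
        col = [responses[i][j] for i in range(n_runs)]
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        for i in range(n_runs):
            x = (col[i] - lo) / span
            norm[i][j] = x if larger_better[j] else 1.0 - x
    # Grey relational coefficients against the ideal sequence (all ones);
    # after min-max normalization the global deviations span [0, 1].
    grades = []
    for i in range(n_runs):
        coeffs = [(zeta * 1.0) / ((1.0 - norm[i][j]) + zeta * 1.0)
                  for j in range(n_resp)]
        grades.append(sum(coeffs) / n_resp)
    return grades

# Toy EDM data: columns are (MRR, TWR, SR); run 0 dominates run 2.
runs = [(9.0, 0.10, 1.2), (6.0, 0.20, 1.8), (4.0, 0.30, 2.5)]
grades = grey_relational_grades(runs, larger_better=[True, False, False])
best = grades.index(max(grades))
```

    The grade collapses the multi-objective problem to a single ranking, which is what makes a GRA-based (and, in the paper, fuzzy-refined) comparison of parametric settings possible.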

  6. Multilevel Latent Class Analysis: Parametric and Nonparametric Models

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2014-01-01

    Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…

  7. Ku band low noise parametric amplifier

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A low noise, Ku-band, parametric amplifier (paramp) was developed. The unit is a spacecraft-qualifiable, prototype, parametric amplifier for eventual application in the shuttle orbiter. The amplifier was required to have a noise temperature of less than 150 K; a noise temperature of less than 120 K at a gain level of 17 dB was achieved. A 3-dB bandwidth in excess of 350 MHz was attained, while deviation from phase linearity of about ±1 degree over 50 MHz was achieved. The paramp operates within specification over an ambient temperature range of -5 C to +50 C. The performance requirements and the operation of the Ku-band parametric amplifier system are described. The final test results are also given.

  8. Likert scales, levels of measurement and the "laws" of statistics.

    PubMed

    Norman, Geoff

    2010-12-01

    Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, frequently the use of various parametric methods such as analysis of variance, regression, and correlation is faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments and show that many studies, dating back to the 1930s, consistently demonstrate that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be utilized without concern for "getting the wrong answer".
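
    The robustness claim is easy to probe empirically. A minimal simulation, with an invented skewed 5-point item, checks whether the t-test's type I error stays near the nominal 5% on ordinal Likert data under the null:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two groups answer the same skewed 5-point Likert item (null hypothesis
# true); count how often the t-test falsely rejects at alpha = 0.05.
probs = [0.4, 0.3, 0.15, 0.1, 0.05]  # hypothetical skewed response distribution
reject = 0
n_sim = 2000
for _ in range(n_sim):
    a = rng.choice([1, 2, 3, 4, 5], size=30, p=probs)
    b = rng.choice([1, 2, 3, 4, 5], size=30, p=probs)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        reject += 1
error_rate = reject / n_sim  # should stay near the nominal 0.05
```

An empirical rate close to 0.05 despite the ordinal, skewed data is exactly the kind of robustness the paper describes.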

  9. Z_n Baxter model: symmetries and the Belavin parametrization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richey, M.P.; Tracy, C.A.

    1986-02-01

    The Z_n Baxter model is an exactly solvable lattice model in the special case of the Belavin parametrization. For this parametrization, the authors calculate the partition function in an antiferromagnetic region and the order parameter in a ferromagnetic region. They find that the order parameter is expressible in terms of a modular function of level n, which for n=2 reduces to the Onsager-Yang-Baxter result. In addition, they determine the symmetry group of the finite-lattice partition function for the general Z_n Baxter model.

  10. Hydrogen peroxide clusters: the role of open book motif in cage and helical structures.

    PubMed

    Elango, M; Parthasarathi, R; Subramanian, V; Ramachandran, C N; Sathyamurthy, N

    2006-05-18

    Hartree-Fock (HF) calculations using 6-31G*, 6-311++G(d,p), aug-cc-pVDZ, and aug-cc-pVTZ basis sets show that hydrogen peroxide molecular clusters tend to form hydrogen-bonded cyclic and cage structures along the lines expected of a molecule which can act as a proton donor as well as an acceptor. These results are reiterated by density functional theoretic (DFT) calculations with B3LYP parametrization and also by second-order Møller-Plesset perturbation (MP2) theory using 6-31G* and 6-311++G(d,p) basis sets. Trends in stabilization energies and geometrical parameters obtained at the HF level using 6-311++G(d,p), aug-cc-pVDZ, and aug-cc-pVTZ basis sets are similar to those obtained from HF/6-31G* calculation. In addition, the HF calculations suggest the formation of stable helical structures for larger clusters, provided the neighbors form an open book structure.

  11. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and achieving lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk-selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.

  12. Parametric Modeling as a Technology of Rapid Prototyping in Light Industry

    NASA Astrophysics Data System (ADS)

    Tomilov, I. N.; Grudinin, S. N.; Frolovsky, V. D.; Alexandrov, A. A.

    2016-04-01

    The paper deals with a parametric modeling method for virtual mannequins for the purposes of design automation in the clothing industry. The described approach includes the steps of generating the basic model from the initial one (obtained by 3D scanning), its parameterization, and its deformation. The complex surfaces are represented by a wireframe model. The modeling results are evaluated with a set of similarity factors: deformed models are compared with their virtual prototypes, and the results are estimated by the standard deviation factor.

  13. Model selection criterion in survival analysis

    NASA Astrophysics Data System (ADS)

    Karabey, Uǧur; Tutkun, Nihal Ata

    2017-07-01

    Survival analysis deals with time until the occurrence of an event of interest such as death, recurrence of an illness, failure of equipment, or divorce. There are various survival models, with semi-parametric or parametric approaches, used in the medical, natural, and social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion and the Bayesian information criterion are used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
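
    A minimal sketch of AIC-based selection between parametric survival models (data simulated and uncensored for simplicity; the study's real data set is not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical uncensored survival times, drawn from a Weibull distribution.
t = rng.weibull(2.0, size=200) * 10.0

def aic(loglik, k):
    # AIC = 2k - 2 ln L; smaller is better.
    return 2 * k - 2 * loglik

# Exponential model (1 parameter): the MLE of the rate is 1 / mean(t).
lam = 1.0 / t.mean()
ll_exp = np.sum(np.log(lam) - lam * t)

# Weibull model (2 parameters) fitted by MLE, location fixed at 0.
c, loc, scale = stats.weibull_min.fit(t, floc=0)
ll_wei = np.sum(stats.weibull_min.logpdf(t, c, loc=loc, scale=scale))

aic_exp, aic_wei = aic(ll_exp, 1), aic(ll_wei, 2)
# For these data the richer Weibull model should win on AIC despite its
# extra parameter.
```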

  14. Characteristics of stereo reproduction with parametric loudspeakers

    NASA Astrophysics Data System (ADS)

    Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa

    2012-05-01

    A parametric loudspeaker utilizes the nonlinearity of a medium and is known as a super-directivity loudspeaker; it is one of the prominent applications of nonlinear ultrasonics. So far, applications have been limited to monaural reproduction sound systems for public address in museums, stations, streets, etc. In this paper, we discuss characteristics of stereo reproduction with two parametric loudspeakers by comparing them with those of two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization in a wide listening area. The binaural information was ILD (interaural level difference) or ITD (interaural time delay). The parametric loudspeaker was an equilateral hexagon with inner and outer diameters of 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz, and 4 kHz pure tones and pink noise. Three young males listened to the test signals 10 times in each listening condition. The results showed that listeners at the three typical listening positions perceived correct sound localization of all signals using the parametric loudspeakers, almost as with the ordinary dynamic loudspeakers, except in the case of sinusoidal waves with ITD. It was determined that the parametric loudspeaker could exclude the contradiction between the binaural cues ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because the super-directivity of the parametric loudspeaker suppressed the crosstalk components.

  15. Parametric Methods for Dynamic 11C-Phenytoin PET Studies.

    PubMed

    Mansor, Syahir; Yaqub, Maqsood; Boellaard, Ronald; Froklage, Femke E; de Vries, Anke; Bakker, Esther D M; Voskuyl, Rob A; Eriksson, Jonas; Schwarte, Lothar A; Verbeek, Joost; Windhorst, Albert D; Lammertsma, Adriaan A

    2017-03-01

    In this study, the performance of various methods for generating quantitative parametric images of dynamic 11C-phenytoin PET studies was evaluated. Methods: Double-baseline 60-min dynamic 11C-phenytoin PET studies, including online arterial sampling, were acquired for 6 healthy subjects. Parametric images were generated using Logan plot analysis, a basis function method, and spectral analysis. Parametric distribution volume (VT) and influx rate (K1) were compared with those obtained from nonlinear regression analysis of time-activity curves. In addition, global and regional test-retest (TRT) variability was determined for parametric K1 and VT values. Results: Biases in VT observed with all parametric methods were less than 5%. For K1, spectral analysis showed a negative bias of 16%. The mean TRT variabilities of VT and K1 were less than 10% for all methods. Shortening the scan duration to 45 min provided similar VT and K1 with comparable TRT performance compared with 60-min data. Conclusion: Among the various parametric methods tested, the basis function method provided parametric VT and K1 values with the least bias compared with nonlinear regression data and showed TRT variabilities lower than 5%, also for smaller volume-of-interest sizes (i.e., higher noise levels) and shorter scan durations. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
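
    Logan plot analysis, one of the parametric methods compared, can be sketched on simulated data. The kinetic constants and input function below are invented, not taken from the study; for a 1-tissue compartment model the late-time slope of the Logan plot recovers VT = K1/k2:

```python
import numpy as np

# Simulate a 1-tissue compartment model, then recover VT from the Logan plot.
K1, k2 = 0.1, 0.05                 # 1/min, so the true VT = K1/k2 = 2.0
dt = 0.1
t = np.arange(0.0, 90.0, dt)       # minutes
Cp = np.exp(-0.05 * t)             # hypothetical plasma input function

# Tissue curve: C_T(t) = K1 * (Cp convolved with exp(-k2 t)).
Ct = K1 * dt * np.convolve(Cp, np.exp(-k2 * t))[: len(t)]

int_Cp = np.cumsum(Cp) * dt        # running integrals of both curves
int_Ct = np.cumsum(Ct) * dt

# Logan plot: int_Ct/Ct versus int_Cp/Ct becomes linear at late times,
# with slope VT.
late = t > 40
x = int_Cp[late] / Ct[late]
y = int_Ct[late] / Ct[late]
vt_est = np.polyfit(x, y, 1)[0]    # should be close to 2.0
```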

  16. A generalized exponential link function to map a conflict indicator into severity index within safety continuum framework.

    PubMed

    Zheng, Lai; Ismail, Karim

    2017-05-01

    Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users and can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map a conflict indicator to a severity index. However, little information is available to guide the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model, which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate the model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set, providing statistical support for the generalized exponential link function. With the determined link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
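
    A plausible shape for such a link is sketched below. This is an illustrative guess at the functional form (exponential decay in the conflict indicator, with a shape parameter θ), not the paper's exact equation, and the scale constant is invented:

```python
import numpy as np

# Map a conflict indicator (e.g. time-to-collision, in seconds) to a
# severity index in (0, 1]: smaller TTC means higher severity, and theta
# controls how fast severity decays as the indicator grows.
def severity_index(ttc, theta=1.2, scale=1.5):
    return np.exp(-((ttc / scale) ** theta))

ttc = np.array([0.5, 1.5, 3.0])  # hypothetical conflict indicator values
si = severity_index(ttc)         # monotonically decreasing in TTC
```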

  17. Parametric and non-parametric masking of randomness in sequence alignments can be improved and leads to better resolved trees.

    PubMed

    Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard

    2010-03-31

    Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstruction, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well defined methods to identify randomness in sequence alignments has prevented the routine application of alignment masking. In this study, we compared the effects on tree reconstruction of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold whose choice is left arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Tested on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding-window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap-parameter setting performed best on all data sets; correspondingly, Neighbor-Net analyses showed the largest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can easily be extended to more complex likelihood-based models of sequence evolution, which opens the possibility of further improvements.
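
    The sliding-window idea behind such profiling can be sketched with a toy match/mismatch score. This is a simplification for illustration only, not the ALISCORE algorithm or its Monte Carlo resampling:

```python
# Score pairwise similarity in overlapping windows along an alignment:
# +1 per matching column, -1 per mismatching column, averaged per window.
# Windows scoring near -1 would be candidates for masking.
def window_scores(seq_a, seq_b, win=5):
    assert len(seq_a) == len(seq_b)
    scores = []
    for i in range(len(seq_a) - win + 1):
        s = sum(1 if a == b else -1
                for a, b in zip(seq_a[i:i + win], seq_b[i:i + win]))
        scores.append(s / win)  # +1.0 = identical window, -1.0 = fully dissimilar
    return scores

scores = window_scores("ACGTACGTAA", "ACGTTCGTAC")
```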

  18. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
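
    One reason square-root OLS can perform well on bias is worth making concrete. In the toy example below (data and coefficients invented), naively squaring predictions from an OLS fit on sqrt(y) underestimates the mean, while adding back the residual variance removes the in-sample aggregate bias exactly, by OLS orthogonality:

```python
import numpy as np

rng = np.random.default_rng(3)

# Heavy-tailed "cost" outcome: log-normal, as a stand-in for healthcare costs.
n = 5000
x = rng.normal(size=n)
y = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.8, size=n))

# OLS on the square-root-transformed outcome, then back-transformation.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, np.sqrt(y), rcond=None)
pred_naive = (X @ beta) ** 2

# Naive squaring ignores E[sqrt(Y)]^2 != E[Y]; adding the residual
# variance back in corrects the aggregate bias.
resid_var = np.mean((np.sqrt(y) - X @ beta) ** 2)
pred_corrected = (X @ beta) ** 2 + resid_var

bias_naive = np.mean(pred_naive) - np.mean(y)        # strictly negative
bias_corrected = np.mean(pred_corrected) - np.mean(y)  # ~0 in-sample
```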

  19. DNN-state identification of 2D distributed parameter systems

    NASA Astrophysics Data System (ADS)

    Chairez, I.; Fuentes, R.; Poznyak, A.; Poznyak, T.; Escudero, M.; Viana, L.

    2012-02-01

    Many problems in science and engineering are reduced to a set of partial differential equations (PDEs) through a process of mathematical modelling. Nevertheless, there exist many sources of uncertainty around such mathematical representations. Moreover, finding exact solutions of those PDEs is not a trivial task, especially if the PDE is described in two or more dimensions. It is well known that neural networks can approximate a large set of continuous functions defined on a compact set to arbitrary accuracy. In this article, a strategy based on the differential neural network (DNN) for the non-parametric identification of a mathematical model described by a class of two-dimensional (2D) PDEs is proposed. The adaptive laws for the weights ensure the 'practical stability' of the DNN trajectories relative to the parabolic 2D-PDE states. To verify the qualitative behaviour of the suggested methodology, a non-parametric modelling problem for a distributed parameter plant is analysed.

  20. Comparison of four approaches to a rock facies classification problem

    USGS Publications Warehouse

    Dubois, M.K.; Bohling, Geoffrey C.; Chakrabarti, S.

    2007-01-01

    In this study, seven classifiers based on four different approaches were tested in a rock facies classification problem: classical parametric methods using Bayes' rule, and non-parametric methods using fuzzy logic, k-nearest neighbor, and a feed-forward, back-propagating artificial neural network. The objective was to determine the most effective classifier for geologic facies prediction in wells without cores in the Panoma gas field in southwest Kansas. Study data include 3600 samples with known rock facies class (from core), each sample having either four or five measured properties (wire-line log curves) and two derived geologic properties (geologic constraining variables). The sample set was divided into two subsets, one for training and one for testing the ability of the trained classifier to correctly assign classes. Artificial neural networks clearly outperformed all other classifiers and are effective tools for this particular classification problem. Classical parametric models were inadequate due to the nature of the predictor variables (high dimensional and not linearly correlated) and the feature space of the classes (overlapping). The other non-parametric methods tested, k-nearest neighbor and fuzzy logic, would need considerable improvement to match the neural network's effectiveness, but further work, possibly combining certain aspects of the three non-parametric methods, may be justified. © 2006 Elsevier Ltd. All rights reserved.

  1. Direct reconstruction of parametric images for brain PET with event-by-event motion correction: evaluation in two tracers across count levels

    NASA Astrophysics Data System (ADS)

    Germino, Mary; Gallezot, Jean-Dominique; Yan, Jianhua; Carson, Richard E.

    2017-07-01

    Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images and then fitting a kinetic model to each voxel time-activity curve. Alternatively, ‘direct reconstruction’ incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with the tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C]UCB-J dataset. Direct reconstruction of dynamic brain PET with event-by-event motion correction is achievable and dramatically more robust to noise in VT images than the indirect method.

  2. Keeping nurses at work: a duration analysis.

    PubMed

    Holmås, Tor Helge

    2002-09-01

    A shortage of nurses is currently a problem in several countries, and an important question is therefore how one can increase the supply of nursing labour. In this paper, we focus on the issue of nurses leaving the public health sector by utilising a unique data set containing information on both the supply and demand side of the market. To describe the exit rate from the health sector we apply a semi-parametric hazard rate model. In the estimations, we correct for unobserved heterogeneity by both a parametric (Gamma) and a non-parametric approach. We find that both wages and working conditions have an impact on nurses' decision to quit. Furthermore, failing to correct for the fact that nurses' income partly consists of compensation for inconvenient working hours results in a considerable downward bias of the wage effect. Copyright 2002 John Wiley & Sons, Ltd.

  3. Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation.

    PubMed

    Ingalls, Brian; Mincheva, Maya; Roussel, Marc R

    2017-07-01

    A parametric sensitivity analysis for periodic solutions of delay-differential equations is developed. Because phase shifts cause the sensitivity coefficients of a periodic orbit to diverge, we focus on sensitivities of the extrema, from which amplitude sensitivities are computed, and of the period. Delay-differential equations are often used to model gene expression networks. In these models, the parametric sensitivities of a particular genotype define the local geometry of the evolutionary landscape. Thus, sensitivities can be used to investigate directions of gradual evolutionary change. An oscillatory protein synthesis model whose properties are modulated by RNA interference is used as an example. This model consists of a set of coupled delay-differential equations involving three delays. Sensitivity analyses are carried out at several operating points. Comments on the evolutionary implications of the results are offered.

  4. Geometric Model for a Parametric Study of the Blended-Wing-Body Airplane

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne; Smith, Robert E.; Sadrehaghighi, Ideen; Wiese, Michael R.

    1996-01-01

    A parametric model is presented for the blended-wing-body airplane, one concept being proposed for the next generation of large subsonic transports. The model is defined in terms of a small set of parameters which facilitates analysis and optimization during the conceptual design process. The model is generated from a preliminary CAD geometry. From this geometry, airfoil cross sections are cut at selected locations and fitted with analytic curves. The airfoils are then used as boundaries for surfaces defined as the solution of partial differential equations. Both the airfoil curves and the surfaces are generated with free parameters selected to give a good representation of the original geometry. The original surface is compared with the parametric model, and solutions of the Euler equations for compressible flow are computed for both geometries. The parametric model is a good approximation of the CAD model and the computed solutions are qualitatively similar. An optimal NURBS approximation is constructed and can be used by a CAD model for further refinement or modification of the original geometry.

  5. Free response approach in a parametric system

    NASA Astrophysics Data System (ADS)

    Huang, Dishan; Zhang, Yueyue; Shao, Hexi

    2017-07-01

    In this study, a new approach to predicting the free response of a parametric system is investigated. The solution is proposed in the special form of a trigonometric series with an exponentially decaying function of time, based on the concept of frequency splitting. By applying harmonic balance, the parametric vibration equation is transformed into an infinite set of homogeneous linear equations, from which the principal oscillation frequency can be computed and all coefficients of the harmonic components can be obtained. With initial conditions, the arbitrary constants in the general solution can be determined. To analyze computational accuracy and consistency, an approach error function is defined, which is used to assess the computational error of both the proposed approach and the standard numerical approach based on the Runge-Kutta algorithm. Furthermore, an example of a dynamic model of airplane wing flutter on a turbine engine is given to illustrate the applicability of the proposed approach. Numerical solutions show that the proposed approach exhibits high accuracy in its mathematical expression, and it is valuable for theoretical research and engineering applications of parametric systems.
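
    The free response of a parametric system, and the Runge-Kutta baseline mentioned above, can be illustrated with a Mathieu-type equation (coefficients invented, not the wing-flutter model). Inside the principal parametric-resonance tongue the free response grows without bound; safely outside it, the response stays bounded:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Free response of x'' + (delta + eps*cos t) x = 0, integrated with an
# adaptive Runge-Kutta scheme; returns the peak amplitude reached.
def response_amplitude(delta, eps=0.2, t_end=200.0):
    f = lambda t, y: [y[1], -(delta + eps * np.cos(t)) * y[0]]
    sol = solve_ivp(f, (0.0, t_end), [1.0, 0.0],
                    rtol=1e-8, atol=1e-10, max_step=0.1)
    return np.max(np.abs(sol.y[0]))

# delta = 0.25 sits at the centre of the principal instability tongue
# (parametric resonance); delta = 0.6 lies between tongues.
a_unstable = response_amplitude(0.25)
a_stable = response_amplitude(0.6)
```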

  6. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    NASA Astrophysics Data System (ADS)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The regression model comprises several equations, and each has two components, one parametric and one nonparametric. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component, and can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of this regression model to modelling the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, and the predictor variables are the percentage of literate people, the percentage of electrification, and the percentage of economic growth. Based on identification of the relationships between responses and predictors, economic growth is treated as the nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression applies well here, as indicated by the high coefficient of determination of 90 percent.
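
    The truncated polynomial spline component can be sketched as an ordinary least-squares fit with a truncated power basis. The data, knots, and coefficients below are invented toy values, not the socio-economic data set:

```python
import numpy as np

rng = np.random.default_rng(4)

# Semiparametric design: a linear term for the parametric predictor and a
# truncated linear spline basis for the nonparametric one.
n = 300
x_par = rng.normal(size=n)             # parametric predictor
x_np = rng.uniform(0, 10, size=n)      # nonparametric predictor
y = 1.0 + 0.5 * x_par + np.sin(x_np) + rng.normal(scale=0.2, size=n)

knots = np.array([2.5, 5.0, 7.5])
# Truncated power basis: (x - k)_+ for each knot.
trunc = np.maximum(x_np[:, None] - knots[None, :], 0.0)
X = np.column_stack([np.ones(n), x_par, x_np, trunc])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
```

The spline columns let least squares bend the fit at each knot while the `x_par` coefficient keeps its ordinary linear interpretation.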

  7. Modelling and multi-parametric control for delivery of anaesthetic agents.

    PubMed

    Dua, Pinky; Dua, Vivek; Pistikopoulos, Efstratios N

    2010-06-01

    This article presents model predictive controllers (MPCs) and multi-parametric model-based controllers for delivery of anaesthetic agents. The MPC can take into account constraints on drug delivery rates and state of the patient but requires solving an optimization problem at regular time intervals. The multi-parametric controller has all the advantages of the MPC and does not require repetitive solution of optimization problem for its implementation. This is achieved by obtaining the optimal drug delivery rates as a set of explicit functions of the state of the patient. The derivation of the controllers relies on using detailed models of the system. A compartmental model for the delivery of three drugs for anaesthesia is developed. The key feature of this model is that mean arterial pressure, cardiac output and unconsciousness of the patient can be simultaneously regulated. This is achieved by using three drugs: dopamine (DP), sodium nitroprusside (SNP) and isoflurane. A number of dynamic simulation experiments are carried out for the validation of the model. The model is then used for the design of model predictive and multi-parametric controllers, and the performance of the controllers is analyzed.
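
    The contrast between online MPC and its multi-parametric (explicit) counterpart can be sketched as a region lookup. The polyhedral regions and affine gains below are made up for illustration, not derived from the anaesthesia model:

```python
import numpy as np

# A multi-parametric controller precomputes, offline, a partition of the
# state space into polyhedral regions {x : A x <= b}, each carrying an
# affine control law u = K x + g. Online work is then just a lookup.
regions = [
    (np.array([[-1.0]]), np.array([0.0]), np.array([[-0.5]]), np.array([0.2])),  # x >= 0
    (np.array([[1.0]]),  np.array([0.0]), np.array([[-0.8]]), np.array([0.0])),  # x <= 0
]

def explicit_control(x):
    # No optimization solved here: find the region, evaluate its affine law.
    for A, b, K, g in regions:
        if np.all(A @ x <= b):
            return K @ x + g
    raise ValueError("state outside the partitioned domain")

u = explicit_control(np.array([1.0]))  # first region's law applies
```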

  8. Nonparametric Simulation of Signal Transduction Networks with Semi-Synchronized Update

    PubMed Central

    Nassiri, Isar; Masoudi-Nejad, Ali; Jalili, Mahdi; Moeini, Ali

    2012-01-01

    Simulating signal transduction in cellular signaling networks provides predictions of network dynamics by quantifying the changes in concentration and activity-level of the individual proteins. Since numerical values of kinetic parameters might be difficult to obtain, it is imperative to develop non-parametric approaches that combine the connectivity of a network with the response of individual proteins to signals which travel through the network. The activity levels of signaling proteins computed through existing non-parametric modeling tools do not show significant correlations with the observed values in experimental results. In this work we developed a non-parametric computational framework to describe the profile of the evolving process and the time course of the proportion of active form of molecules in the signal transduction networks. The model is also capable of incorporating perturbations. The model was validated on four signaling networks showing that it can effectively uncover the activity levels and trends of response during signal transduction process. PMID:22737250

  9. Separation and purification of enzymes by continuous pH-parametric pumping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, S.Y.; Lin, C.K.; Juang, L.Y.

    1985-10-01

    Trypsin and chymotrypsin were separated from porcine pancreas extract by continuous pH-parametric pumping. CHOM (chicken ovomucoid) was covalently bound to laboratory-prepared crab chitin with glutaraldehyde to form an affinity adsorbent for trypsin. The pH levels of the top and bottom feeds were 8.0 and 2.5, respectively. A similar inhibitor, DKOM (duck ovomucoid), and pH levels of 8.0 and 2.0 for the top and bottom feeds, respectively, were used for the separation and purification of chymotrypsin. e-Amino caproyl-D-tryptophan methyl ester was coupled to chitosan to form an affinity adsorbent for stem bromelain; here the pH levels were 8.7 and 3.0. Separation continued fairly well with high yield, e.g., 95% recovery of trypsin after continuous pumping for 10 cycles. Optimum operational conditions for the concentration and purification of these enzymes were investigated. The results showed that continuous pH-parametric pumping coupled with affinity chromatography is effective for the concentration and purification of enzymes.

  10. Parametric decay of an extraordinary electromagnetic wave in relativistic plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorofeenko, V. G.; Krasovitskiy, V. B., E-mail: krasovit@mail.ru; Turikov, V. A.

    2015-03-15

    Parametric instability of an extraordinary electromagnetic wave in plasma preheated to a relativistic temperature is considered. A set of self-similar nonlinear differential equations taking into account the electron “thermal” mass is derived and investigated. Small perturbations of the parameters of the heated plasma are analyzed in the linear approximation by using the dispersion relation determining the phase velocities of the fast and slow extraordinary waves. In contrast to cold plasma, the evanescence zone in the frequency range above the electron upper hybrid frequency vanishes and the asymptotes of both branches converge. Theoretical analysis of the set of nonlinear equations shows that the growth rate of decay instability increases with increasing initial temperature of the plasma electrons. This result is qualitatively confirmed by numerical simulations of plasma heating by a laser pulse injected from vacuum.

  11. Location tests for biomarker studies: a comparison using simulations for the two-sample case.

    PubMed

    Scheinhardt, M O; Ziegler, A

    2013-01-01

    Gene, protein, or metabolite expression levels are often non-normally distributed, heavy-tailed, and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in the case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy-tailed or heavily skewed data, and it is thus to be recommended except in these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity, but when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy-tailed distributions.
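The level of a non-parametric test can itself be checked by simulation, as the abstract describes. A minimal stdlib-only sketch follows, with illustrative sample sizes and a heavy-tailed Cauchy null rather than the cited study's g-and-k design: it estimates the type I error of the two-sample Mann-Whitney U test under its normal approximation.

```python
import math
import random

def u_test_p(x, y):
    """Two-sided p-value of the Mann-Whitney U test, normal approximation."""
    n1, n2 = len(x), len(y)
    u = sum((a > b) + 0.5 * (a == b) for a in x for b in y)
    mu = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sd
    return math.erfc(abs(z) / math.sqrt(2.0))  # = 2 * (1 - Phi(|z|))

def type1_error(n=30, reps=2000, alpha=0.05, seed=1):
    """Rejection rate under H0: both samples from the same Cauchy law."""
    rng = random.Random(seed)
    cauchy = lambda: math.tan(math.pi * (rng.random() - 0.5))
    hits = 0
    for _ in range(reps):
        x = [cauchy() for _ in range(n)]
        y = [cauchy() for _ in range(n)]  # identical distribution: H0 true
        hits += u_test_p(x, y) < alpha
    return hits / reps

rate = type1_error()
```

Under equal variability the rejection rate stays near the nominal 5% even for this very heavy-tailed null, consistent with the robustness of the U-test reported above.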

  12. A Semi-parametric Transformation Frailty Model for Semi-competing Risks Survival Data

    PubMed Central

    Jiang, Fei; Haneuse, Sebastien

    2016-01-01

    In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived and small-sample operating characteristics are evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer. PMID:28439147
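The shared-frailty construction discussed above can be sketched in a few lines. The Gamma frailty with mean 1 and variance σ² is from the abstract; the exponential baseline hazards and their values are hypothetical, chosen only to illustrate how the frailty induces within-unit dependence and how the terminal event censors the non-terminal one.

```python
import random

# Sketch: semi-competing risks with a unit-level shared Gamma frailty.
# The frailty multiplies the (hypothetical, exponential) hazards of both
# the non-terminal and terminal events.

def simulate_unit(rng, sigma2, h_nonterm=0.10, h_term=0.05):
    """Return (observed non-terminal time, terminal time) for one unit."""
    shape = 1.0 / sigma2                       # Gamma with mean 1, var sigma2
    frailty = rng.gammavariate(shape, sigma2)  # shared within the unit
    t1 = rng.expovariate(frailty * h_nonterm)  # non-terminal event time
    t2 = rng.expovariate(frailty * h_term)     # terminal event time
    # Semi-competing risks: the terminal event censors the non-terminal one.
    return (min(t1, t2), t2)

rng = random.Random(42)
pairs = [simulate_unit(rng, sigma2=0.25) for _ in range(20000)]
mean_term = sum(t2 for _, t2 in pairs) / len(pairs)
```

Units with a large frailty experience both events early, which is exactly the correlation that misspecifying the frailty distribution would distort in estimation.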

  13. Modeling river discharge and sediment transport in the Wax Lake-Atchafalaya basin with remote sensing parametrization.

    NASA Astrophysics Data System (ADS)

    Simard, M.; Liu, K.; Denbina, M. W.; Jensen, D.; Rodriguez, E.; Liao, T. H.; Christensen, A.; Jones, C. E.; Twilley, R.; Lamb, M. P.; Thomas, N. A.

    2017-12-01

    Our goal is to estimate the fluxes of water and sediments throughout the Wax Lake-Atchafalaya basin. This was achieved by parametrization of a set of 1D (HEC-RAS) and 2D (DELFT3D) hydrology models with state-of-the-art remote sensing measurements of water surface elevation, water surface slope, and total suspended sediment (TSS) concentrations. The model implementations are spatially explicit, simulating river currents, lateral flows to distributaries and marshes, and spatial variations of sediment concentrations. Three remote sensing instruments were flown simultaneously to collect data over the Wax Lake-Atchafalaya basin, complemented by in situ field data. A Riegl lidar was used to measure water surface elevation and slope, while the UAVSAR L-band radar collected data in repeat-pass interferometric mode to measure water level change within adjacent marshes and islands. These data were collected several times as the tide rose and fell. The AVIRIS-NG instrument measured water surface reflectance spectra, used to estimate TSS. Bathymetry was obtained from sonar transects, and water level changes were recorded by 19 water level pressure transducers. We used several Acoustic Doppler Current Profiler (ADCP) transects to estimate river discharge. The remotely sensed measurements of water surface slope were small (about 1 cm/km) and varied slightly along the channel, especially at the confluence with bayous and the intra-coastal waterway. The slope also underwent significant changes during the tidal cycle. Lateral fluxes to island marshes were mainly observed by UAVSAR close to the distributaries. The extensive remote sensing measurements showed significant disparity with the hydrology model outputs. Observed variations in water surface slopes were unmatched by the model, and tidal wave propagation was much faster than gauge measurements. The slope variations were compensated for in the models by tuning local lateral fluxes, bathymetry, and riverbed friction. Overall, the simpler 1D model could best simulate observed tidal wave propagation and water surface slope. The complexity of the 2D model requires further quantification of parameter sensitivity and improvement of the parametrization routine.

  14. Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.

    PubMed

    Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2015-10-01

    The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level-set-based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.

  15. Deorbitalization strategies for meta-generalized-gradient-approximation exchange-correlation functionals

    NASA Astrophysics Data System (ADS)

    Mejia-Rodriguez, Daniel; Trickey, S. B.

    2017-11-01

    We explore the simplification of widely used meta-generalized-gradient approximation (mGGA) exchange-correlation functionals to the Laplacian level of refinement by use of approximate kinetic-energy density functionals (KEDFs). Such deorbitalization is motivated by the prospect of reducing computational cost while recovering a strictly Kohn-Sham local potential framework (rather than the usual generalized Kohn-Sham treatment of mGGAs). A KEDF that has been rather successful in solid simulations proves to be inadequate for deorbitalization, but we produce other forms which, with parametrization to Kohn-Sham results (not experimental data) on a small training set, yield rather good results on standard molecular test sets when used to deorbitalize the meta-GGA made very simple, Tao-Perdew-Staroverov-Scuseria, and strongly constrained and appropriately normed functionals. We also study the difference between high-fidelity and best-performing deorbitalizations and discuss possible implications for use in ab initio molecular dynamics simulations of complicated condensed phase systems.

  16. Power scaling of supercontinuum seeded megahertz-repetition rate optical parametric chirped pulse amplifiers.

    PubMed

    Riedel, R; Stephanides, A; Prandolini, M J; Gronloh, B; Jungbluth, B; Mans, T; Tavella, F

    2014-03-15

    Optical parametric chirped-pulse amplifiers with high average power are possible with novel high-power Yb:YAG amplifiers with kW-level output powers. We demonstrate a compact wavelength-tunable sub-30-fs amplifier with 11.4 W average power with 20.7% pump-to-signal conversion efficiency. For parametric amplification, a beta-barium borate crystal is pumped by a 140 W, 1 ps Yb:YAG InnoSlab amplifier at 3.25 MHz repetition rate. The broadband seed is generated via supercontinuum generation in a YAG crystal.

  17. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
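The Bayesian machinery behind such a scheme can be sketched with a scalar stand-in: a random-walk Metropolis sampler constraining one "process rate" parameter against noisy observations. The linear toy model, flat prior, and numbers below are hypothetical, not BOSS itself.

```python
import math
import random

# Sketch: Metropolis MCMC constraining a single process-rate parameter
# with a toy linear observation model (hypothetical stand-in for a
# parametrized microphysical process-rate term).

def log_post(rate, obs, times, noise_sd=0.2):
    """Log-posterior: flat prior on rate > 0, Gaussian likelihood."""
    if rate <= 0.0:
        return -math.inf
    ll = 0.0
    for t, y in zip(times, obs):
        pred = rate * t                      # toy process model
        ll -= 0.5 * ((y - pred) / noise_sd) ** 2
    return ll

rng = random.Random(0)
times = [0.5, 1.0, 1.5, 2.0]
true_rate = 1.3
obs = [true_rate * t + rng.gauss(0.0, 0.2) for t in times]

chain, rate = [], 0.5
lp = log_post(rate, obs, times)
for _ in range(5000):
    prop = rate + rng.gauss(0.0, 0.1)        # random-walk proposal
    lp_prop = log_post(prop, obs, times)
    # Metropolis acceptance rule.
    if rng.random() < math.exp(min(0.0, lp_prop - lp)):
        rate, lp = prop, lp_prop
    chain.append(rate)
post_mean = sum(chain[1000:]) / len(chain[1000:])
```

The chain concentrates near the value that generated the observations; in the full scheme the same mechanism samples structural and parametric choices jointly.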

  18. The epistemic and aleatory uncertainties of the ETAS-type models: an application to the Central Italy seismicity.

    PubMed

    Lombardi, A M

    2017-09-18

    Stochastic models provide quantitative evaluations about the occurrence of earthquakes. A basic component of this type of model is the uncertainty in defining the main features of an intrinsically random process. Even if, at a very basic level, any attempt to distinguish between types of uncertainty is questionable, a usual way to deal with this topic is to separate epistemic uncertainty, due to lack of knowledge, from aleatory variability, due to randomness. In the present study this problem is addressed in the narrow context of short-term modeling of earthquakes and, specifically, of ETAS modeling. By means of an application of a specific version of the ETAS model to the seismicity of Central Italy, recently struck by a sequence with a main event of Mw 6.5, the aleatory and epistemic (parametric) uncertainties are separated and quantified. The main result of the paper is that the parametric uncertainty of the ETAS-type model adopted here is much lower than the aleatory variability in the process. This result points out two main aspects: an analyst has a good chance of calibrating an ETAS-type model, but may still retrospectively describe and forecast earthquake occurrences with limited precision and accuracy.

  19. A probabilistic strategy for parametric catastrophe insurance

    NASA Astrophysics Data System (ADS)

    Figueiredo, Rui; Martina, Mario; Stephenson, David; Youngman, Benjamin

    2017-04-01

    Economic losses due to natural hazards have shown an upward trend since 1980, which is expected to continue. Recent years have seen a growing worldwide commitment towards the reduction of disaster losses. This requires effective management of disaster risk at all levels, a part of which involves reducing financial vulnerability to disasters ex-ante, ensuring that necessary resources will be available following such events. One way to achieve this is through risk transfer instruments. These can be based on different types of triggers, which determine the conditions under which payouts are made after an event. This study focuses on parametric triggers, where payouts are determined by the occurrence of an event exceeding specified physical parameters at a given location, at multiple locations, or over a region. This type of product offers a number of important advantages, and its adoption is increasing. The main drawback of parametric triggers is their susceptibility to basis risk, which arises when there is a mismatch between triggered payouts and the occurrence of loss events. This is unavoidable in such programmes, as their calibration is based on models containing a number of different sources of uncertainty. Thus, a deterministic definition of the loss-event-triggering parameters appears flawed; nevertheless, for simplicity, this is how most parametric models tend to be developed. This study therefore presents an innovative probabilistic strategy for parametric catastrophe insurance. It is advantageous as it recognizes uncertainties and minimizes basis risk while maintaining a simple and transparent procedure. A logistic regression model is constructed here to represent the occurrence of loss events based on certain loss index variables, obtained through the transformation of input environmental variables. Flood-related losses due to rainfall are studied. The resulting model is able, for any given day, to issue probabilities of occurrence of loss events. Due to the nature of parametric programmes, it is still necessary to clearly define when a payout is due, and so a decision threshold probability above which a loss event is considered to occur must be set, effectively converting the issued probabilities into deterministic binary outcomes. Model skill and value are evaluated over the range of possible threshold probabilities, with the objective of defining the optimal one. The predictive ability of the model is assessed. In terms of value assessment, a decision model is proposed, allowing users to quantify, in monetary terms, their expected expenses when different combinations of model event triggering and actual event occurrence take place, directly tackling the problem of basis risk.
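The trigger logic described above can be sketched as follows. The logistic coefficients, the loss-index scale, and the decision threshold are hypothetical placeholders, not the study's calibrated model; the point is only how an issued probability becomes a deterministic binary payout.

```python
import math

# Sketch: logistic model mapping a daily loss-index value to a loss-event
# probability, plus a decision threshold converting that probability into
# a binary payout trigger. All coefficients are hypothetical.

def event_probability(loss_index, intercept=-4.0, slope=0.08):
    """P(loss event) for a daily loss index, via logistic regression."""
    z = intercept + slope * loss_index
    return 1.0 / (1.0 + math.exp(-z))

def payout_triggered(loss_index, threshold=0.5):
    """Deterministic payout decision from the issued probability."""
    return event_probability(loss_index) >= threshold

p_low = event_probability(20.0)    # quiet day: low probability
p_high = event_probability(80.0)   # extreme day: high probability
```

Sweeping `threshold` over (0, 1) and scoring the resulting binary decisions against observed loss events is exactly the skill/value evaluation the abstract describes.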

  20. Protein Logic: A Statistical Mechanical Study of Signal Integration at the Single-Molecule Level

    PubMed Central

    de Ronde, Wiet; Rein ten Wolde, Pieter; Mugler, Andrew

    2012-01-01

    Information processing and decision-making is based upon logic operations, which in cellular networks has been well characterized at the level of transcription. In recent years, however, both experimentalists and theorists have begun to appreciate that cellular decision-making can also be performed at the level of a single protein, giving rise to the notion of protein logic. Here we systematically explore protein logic using a well-known statistical mechanical model. As an example system, we focus on receptors that bind either one or two ligands, and their associated dimers. Notably, we find that a single heterodimer can realize any of the 16 possible logic gates, including the XOR gate, by variation of biochemical parameters. We then introduce what to our knowledge is a novel idea: that a set of receptors with fixed parameters can encode functionally unique logic gates simply by forming different dimeric combinations. An exhaustive search reveals that the simplest set of receptors (two single-ligand receptors and one double-ligand receptor) can realize several different groups of three unique gates, a result for which the parametric analysis of single receptors and dimers provides a clear interpretation. Both results underscore the surprising functional freedom readily available to cells at the single-protein level. PMID:23009860
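The statistical-mechanical picture can be made concrete with a small sketch: receptor activity as a Boltzmann-weighted average over ligand-occupancy states. The binding weights and the choice of which states are active are hypothetical parameter choices that happen to realize the XOR gate highlighted in the abstract; they are not the paper's fitted values.

```python
# Sketch: a two-ligand receptor's activity as a weighted average over its
# four occupancy states (s1, s2). State weights follow a simple binding
# polynomial; parameters below are hypothetical and chosen to yield XOR.

def activity(c1, c2, K1=1.0, K2=1.0, active_states=((0, 1), (1, 0))):
    """Mean activity over occupancy states, weight (K1*c1)^s1 * (K2*c2)^s2."""
    weights = {(s1, s2): (K1 * c1) ** s1 * (K2 * c2) ** s2
               for s1 in (0, 1) for s2 in (0, 1)}
    z = sum(weights.values())  # binding polynomial (partition function)
    return sum(weights[s] for s in active_states) / z

LO, HI = 1e-3, 1e3  # ligand absent / saturating, in units of 1/K

# Threshold the activity at 0.5 to read out a logic value per input pair.
truth_table = {(a, b): activity(HI if a else LO, HI if b else LO) > 0.5
               for a in (0, 1) for b in (0, 1)}
```

Because the doubly occupied state dominates when both ligands are present but is inactive here, the output is high only for exactly one ligand: the XOR gate. Changing `active_states` reassigns which occupancy states signal, reproducing other gates.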

  1. [The short-term effects of air pollution on mortality. The results of the EMECAM project in the city of Vigo, 1991-94. Estudio Multicéntrico Español sobre la Relación entre la Contaminación Atmosférica y la Mortalidad].

    PubMed

    Taracido Trunk, M; Figueiras, A; Castro Lareo, I

    1999-01-01

    In the Autonomous Region of Galicia, no study has been made of the impact of air pollution on human health, despite the fact that several of its major cities have moderate levels of pollution. We therefore considered it necessary to carry out such a study in the city of Vigo. The main objective is to analyze the short-term impact of air pollution on the daily death rate from all causes in the city of Vigo throughout the 1991-1994 period, using the analysis procedure set out as part of the EMECAM Project. The daily fluctuations in the number of deaths from all causes, with the exception of external ones, are related to the daily fluctuations of sulfur dioxide and particles using Poisson regression models. A non-parametric model is also used in order to better control for confounding variables. Using the Poisson regression model, no significant relationships were found between the pollutants and the death rate. In the non-parametric model, a relationship was found between the concentration of particles on the day immediately prior to the date of death and the death rate, an effect which remains unchanged upon including the autoregressive terms. Particle-based air pollution is a health risk despite the average levels of this pollutant falling within the air quality guideline levels in the city of Vigo.

  2. Analytical study of nozzle performance for nuclear thermal rockets

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth O.; Kacynski, Kenneth J.

    1991-01-01

    A parametric study has been conducted by the NASA-Lewis Rocket Engine Design Expert System for the convergent-divergent nozzle of the Nuclear Thermal Rocket system, which uses a nuclear reactor to heat hydrogen to high temperature and then expands it through the nozzle. The study establishes that finite-rate chemical reactions lower performance levels from theoretical levels. Major parametric roles are played by chamber temperature and chamber pressure. A maximum performance of 930 sec is projected at 2700 K, and of 1030 sec at 3100 K.

  3. Phase matched parametric amplification via four-wave mixing in optical microfibers.

    PubMed

    Abdul Khudus, Muhammad I M; De Lucia, Francesco; Corbari, Costantino; Lee, Timothy; Horak, Peter; Sazio, Pier; Brambilla, Gilberto

    2016-02-15

    Four-wave mixing (FWM) based parametric amplification in optical microfibers (OMFs) is demonstrated over a wavelength range of over 1000 nm by exploiting their tailorable dispersion characteristics to achieve phase matching. Simulations indicate that for any set of wavelengths satisfying the FWM energy conservation condition there are two diameters at which phase matching in the fundamental mode can occur. Experiments with a high-power pulsed source working in conjunction with a periodically poled silica fiber (PPSF), producing both fundamental and second harmonic signals, are undertaken to investigate the possibility of FWM parametric amplification in OMFs. Large increases of idler output power at the third harmonic wavelength were recorded for diameters close to the two phase matching diameters. A total amplification of more than 25 dB from the initial signal was observed in a 6 mm long optical microfiber, after accounting for the thermal drift of the PPSF and other losses in the system.
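The FWM energy-conservation condition invoked above has a simple closed form: for a degenerate pump, 2/λ_pump = 1/λ_signal + 1/λ_idler. One plausible reading of the experiment, with the second harmonic (532 nm) as degenerate pump and the fundamental (1064 nm) as signal, puts the idler at the third-harmonic wavelength; the specific wavelengths below are illustrative.

```python
# Energy conservation for degenerate-pump four-wave mixing:
#   2 / lambda_pump = 1 / lambda_signal + 1 / lambda_idler

def idler_wavelength(pump_nm, signal_nm):
    """Idler wavelength (nm) from FWM energy conservation, degenerate pump."""
    inv_idler = 2.0 / pump_nm - 1.0 / signal_nm
    return 1.0 / inv_idler

# Second harmonic (532 nm) as pump, fundamental (1064 nm) as signal:
idler = idler_wavelength(532.0, 1064.0)   # lands at the third harmonic
```

Phase matching (momentum conservation) is the separate condition the microfiber diameter is tuned to satisfy; energy conservation alone fixes only the idler wavelength.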

  4. Latest astronomical constraints on some non-linear parametric dark energy models

    NASA Astrophysics Data System (ADS)

    Yang, Weiqiang; Pan, Supriya; Paliathanasis, Andronikos

    2018-04-01

    We consider non-linear redshift-dependent equation of state parameters as dark energy models in a spatially flat Friedmann-Lemaître-Robertson-Walker universe. To depict the expansion history of the universe in such cosmological scenarios, we take into account the large-scale behaviour of such parametric models and fit them using a set of the latest observational data of distinct origins, including cosmic microwave background radiation, Type Ia supernovae, baryon acoustic oscillations, redshift space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and finally the local Hubble constant from the Hubble Space Telescope. The fitting technique uses the publicly available code Cosmological Monte Carlo (COSMOMC) to extract the cosmological information from these parametric dark energy models. From our analysis, it follows that these models could describe the late-time accelerating phase of the universe, while remaining distinguishable from Λ-cosmology.
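A parametric w(z) enters the expansion history through the dark-energy density integral, ρ_DE(z)/ρ_DE(0) = exp(3 ∫₀ᶻ (1+w(z'))/(1+z') dz'). The sketch below uses a commonly cited CPL-like form of w(z) with illustrative parameter values, not the paper's specific non-linear models; any other w(z) can be dropped into the same numerical integral.

```python
import math

# Sketch: dark-energy density ratio and Hubble ratio E(z) = H(z)/H0 for a
# flat universe with a parametric equation of state w(z). The CPL-like
# w(z) and all numbers are illustrative.

def w(z, w0=-0.9, wa=0.2):
    """Hypothetical equation-of-state parametrization."""
    return w0 + wa * z / (1.0 + z)

def de_density_ratio(z, n=2000):
    """rho_DE(z)/rho_DE(0) by trapezoidal integration of (1+w)/(1+z')."""
    if z == 0.0:
        return 1.0
    h = z / n
    g = lambda zp: (1.0 + w(zp)) / (1.0 + zp)
    s = 0.5 * (g(0.0) + g(z)) + sum(g(i * h) for i in range(1, n))
    return math.exp(3.0 * h * s)

def hubble_ratio(z, om=0.3):
    """E(z) for a spatially flat universe: matter plus dark energy."""
    return math.sqrt(om * (1.0 + z) ** 3 + (1.0 - om) * de_density_ratio(z))

e1 = hubble_ratio(1.0)
```

For this particular w(z) the integral also has a closed form, (1+z)^{3(1+w0+wa)} exp(-3 wa z/(1+z)), which makes a convenient cross-check of the quadrature.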

  5. Integrated modeling for parametric evaluation of smart x-ray optics

    NASA Astrophysics Data System (ADS)

    Dell'Agostino, S.; Riva, M.; Spiga, D.; Basso, S.; Civitani, Marta

    2014-08-01

    This work is developed in the framework of the AXYOM project, which proposes to study the application of a system of piezoelectric actuators to grazing-incidence X-ray telescope optic prototypes (thin glass or plastic foils) in order to increase their angular resolution. An integrated optomechanical model has been set up to evaluate the performance of X-ray optics under deformation induced by piezoelectric actuators. A parametric evaluation has been carried out over different numbers and positions of actuators to optimize the outcome. Different actuator types have also been evaluated, considering flexible piezoceramic actuators, multi-fiber composite piezo actuators, and PVDF.

  6. Parametric design of tri-axial nested Helmholtz coils

    NASA Astrophysics Data System (ADS)

    Abbott, Jake J.

    2015-05-01

    This paper provides an optimal parametric design for tri-axial nested Helmholtz coils, which are used to generate a uniform magnetic field with controllable magnitude and direction. Circular and square coils, both with square cross section, are considered. Practical considerations such as wire selection, wire-wrapping efficiency, wire bending radius, choice of power supply, and inductance and time response are included. Using the equations provided, a designer can quickly create an optimal set of custom coils to generate a specified field magnitude in the uniform-field region while maintaining specified accessibility to the central workspace. An example case study is included.
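For context, the textbook relation underlying such a design: at the centre of an ideal Helmholtz pair (coil spacing equal to the coil radius R), the axial field is B = (4/5)^(3/2) · μ₀NI/R. The turn count, current, and radius below are illustrative, not from the paper.

```python
import math

# Field at the centre of an ideal Helmholtz pair:
#   B = (4/5)^(3/2) * mu0 * N * I / R

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def helmholtz_center_field(turns, current_a, radius_m):
    """Axial field (tesla) at the centre of an ideal Helmholtz pair."""
    return (4.0 / 5.0) ** 1.5 * MU0 * turns * current_a / radius_m

# Illustrative design point: 100 turns per coil, 1 A, 10 cm radius.
b = helmholtz_center_field(100, 1.0, 0.10)   # roughly 0.9 mT
```

Inverting this relation for N·I given a target field and radius is the starting point of the parametric design; the paper's contribution is layering the practical constraints (wire gauge, inductance, power supply) on top of it, and nesting three such pairs.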

  7. Parametric design of tri-axial nested Helmholtz coils.

    PubMed

    Abbott, Jake J

    2015-05-01

    This paper provides an optimal parametric design for tri-axial nested Helmholtz coils, which are used to generate a uniform magnetic field with controllable magnitude and direction. Circular and square coils, both with square cross section, are considered. Practical considerations such as wire selection, wire-wrapping efficiency, wire bending radius, choice of power supply, and inductance and time response are included. Using the equations provided, a designer can quickly create an optimal set of custom coils to generate a specified field magnitude in the uniform-field region while maintaining specified accessibility to the central workspace. An example case study is included.

  8. Parametric design of tri-axial nested Helmholtz coils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, Jake J., E-mail: jake.abbott@utah.edu

    This paper provides an optimal parametric design for tri-axial nested Helmholtz coils, which are used to generate a uniform magnetic field with controllable magnitude and direction. Circular and square coils, both with square cross section, are considered. Practical considerations such as wire selection, wire-wrapping efficiency, wire bending radius, choice of power supply, and inductance and time response are included. Using the equations provided, a designer can quickly create an optimal set of custom coils to generate a specified field magnitude in the uniform-field region while maintaining specified accessibility to the central workspace. An example case study is included.

  9. Feature selection and classification of multiparametric medical images using bagging and SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Resnick, Susan M.; Davatzikos, Christos

    2008-03-01

    This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction, using a regional feature extraction method that takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of the parameters involved, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework to build an ensemble classifier. The classification parameters of these base classifiers are optimized by maximizing the area under the ROC (receiver operating characteristic) curve, estimated from their prediction performance on the left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
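The bagging-plus-AUC recipe can be sketched with a trivial nearest-centroid learner standing in for the SVM, on synthetic 2-D data: bootstrap resamples train the base learners, their scores are averaged, and the AUC is computed via the rank (Mann-Whitney) statistic. All of this is an illustrative stand-in, not the paper's pipeline.

```python
import random

# Sketch: bagging an ensemble of simple base learners and scoring the
# ensemble by ROC AUC. A nearest-centroid classifier stands in for the
# SVM; the 2-D Gaussian data are synthetic.

def centroid_classifier(train):
    """Fit per-class feature centroids; score = squared-distance margin."""
    cents = {}
    for label in (0, 1):
        pts = [x for x, y in train if y == label]
        cents[label] = [sum(c) / len(pts) for c in zip(*pts)]
    def score(x):  # higher = more like class 1
        d = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return d(cents[0]) - d(cents[1])
    return score

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

rng = random.Random(0)
data = [([rng.gauss(y, 1.0), rng.gauss(y, 1.0)], y)
        for y in (0, 1) for _ in range(100)]

# Bagging: train each base learner on a bootstrap resample, average scores.
scorers = []
for _ in range(25):
    boot = [data[rng.randrange(len(data))] for _ in data]
    scorers.append(centroid_classifier(boot))
scores = [sum(s(x) for s in scorers) / len(scorers) for x, _ in data]
ensemble_auc = auc(scores, [y for _, y in data])
```

In the paper's setting the AUC is estimated on the out-of-bag (left-out) samples and used to tune the base classifiers' parameters; here it simply scores the averaged ensemble.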

  10. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    NASA Astrophysics Data System (ADS)

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-10-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  11. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: dispersion, induction, and basis set superposition error.

    PubMed

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T; Dannenberg, J J

    2012-10-07

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  12. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    PubMed Central

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-01-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states. PMID:23039587

  13. Impact of state updating and multi-parametric ensemble for streamflow hindcasting in European river basins

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.

    2015-12-01

    Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to the practical difficulty of specifying modeling uncertainty with limited ensemble members. However, if parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty spanned by the model ensemble may be insufficient to capture the dynamics of the observations, which degrades predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method, incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm), can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to account for the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach is more stable with limited ensemble members and has potential for operational use. To account for the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation focuses on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
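    The abstract does not spell out the assimilation step. As a rough sketch of the bootstrap flavour of particle filtering it refers to, applied to a hypothetical one-variable toy storage model (not mHM's actual implementation), one weighting-and-resampling update might look like:

```python
import numpy as np

def particle_filter_step(states, obs, h, obs_err, rng):
    """One bootstrap-filter assimilation step: weight particles by the
    likelihood of the observation, then resample to avoid degeneracy."""
    sim = np.array([h(s) for s in states])           # simulated streamflow
    w = np.exp(-0.5 * ((sim - obs) / obs_err) ** 2)  # Gaussian likelihood
    w /= w.sum()
    # Systematic resampling: low-variance selection of particle indices
    positions = (rng.random() + np.arange(len(states))) / len(states)
    idx = np.searchsorted(np.cumsum(w), positions)
    return states[idx]

rng = np.random.default_rng(0)
ens = rng.normal(10.0, 2.0, size=(500, 1))   # prior ensemble of a toy storage state
post = particle_filter_step(ens, obs=12.0, h=lambda s: s[0],
                            obs_err=0.5, rng=rng)
```

    The resampled ensemble concentrates near the observation; a lagged variant would additionally carry each particle's recent state history through the update.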

  14. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    PubMed

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.

  15. An Efficient Non-iterative Bulk Parametrization of Surface Fluxes for Stable Atmospheric Conditions Over Polar Sea-Ice

    NASA Astrophysics Data System (ADS)

    Gryanik, Vladimir M.; Lüpkes, Christof

    2018-02-01

    In climate and weather prediction models, the near-surface turbulent fluxes of heat and momentum and the related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid the iteration required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA), which delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as a function of Rib and the roughness lengths for momentum and heat. It is shown that the new coefficients reproduce those obtained by the numerical iterative method with good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than the currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the newly proposed parametrizations in models are discussed.
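    As an illustration of the iterative MOST solution that the semi-analytic approach is designed to avoid, the sketch below solves for the stability parameter zeta = z/L by fixed-point iteration from the bulk Richardson number, then evaluates the bulk transfer coefficients. The simple log-linear stability function psi = -5*zeta and the roughness values are assumptions for illustration only; the paper uses SHEBA-based functions.

```python
import math

KAPPA = 0.4  # von Karman constant

def psi(zeta):
    """Log-linear stability correction (illustrative; not the SHEBA form)."""
    return -5.0 * zeta

def transfer_coefficients(rib, z=10.0, z0=1e-3, z0h=1e-4, n_iter=50):
    """Fixed-point iteration for zeta = z/L given the bulk Richardson
    number Rib, followed by the bulk transfer coefficients C_D and C_H."""
    lnm, lnh = math.log(z / z0), math.log(z / z0h)
    zeta = 0.0  # neutral first guess
    for _ in range(n_iter):
        zeta = rib * (lnm - psi(zeta)) ** 2 / (lnh - psi(zeta))
    cd = KAPPA ** 2 / (lnm - psi(zeta)) ** 2
    ch = KAPPA ** 2 / ((lnm - psi(zeta)) * (lnh - psi(zeta)))
    return cd, ch
```

    At Rib = 0 the neutral drag coefficient is recovered exactly; as Rib grows, both coefficients shrink, which is the behaviour a non-iterative fit in terms of Rib must reproduce.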

  16. Quantum Treatment of Two Coupled Oscillators in Interaction with a Two-Level Atom

    NASA Astrophysics Data System (ADS)

    Khalil, E. M.; Abdalla, M. Sebawe; Obada, A. S.-F.

    In this communication we treat a modified model representing the interaction between a two-level atom and two modes of the electromagnetic field in a cavity. The interaction between the modes is assumed to be of the parametric-amplifier type. The model consists of two different systems: one represents the Jaynes-Cummings model (atom-field interaction) and the other represents the two-mode parametric amplifier model (field-field interaction). After some canonical transformations the constants of the motion are obtained and used to derive the time evolution operator. The wave function in the Schrödinger picture is constructed and employed to discuss some statistical properties of the system. Further discussion of the statistical properties of some physical quantities is given, taking into account an initially correlated pair-coherent state for the modes. Our examination concentrates on the system behavior that results from variation of the parametric-amplifier coupling parameter as well as the detuning parameter. It is shown that the parametric-amplifier interaction term increases the revival period and consequently yields a longer period of strong interaction between the atom and the fields.

  17. Parametric embedding for class visualization.

    PubMed

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
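    A minimal sketch of the PE objective, assuming unit-covariance Gaussian classes in the embedding space and plain gradient descent on the sum of KL divergences (the paper's actual optimization details may differ; `pe_embed` and its arguments are illustrative names):

```python
import numpy as np

def pe_embed(P, dim=2, n_steps=1000, lr=0.1, seed=0):
    """Parametric-embedding sketch: fit point coordinates X and class
    centers Phi so that the posteriors of an equal-covariance Gaussian
    mixture in the embedding space match the input class posteriors P
    (n_points x n_classes), by gradient descent on sum_n KL(p_n || q_n)."""
    rng = np.random.default_rng(seed)
    n, c = P.shape
    X = rng.normal(scale=0.1, size=(n, dim))     # point coordinates
    Phi = rng.normal(scale=0.1, size=(c, dim))   # class centers
    for _ in range(n_steps):
        diff = X[:, None, :] - Phi[None, :, :]   # (n, c, dim)
        Q = np.exp(-0.5 * (diff ** 2).sum(-1))
        Q /= Q.sum(1, keepdims=True)             # induced posteriors q(c|x_n)
        W = P - Q                                # gradient weights of the KL sum
        X -= lr * (W[:, :, None] * diff).sum(1)
        Phi += lr * (W[:, :, None] * diff).sum(0)
    return X, Phi
```

    After fitting, points whose input posterior favours a class end up closer to that class's center, which is what makes the 2-D layout readable as a class visualization.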

  18. Stability analysis of a time-periodic 2-dof MEMS structure

    NASA Astrophysics Data System (ADS)

    Kniffka, Till Jochen; Welte, Johannes; Ecker, Horst

    2012-11-01

    Microelectromechanical systems (MEMS) are becoming important for all kinds of industrial applications. Among them are filters in communication devices, due to the growing demand for efficient and accurate filtering of signals. In recent developments, single degree of freedom (1-dof) oscillators operated at a parametric resonance are employed for such tasks. Typically, vibration damping is low in such MEM systems. While parametric excitation (PE) has so far been used to take advantage of a parametric resonance, this contribution suggests also exploiting parametric anti-resonances in order to improve the damping behavior of such systems. Modeling aspects of a 2-dof MEM system and first results of the analysis of the non-linear and the linearized system are the focus of this paper. In principle the investigated system is an oscillating mechanical system with two degrees of freedom x = [x1 x2]^T that can be described by M ẍ + C ẋ + K1 x + K3(x²) x + F_es(x, V(t)) = 0. The system is inherently non-linear because of the cubic mechanical stiffness K3 of the structure, but also because of the electrostatic forces (1+cos(ωt)) F_es(x) that act on the system. Electrostatic forces are generated by comb drives and are proportional to the applied time-periodic voltage V(t). These drives also provide the means to introduce time-periodic coefficients, i.e. parametric excitation (1+cos(ωt)) with frequency ω. For a realistic MEM system the coefficients of the non-linear set of differential equations need to be scaled for efficient numerical treatment. The final mathematical model is a set of four non-linear time-periodic homogeneous differential equations of first order. Numerical results are obtained from two different methods. The linearized time-periodic (LTP) system is studied by calculating the monodromy matrix of the system; the eigenvalues of this matrix decide the stability of the LTP system. To study the unabridged non-linear system, the bifurcation software ManLab is employed. 
    Continuation analysis including stability evaluation is carried out and shows the frequency ranges in which the 2-dof system becomes unstable due to parametric resonances. Moreover, frequency intervals are shown to exist where enhanced damping is observed for this MEMS. The results of the stability studies are confirmed by simulation results.
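    The monodromy-matrix stability test described above can be sketched on a 1-dof stand-in, a Mathieu-type equation with parametric excitation (the actual 2-dof MEMS model is not reproduced here; `delta`, `eps`, and `omega` are illustrative parameters). The fundamental matrix is integrated over one excitation period with identity initial conditions, and stability follows from the Floquet multipliers:

```python
import numpy as np

def monodromy_matrix(delta, eps, omega, n_steps=4000):
    """Monodromy matrix of the Mathieu-type LTP system
    x'' + (delta + eps*cos(omega*t)) * x = 0, obtained by RK4-integrating
    the fundamental matrix over one excitation period."""
    T = 2.0 * np.pi / omega
    h = T / n_steps
    def A(t):
        return np.array([[0.0, 1.0],
                         [-(delta + eps * np.cos(omega * t)), 0.0]])
    Y = np.eye(2)     # identity initial conditions
    t = 0.0
    for _ in range(n_steps):
        k1 = A(t) @ Y
        k2 = A(t + h / 2) @ (Y + h / 2 * k1)
        k3 = A(t + h / 2) @ (Y + h / 2 * k2)
        k4 = A(t + h) @ (Y + h * k3)
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return Y

def is_stable(M, tol=1e-3):
    """Floquet criterion: stable if no multiplier lies outside the unit circle."""
    return np.max(np.abs(np.linalg.eigvals(M))) <= 1.0 + tol
```

    Driving at twice the natural frequency (omega = 2 with delta = 1) lands inside the principal parametric-resonance tongue and yields a multiplier larger than one, while driving far from resonance keeps both multipliers on the unit circle.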

  19. A Strategy for a Parametric Flood Insurance Using Proxies

    NASA Astrophysics Data System (ADS)

    Haraguchi, M.; Lall, U.

    2017-12-01

    Traditionally, the design of flood control infrastructure and flood plain zoning requires the estimation of return periods, which have been calculated by river hydraulic models coupled with rainfall-runoff models. However, this multi-step modeling process introduces significant uncertainty into inundation assessment. In addition, land use change and a changing climate alter the potential losses and render modeling results obsolete. For these reasons, there is a strong need for parametric indexes for the financial transfer of risk from large flood events, to enable rapid response and recovery. Hence, this study examines the possibility of developing a parametric flood index at the national or regional level in Asia, which can be quickly mobilized after catastrophic floods. Specifically, we compare a single trigger based on a rainfall index with multiple triggers using rainfall and streamflow indices by conducting case studies in Bangladesh and Thailand. The proposed methodology is 1) selecting suitable indices of rainfall and streamflow (if available), 2) identifying trigger levels for specified return periods of losses using stepwise and logistic regressions, 3) measuring the performance of the indices, and 4) deriving return periods for the selected windows and trigger levels. Based on this methodology, actual trigger levels were identified for Bangladesh and Thailand. Models based on multiple triggers reduced basis risk, an inherent problem in index insurance. The proposed parametric flood index can be applied to countries with similar geographic and meteorological characteristics, and serves as a promising method for ex-ante risk financing for developing countries. This study is preliminary work supporting future research on pricing risk transfer mechanisms in ex-ante risk finance.
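    A minimal sketch of the trigger-identification step: fit a logistic model of loss events against a normalized rainfall index and read off the index level at which the event probability crosses a chosen threshold. The synthetic data and the 0.6 threshold are invented for illustration; the study's actual indices and regressions are richer.

```python
import numpy as np

def fit_logistic_trigger(index, event, p_trigger=0.5, n_steps=30000, lr=0.5):
    """Fit P(loss event | index) = sigmoid(b0 + b1*index) by gradient
    ascent on the log-likelihood, then return the index level at which
    the event probability crosses p_trigger."""
    x = np.column_stack([np.ones_like(index), index])
    beta = np.zeros(2)
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-x @ beta))
        beta += lr * x.T @ (event - p) / len(event)
    b0, b1 = beta
    trigger = (np.log(p_trigger / (1.0 - p_trigger)) - b0) / b1
    return trigger, beta

# Demo on synthetic data where losses become likely above an index of ~0.6
rng = np.random.default_rng(1)
index = rng.random(400)
event = (rng.random(400) < 1.0 / (1.0 + np.exp(-10.0 * (index - 0.6)))).astype(float)
trigger, beta = fit_logistic_trigger(index, event)
```

    With multiple triggers, the same idea extends to a logistic model on both rainfall and streamflow indices, which is what reduces basis risk relative to a single index.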

  20. Thermal effects in an ultrafast BiB3O6 optical parametric oscillator at high average powers

    DOE PAGES

    Petersen, T.; Zuegel, J. D.; Bromage, J.

    2017-08-15

    An ultrafast, high-average-power, extended-cavity, femtosecond BiB3O6 optical parametric oscillator was constructed as a test bed for investigating the scalability of infrared parametric devices. Despite the high pulse energies achieved by this system, the reduction in slope efficiency near the maximum-available pump power prompted the investigation of thermal effects in the crystal during operation. Furthermore, the local heating effects in the crystal were used to determine the impact on both phase matching and thermal lensing to understand limitations that must be overcome to achieve microjoule-level pulse energies at high repetition rates.

  1. Thermal effects in an ultrafast BiB3O6 optical parametric oscillator at high average powers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petersen, T.; Zuegel, J. D.; Bromage, J.

    An ultrafast, high-average-power, extended-cavity, femtosecond BiB3O6 optical parametric oscillator was constructed as a test bed for investigating the scalability of infrared parametric devices. Despite the high pulse energies achieved by this system, the reduction in slope efficiency near the maximum-available pump power prompted the investigation of thermal effects in the crystal during operation. Furthermore, the local heating effects in the crystal were used to determine the impact on both phase matching and thermal lensing to understand limitations that must be overcome to achieve microjoule-level pulse energies at high repetition rates.

  2. Enhanced force sensitivity and noise squeezing in an electromechanical resonator coupled to a nanotransistor

    NASA Astrophysics Data System (ADS)

    Mahboob, I.; Flurin, E.; Nishiguchi, K.; Fujiwara, A.; Yamaguchi, H.

    2010-12-01

    A nano-field-effect transistor (nano-FET) is coupled to a massive piezoelectricity-based electromechanical resonator integrated with a parametric amplifier. The mechanical parametric amplifier can enhance the resonator's displacement, and the resulting electrical signal is further amplified by the nano-FET. This hybrid amplification scheme yields an increase in the mechanical displacement signal of 70 dB, resulting in a force sensitivity of 200 aN Hz^-1/2 at 3 K. The mechanical parametric amplifier can also squeeze the displacement noise in one oscillation phase by 5 dB, enabling a factor of 4 reduction in the thermomechanical noise force level.
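    For intuition on phase-sensitive amplification and the quoted 5 dB of squeezing, the quadrature noise of an idealized, noiseless degenerate parametric amplifier can be written down directly (a textbook sketch, not a model of the actual device):

```python
import numpy as np

def quadrature_variance(r, phi, var_in=1.0):
    """Output noise variance of a quadrature measured at angle phi,
    for an ideal degenerate parametric amplifier with gain parameter r:
    the in-phase quadrature is amplified by e^r, the orthogonal one
    deamplified by e^-r."""
    return var_in * (np.exp(2 * r) * np.cos(phi) ** 2
                     + np.exp(-2 * r) * np.sin(phi) ** 2)

def squeezing_db(r):
    """Noise change (dB) of the deamplified quadrature; negative = squeezed."""
    return 10 * np.log10(quadrature_variance(r, np.pi / 2))

# Gain parameter corresponding to the 5 dB of noise squeezing quoted above
r_5db = 5 * np.log(10) / 20
```

    In this idealization, 5 dB of squeezing in one phase is accompanied by 5 dB of added noise in the orthogonal phase, which is why the measurement phase must be locked to the squeezed quadrature.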

  3. Evaluating Parametrization Protocols for Hydration Free Energy Calculations with the AMOEBA Polarizable Force Field.

    PubMed

    Bradshaw, Richard T; Essex, Jonathan W

    2016-08-09

    Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.

  4. Parametric analysis of parameters for electrical-load forecasting using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael

    1997-04-01

    Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.

  5. Towards the generation of a parametric foot model using principal component analysis: A pilot study.

    PubMed

    Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan

    2016-06-01

    There have been many recent developments in patient-specific models, with their potential to provide more information on human pathophysiology, alongside the increase in computational power. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrate that the method adopted herein constitutes a first step towards the realization of parametric foot models for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features of the foot, and evaluate differences in shape between healthy and diabetic subjects.
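    A PCA shape model of the kind described can be sketched with a few lines of linear algebra: the mean shape plus weighted principal modes parametrizes new shapes (the landmark layout and function names here are hypothetical, not the study's pipeline):

```python
import numpy as np

def build_shape_model(shapes):
    """PCA-based parametric shape model: mean shape plus principal modes.
    shapes: (n_subjects, n_points*dims) flattened landmark coordinates."""
    mean = shapes.mean(0)
    U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    modes = Vt                        # principal modes of shape variation
    var = s ** 2 / (len(shapes) - 1)  # variance captured by each mode
    return mean, modes, var

def synthesize(mean, modes, var, coeffs):
    """Generate a new shape by perturbing the mean along the first modes,
    with coefficients given in units of standard deviations."""
    k = len(coeffs)
    return mean + (np.asarray(coeffs) * np.sqrt(var[:k])) @ modes[:k]
```

    Comparing the mode coefficients of diabetic versus healthy feet is then a low-dimensional way to quantify shape differences between the two populations.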

  6. Preprocessing: Geocoding of AVIRIS data using navigation, engineering, DEM, and radar tracking system data

    NASA Technical Reports Server (NTRS)

    Meyer, Peter; Larson, Steven A.; Hansen, Earl G.; Itten, Klaus I.

    1993-01-01

    Remotely sensed data have geometric characteristics and representations that depend on the type of acquisition system used. To correlate such data over large regions with other real-world representation tools such as conventional maps or a Geographic Information System (GIS), for verification purposes or for further processing with different data sets, a coregistration has to be performed. In addition to the geometric characteristics of the sensor, there are two other dominating factors which affect the geometry: the stability of the platform and the topography. There are two basic approaches to geometric correction on a pixel-by-pixel basis: (1) a parametric approach, using the location of the airplane and inertial navigation system data to simulate the observation geometry; and (2) a non-parametric approach, using tie points or ground control points. It is well known that the non-parametric approach is not reliable enough for the unstable flight conditions of airborne systems, and is not satisfactory in areas with significant topography, e.g. mountains and hills. The present work describes a parametric preprocessing procedure which corrects the effects of flight line and attitude variation as well as topographic influences, and is described in more detail by Meyer.

  7. Tau-REx: A new look at the retrieval of exoplanetary atmospheres

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2014-11-01

    The field of exoplanetary spectroscopy is as fast moving as it is new. With an increasing number of space- and ground-based instruments obtaining data on a large set of extrasolar planets, we are indeed entering the era of exoplanetary characterisation. Permanently at the edge of instrument feasibility, it is as important as it is difficult to find the most optimal and objective methodologies for analysing and interpreting current data. This is particularly true for smaller and fainter Earth and Super-Earth type planets. For low to mid signal-to-noise (SNR) observations, we are prone to two sources of bias: 1) prior selection in the data reduction and analysis; 2) prior constraints on the spectral retrieval. In Waldmann et al. (2013), Morello et al. (2014) and Waldmann (2012, 2014) we have shown a prior-free approach to data analysis based on non-parametric machine learning techniques. Following these approaches we will present a new take on the spectral retrieval of extrasolar planets. Tau-REx (tau retrieval of exoplanets) is a new line-by-line atmospheric retrieval framework. In the past, the decision on which opacity sources go into an atmospheric model was usually left to the user. Manual input can lead to model biases and poor convergence of the atmospheric model to the data. In Tau-REx we have set out to solve this. Through custom-built pattern recognition software, Tau-REx is able to rapidly identify the most likely atmospheric opacities from a large number of possible absorbers/emitters (ExoMol or HITRAN databases) and non-parametrically constrain the prior space for the Bayesian retrieval. Unlike other (MCMC based) techniques, Tau-REx is able to fully integrate high-dimensional log-likelihood spaces and to calculate the full Bayesian evidence of the atmospheric models. We achieve this through a combination of Nested Sampling and a high degree of code parallelisation. This allows for an exact and unbiased Bayesian model selection and a full mapping of potential model-data degeneracies. Together with non-parametric data de-trending of exoplanetary spectra, we can reach an unprecedented level of objectivity in our atmospheric characterisation of these foreign worlds.

  8. Numerical and analytical investigation towards performance enhancement of a newly developed rockfall protective cable-net structure

    NASA Astrophysics Data System (ADS)

    Dhakal, S.; Bhandary, N. P.; Yatabe, R.; Kinoshita, N.

    2012-04-01

    In a previous companion paper, we presented a three-tier modelling of a particular type of rockfall protective cable-net structure (barrier), newly developed in Japan. Therein, we developed a three-dimensional, Finite-Element-based, nonlinear numerical model, calibrated/back-calculated and verified against element- and structure-level physical tests. Moreover, using a very simple, lumped-mass, single-degree-of-freedom, equivalently linear analytical model, a global-displacement-predictive correlation was devised by modifying the basic equation - obtained by combining the principles of conservation of linear momentum and energy - based on back-analysis of the tests on the numerical model. In this paper, we use the developed models to explore the performance enhancement potential of the structure in terms of (a) the control of global displacement - possibly the major performance criterion for the proposed structure, owing to the narrow space available at the targeted site - and (b) the increase in energy dissipation by the existing U-bolt-type Friction-brake Devices, which were identified to have performed weakly when integrated into the structure. A set of parametric investigations has revealed correlations to achieve the first objective in terms of the structure's mass, particularly by manipulating the wire-net's characteristics, and has additionally disclosed the effects of the impacting block's parameters. Towards the second objective, another set of parametric investigations has led to a proposal of a few innovative improvements in the constitutive behaviour (model) of the studied brake device (dissipator), in addition to an important recommendation of careful handling of the device based on an identified potential flaw.

  9. Potency control of modified live viral vaccines for veterinary use.

    PubMed

    Terpstra, C; Kroese, A H

    1996-04-01

    This paper reviews various aspects of efficacy, and methods for assaying the potency of modified live viral vaccines. The pros and cons of parametric versus non-parametric methods for analysis of potency assays are discussed and critical levels of protection, as determined by the target(s) of vaccination, are exemplified. Recommendations are presented for designing potency assays on master virus seeds and vaccine batches.

  10. Potency control of modified live viral vaccines for veterinary use.

    PubMed

    Terpstra, C; Kroese, A H

    1996-01-01

    This paper reviews various aspects of efficacy, and methods for assaying the potency of modified live viral vaccines. The pros and cons of parametric versus non-parametric methods for analysis of potency assays are discussed and critical levels of protection, as determined by the target(s) of vaccination, are exemplified. Recommendations are presented for designing potency assays on master virus seeds and vaccine batches.

  11. Organizing Space Shuttle parametric data for maintainability

    NASA Technical Reports Server (NTRS)

    Angier, R. C.

    1983-01-01

    A model of organization and management of Space Shuttle data is proposed. Shuttle avionics software is parametrically altered by a reconfiguration process for each flight. As the flight rate approaches an operational level, current methods of data management would become increasingly complex. An alternative method is introduced, using modularized standard data, and its implications for data collection, integration, validation, and reconfiguration processes are explored. Information modules are cataloged for later use, and may be combined in several levels for maintenance. For each flight, information modules can then be selected from the catalog at a high level. These concepts take advantage of the reusability of Space Shuttle information to reduce the cost of reconfiguration as flight experience increases.

  12. New planetary boundary layer parametrization in ECHAM5-HAM: Dynamical refinement of the vertical resolution

    NASA Astrophysics Data System (ADS)

    Siegenthaler-Le Drian, C.; Spichtinger, P.; Lohmann, U.

    2010-09-01

    Marine stratocumulus-capped boundary layers exert a strong net cooling effect on the Earth-atmosphere system. Moreover, they are highly persistent over subtropical oceans. Climate models therefore need to represent them well in order to make reliable projections of future climate. One of the reasons for the absence of stratocumuli in the general circulation model ECHAM5-HAM (Roeckner et al., 2003; Stier et al., 2005) is the limited vertical resolution. In the current model version, no vertical sub-grid scale variability of clouds is taken into account, so that clouds occupy the full vertical layer. Around the inversion at the top of the planetary boundary layer (PBL), conserved variables often have a steep gradient, which in a GCM may produce large discretization errors (Bretherton and Park, 2009). This inversion has a large diurnal cycle and varies with location around the globe, which is difficult to represent in a classical, coarse Eulerian approach. Furthermore, Lenderink and Holtslag (2000) and Lock (2001) showed that an inconsistent numerical representation between the entrainment parametrization and the other schemes, particularly the vertical advection, can lead to the occurrence of 'numerical entrainment'. The problem can be resolved by introducing a dynamical inversion as proposed by Grenier and Bretherton (2001) and Lock (2001). As these issues can be seen in our version of ECHAM5-HAM, our implementation aims to reduce the numerical entrainment and to better represent stratocumuli in ECHAM5-HAM. To better resolve stratocumulus clouds, their inversion, and the interaction between turbulent diffusion and vertical advection, the vertical grid is dynamically refined. The new grid is based on the reconstruction of the profiles of variables exhibiting a sharp gradient (temperature, mixing ratio), applying the method presented in Grenier and Bretherton (2001). 
In typical stratocumulus regions, an additional grid level is thus associated with the PBL top. In case a cloud can be formed, a new level is associated with the lifting condensation level as well. The regular grid plus the two additional levels define the new dynamical grid, which varies geographically and temporally. The physical processes are computed on this new dynamical grid, Consequently, the sharp gradients and the interaction between the different processes can be better resolved. Some results of this new parametrization will be presented. On a single column model set-up, the reconstruction method accurately finds the inversion at the PBL top for the EPIC stratocumulus case. Also, on a global scale, the occurrence of a successful reconstruction, which is restricted in typical stratocumulus regions, occurs with a high frequency. The impact of the new dynamical grid on clouds and the radiation balance will be presented in the talk. References [Bretherton and Park, 2009] Bretherton, C. S. and Park, S. (2009). A new moist turbulence parametrization in the community atmosphere model. J. Climate, 22:3422-3448. [Grenier and Bretherton, 2001] Grenier, H. and Bretherton, C. S. (2001). A moist parametrization for large-scale models and its application to subtropical cloud-topped marine boundary layers. Mon. Wea. Rev., 129:357-377. [Lenderink and Holtslag, 2000] Lenderink, G. and Holtslag, A. M. (2000). Evaluation of the kinetic energy approach for modeling turbulent fluxes in stratocumulus. Mon. Wea. Rev., 128:244-258. [Lock, 2001] Lock, A. P. (2001). The numerical representation of entrainment in parametrizations of boundary layer turbulent mixing. Mon. Wea. Rev., 129:1148-1163. [Roeckner et al., 2003] Roeckner, E., Bäuml, G., Bonaventura, L. et al. (2003). The atmospheric general circulation model echam5, part I: Model description. Technical Report 349, Max-Planck-Institute for Meteorology, Hamburg,Germany. [Stier et al., 2005] Stier, P., Feichter, J., Kinne, S. 
et al. (2005). The aerosol-climate model ECHAM5-HAM. Atmos. Chem. Phys., 5:1125-1156.
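The reconstruction scheme itself follows Grenier and Bretherton (2001); purely as an illustration of the dynamical-grid idea, the sketch below locates the sharpest gradient of a conserved variable in an idealized column and inserts one extra grid level there. The profile values and the function name are our own invention, not output of ECHAM5-HAM.

```python
# Illustrative sketch (not the Grenier-Bretherton reconstruction itself):
# find the layer where d(theta_l)/dz is steepest and insert one grid level
# at its midpoint, mimicking a dynamical-grid refinement at the PBL inversion.

def refine_at_inversion(z, theta_l):
    """Return a refined grid with one level added in the steepest-gradient layer."""
    grads = [(theta_l[i + 1] - theta_l[i]) / (z[i + 1] - z[i])
             for i in range(len(z) - 1)]
    k = max(range(len(grads)), key=lambda i: abs(grads[i]))  # inversion layer
    z_new = z[:k + 1] + [0.5 * (z[k] + z[k + 1])] + z[k + 1:]
    return z_new, k

# idealized stratocumulus-topped column: well-mixed PBL, sharp jump near 800 m
z = [0.0, 200.0, 400.0, 600.0, 800.0, 1000.0, 1200.0]       # heights (m)
theta_l = [290.0, 290.1, 290.1, 290.2, 298.0, 299.0, 300.0]  # K
z_new, k = refine_at_inversion(z, theta_l)
print(k, z_new)   # extra level at 700 m, inside the inversion layer
```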

  13. Effect of Micro-Bubbles in Water on Beam Patterns of Parametric Array

    NASA Astrophysics Data System (ADS)

    Hashiba, Kunio; Masuzawa, Hiroshi

    2003-05-01

    The improvement in efficiency of a parametric array by nonlinear oscillation of micro-bubbles in water is studied in this paper. The micro-bubble oscillation can increase the nonlinear coefficient of the acoustic medium. The amplitude of the difference-frequency wave along the longitudinal axis and its beam patterns in the field including the layer with micro-bubbles were analyzed using a Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. As a result, the largest improvement in efficiency, together with a narrow parametric beam, was obtained when the micro-bubble layer placed in front of the parametric sound radiator was about as thick as the shock formation distance. If the layer becomes significantly thicker than this distance, the beam of the difference-frequency wave in the far field will become broader. If the layer is significantly thinner than this distance, the intensity level of the wave in the far field will be too low.

  14. Realization of an omnidirectional source of sound using parametric loudspeakers.

    PubMed

    Sayin, Umut; Artís, Pere; Guasch, Oriol

    2013-09-01

    Parametric loudspeakers are often used in beam-forming applications where a high directivity is required. However, in this paper it is proposed to use such devices to build an omnidirectional source of sound. An initial prototype, the omnidirectional parametric loudspeaker (OPL), consisting of a sphere with hundreds of ultrasonic transducers placed on it, has been constructed. The OPL emits audible sound thanks to the parametric acoustic array phenomenon, and the close proximity and large number of transducers result in the generation of a highly omnidirectional sound field. Comparisons with conventional dodecahedron loudspeakers have been made in terms of directivity, frequency response, and in applications such as the generation of diffuse acoustic fields in reverberant chambers. The OPL prototype performed better than the conventional loudspeaker, especially for frequencies higher than 500 Hz, its main drawback being the difficulty of generating intense pressure levels at low frequencies.

  15. Visualization of a Large Set of Hydrogen Atomic Orbital Contours Using New and Expanded Sets of Parametric Equations

    ERIC Educational Resources Information Center

    Rhile, Ian J.

    2014-01-01

    Atomic orbitals are a theme throughout the undergraduate chemistry curriculum, and visualizing them has been a theme in this journal. Contour plots as isosurfaces or contour lines in a plane are the most familiar representations of the hydrogen wave functions. In these representations, a surface of a fixed value of the wave function ψ is plotted…

  16. Parametric System Model for a Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Schmitz, Paul C.

    2015-01-01

    A Parametric System Model (PSM) was created in order to explore conceptual designs, the impact of component changes and power level on the performance of the Stirling Radioisotope Generator (SRG). Using the General Purpose Heat Source (GPHS approximately 250 Wth) modules as the thermal building block from which a SRG is conceptualized, trade studies are performed to understand the importance of individual component scaling on isotope usage. Mathematical relationships based on heat and power throughput, temperature, mass, and volume were developed for each of the required subsystems. The PSM uses these relationships to perform component- and system-level trades.

  17. Parametric System Model for a Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Schmitz, Paul C.

    2014-01-01

    A Parametric System Model (PSM) was created in order to explore conceptual designs, the impact of component changes and power level on the performance of the Stirling Radioisotope Generator (SRG). Using the General Purpose Heat Source (GPHS, approximately 250 watts thermal) modules as the thermal building block around which a SRG is conceptualized, trade studies are performed to understand the importance of individual component scaling on isotope usage. Mathematical relationships based on heat and power throughput, temperature, mass, and volume were developed for each of the required subsystems. The PSM uses these relationships to perform component- and system-level trades.

  18. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch

    2016-07-01

    We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution are collocated, specifically by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.
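To make the collocation step concrete, here is a minimal, self-contained sketch of the greedy Empirical Interpolation Method on a toy non-affine family; the function names, training set, and basis size are ours, not those of the cited works.

```python
def residual(f, basis, points):
    """f minus its EIM interpolant (which matches f at the magic points)."""
    coeffs = []
    for m, i in enumerate(points):
        # by construction the basis is unit lower-triangular at the magic
        # points, so the interpolation system solves by forward substitution
        coeffs.append(f[i] - sum(coeffs[n] * basis[n][i] for n in range(m)))
    return [f[j] - sum(c * b[j] for c, b in zip(coeffs, basis))
            for j in range(len(f))]

def eim(snapshots, n_basis):
    """Greedy EIM over a training set of snapshots (lists of point values)."""
    basis, points = [], []
    for _ in range(n_basis):
        # pick the snapshot worst represented by the current interpolant
        res = [residual(f, basis, points) for f in snapshots]
        norms = [max(abs(v) for v in r) for r in res]
        k = max(range(len(res)), key=norms.__getitem__)
        i = max(range(len(res[k])), key=lambda j: abs(res[k][j]))  # magic point
        basis.append([v / res[k][i] for v in res[k]])
        points.append(i)
    return basis, points

# toy non-affine parametric family f(x; mu) = 1/(1 + mu*x), mu in [0.1, 2]
xs = [j / 50.0 for j in range(51)]
snaps = [[1.0 / (1.0 + 0.1 * k * x) for x in xs] for k in range(1, 21)]
basis, points = eim(snaps, 5)
worst = max(max(abs(v) for v in residual(f, basis, points)) for f in snaps)
print(points, worst)   # worst training-set error after 5 basis functions
```

With a smooth parameter dependence the greedy error decays rapidly, which is why a handful of collocation points suffices in the reduced model.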

  19. Dimensions of Learning Organizations Questionnaire (DLOQ) in a low-resource health care setting in Nepal.

    PubMed

    Leufvén, Mia; Vitrakoti, Ravi; Bergström, Anna; Ashish, K C; Målqvist, Mats

    2015-01-22

    Knowledge-based organizations, such as health care systems, need to be adaptive to change and able to facilitate uptake of new evidence. To be able to assess organizational capability to learn is therefore an important part of health-systems strengthening. The aim of the present study is to assess context using the Dimensions of the Learning Organization Questionnaire (DLOQ) in a low-resource health setting in Nepal. The DLOQ was translated and administered to 230 employees at all levels of the hospital. Data were analyzed using non-parametric tests. The DLOQ was able to detect variations across employees' perceptions of the organizational context. Nurses scored significantly lower than doctors on the dimension "Empowerment", while doctors scored lower than nurses on "Strategic leadership". These results suggest that the hospital's organization carries attributes of a centralized, hierarchical structure that might hinder progress towards a learning organization. This study demonstrates that, despite the DLOQ having been designed and developed in the USA and mainly used in company settings, it can be applied in hospital settings in low-income countries. The application of the DLOQ provides valuable insights and understanding when designing and evaluating efforts for healthcare improvement.
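The non-parametric group comparisons referred to above are of the Mann-Whitney type; a minimal sketch, with invented dimension scores and omitting the tie correction to the variance, looks like this:

```python
import math

def mann_whitney_u(a, b):
    """U statistic for group a, with a two-sided normal-approximation p-value.
    Midranks handle ties; the tie correction to the variance is omitted."""
    combined = sorted((v, g) for g, grp in enumerate((a, b)) for v in grp)
    values = [v for v, _ in combined]
    rank_sum_a, i = 0.0, 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        midrank = (i + j + 1) / 2.0   # average of the 1-based ranks i+1..j
        rank_sum_a += midrank * sum(1 for k in range(i, j) if combined[k][1] == 0)
        i = j
    n1, n2 = len(a), len(b)
    u = rank_sum_a - n1 * (n1 + 1) / 2.0
    z = (u - n1 * n2 / 2.0) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    p = math.erfc(abs(z) / math.sqrt(2.0))   # two-sided p-value
    return u, p

# invented example scores on one DLOQ dimension
nurses = [2.1, 2.4, 2.0, 2.8, 2.3]
doctors = [3.0, 3.4, 2.9, 3.6, 3.1]
u, p = mann_whitney_u(nurses, doctors)
print(u, p)
```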

  20. White-light parametric instabilities in plasmas.

    PubMed

    Santos, J E; Silva, L O; Bingham, R

    2007-06-08

    Parametric instabilities driven by partially coherent radiation in plasmas are described by a generalized statistical Wigner-Moyal set of equations, formally equivalent to the full wave equation, coupled to the plasma fluid equations. A generalized dispersion relation for stimulated Raman scattering driven by a partially coherent pump field is derived, revealing a dependence of the growth rate on the coherence width sigma of the radiation field, scaling as 1/sigma for backscattering (a three-wave process) and as 1/sigma^(1/2) for direct forward scattering (a four-wave process). Our results demonstrate the possibility of controlling the growth rates of these instabilities by properly using broadband pump radiation fields.

  1. Simple performance evaluation of pulsed spontaneous parametric down-conversion sources for quantum communications.

    PubMed

    Smirr, Jean-Loup; Guilbaud, Sylvain; Ghalbouni, Joe; Frey, Robert; Diamanti, Eleni; Alléaume, Romain; Zaquine, Isabelle

    2011-01-17

    Fast characterization of pulsed spontaneous parametric down conversion (SPDC) sources is important for applications in quantum information processing and communications. We propose a simple method to perform this task, which only requires measuring the counts on the two output channels and the coincidences between them, as well as modeling the filter used to reduce the source bandwidth. The proposed method is experimentally tested and used for a complete evaluation of SPDC sources (pair emission probability, total losses, and fidelity) of various bandwidths. This method can find applications in the setting up of SPDC sources and in the continuous verification of the quality of quantum communication links.
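The counts-and-coincidences bookkeeping behind such a characterization can be sketched as follows, in an idealized Klyshko-style picture that ignores accidentals, dark counts, and the filter modeling discussed in the paper; the symbol names are ours.

```python
# With pair rate R and channel efficiencies eta1, eta2, the singles are
# S1 = eta1*R and S2 = eta2*R, and the coincidences are C = eta1*eta2*R.
# These three measurements therefore invert directly:

def characterize_pairs(s1, s2, c):
    """Invert singles s1, s2 and coincidence rate c into (eta1, eta2, R)."""
    eta1 = c / s2          # heralding efficiency of channel 1
    eta2 = c / s1          # heralding efficiency of channel 2
    rate = s1 * s2 / c     # pair generation rate
    return eta1, eta2, rate

# consistency check with invented numbers: R = 1e6 pairs/s, eta1 = 0.2, eta2 = 0.3
eta1, eta2, rate = characterize_pairs(2e5, 3e5, 6e4)
print(eta1, eta2, rate)   # 0.2 0.3 1000000.0
```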

  2. The Bayesian Cramér-Rao lower bound in Astrometry

    NASA Astrophysics Data System (ADS)

    Mendez, R. A.; Echeverria, A.; Silva, J.; Orchard, M.

    2018-01-01

    A determination of the highest precision that can be achieved in the measurement of the location of a stellar-like object has been a topic of permanent interest to the astrometric community. The so-called (parametric, or non-Bayesian) Cramér-Rao (CR hereafter) bound provides a lower bound for the variance with which one could estimate the position of a point source. This has been studied recently by Mendez et al. (2013, 2014, 2015). In this work we present a different approach to the same problem (Echeverria et al. 2016), using a Bayesian CR setting which has a number of advantages over the parametric scenario.

  3. The Bayesian Cramér-Rao lower bound in Astrometry

    NASA Astrophysics Data System (ADS)

    Mendez, R. A.; Echeverria, A.; Silva, J.; Orchard, M.

    2017-07-01

    A determination of the highest precision that can be achieved in the measurement of the location of a stellar-like object has been a topic of permanent interest to the astrometric community. The so-called (parametric, or non-Bayesian) Cramér-Rao (CR hereafter) bound provides a lower bound for the variance with which one could estimate the position of a point source. This has been studied recently by Mendez and collaborators (2014, 2015). In this work we present a different approach to the same problem (Echeverria et al. 2016), using a Bayesian CR setting which has a number of advantages over the parametric scenario.

  4. Using Spatial Correlations of SPDC Sources for Increasing the Signal to Noise Ratio in Images

    NASA Astrophysics Data System (ADS)

    Ruíz, A. I.; Caudillo, R.; Velázquez, V. M.; Barrios, E.

    2017-05-01

    We experimentally show that, by using spatial correlations of photon pairs produced by Spontaneous Parametric Down-Conversion, it is possible to increase the Signal to Noise Ratio in images of objects illuminated with those photons; in comparison, objects illuminated with laser light show a lower ratio. Our simple experimental set-up was capable of producing an average signal-to-noise-ratio improvement of 11 dB for parametric down-converted light over laser light. This simple method can be easily implemented for obtaining high-contrast images of faint objects and for transmitting information with low noise.

  5. Parametric Cost Study of AC-DC Wayside Power Systems

    DOT National Transportation Integrated Search

    1975-09-01

    The wayside power system provides all the power requirements of an electric vehicle operating on a fixed guideway. For a given set of specifications there are numerous wayside power supply configurations which will be satisfactory from a technical st...

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.D.; Bharadwaj, R.K.

    The molecular geometries and conformational energies of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) and 1,3-dimethyl-1,3-dinitro methyldiamine (DDMD) have been determined from high-level quantum chemistry calculations and have been used in parametrizing a classical potential function for simulations of HMX. Geometry optimizations for HMX and DDMD and rotational energy barrier searches for DDMD were performed at the B3LYP/6-311G** level, with subsequent single-point energy calculations at the MP2/6-311G** level. Four unique low-energy conformers were found for HMX, two whose conformational geometries correspond closely to those found in HMX polymorphs from crystallographic studies and two additional, lower energy conformers that are not seen in the crystalline phases. For DDMD, three unique low-energy conformers, and the rotational energy barriers between them, were located. In parametrizing the classical potential function for HMX, nonbonded repulsion/dispersion parameters, valence parameters, and parameters describing nitro group rotation and out-of-plane distortion at the amine nitrogen were taken from the previous studies of dimethylnitramine. Polar effects in HMX and DDMD were represented by sets of partial atomic charges that reproduce the electrostatic potential and dipole moments for the low-energy conformers of these molecules as determined from the quantum chemistry wave functions. Parameters describing conformational energetics for the C-N-C-N dihedrals were determined by fitting the classical potential function to reproduce relative conformational energies in HMX as found from quantum chemistry. The resulting potential was found to give a good representation of the conformer geometries and relative conformer energies in HMX and a reasonable description of the low-energy conformers and rotational energy barriers in DDMD.

  7. SPM analysis of parametric (R)-[11C]PK11195 binding images: plasma input versus reference tissue parametric methods.

    PubMed

    Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald

    2007-05-01

    (R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
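The Logan method mentioned above rests on plotting the normalized integral of the tissue curve against the normalized integral of the plasma curve, whose late-time slope is Vd. A hedged sketch on synthetic one-tissue-compartment data (our parameter values, not the study's) recovers Vd as that slope:

```python
import math

def cumtrapz(t, y):
    """Cumulative trapezoid integral of y(t), starting at 0."""
    out, acc = [0.0], 0.0
    for i in range(1, len(t)):
        acc += 0.5 * (y[i] + y[i - 1]) * (t[i] - t[i - 1])
        out.append(acc)
    return out

K1, k2, lam = 0.5, 0.25, 0.1              # one-tissue model, true Vd = K1/k2 = 2
t = [0.05 * i for i in range(1201)]       # minutes
cp = [math.exp(-lam * ti) for ti in t]    # plasma input function
ct = [K1 * (math.exp(-lam * ti) - math.exp(-k2 * ti)) / (k2 - lam) for ti in t]
int_cp, int_ct = cumtrapz(t, cp), cumtrapz(t, ct)

# Logan coordinates: x = int(Cp)/Ct, y = int(Ct)/Ct; late slope = Vd
pts = [(icp / c, ict / c)
       for icp, ict, c in zip(int_cp, int_ct, ct) if c > 1e-9]
xs, ys = zip(*pts[len(pts) // 2:])        # late, linear portion of the plot
n, sx, sy = len(xs), sum(xs), sum(ys)
vd = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / \
     (n * sum(x * x for x in xs) - sx * sx)
print(vd)   # ~2.0 = K1/k2
```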

  8. Ultra-Broad-Band Optical Parametric Amplifier or Oscillator

    NASA Technical Reports Server (NTRS)

    Strekalov, Dmitry; Matsko, Andrey; Savchenkov, Anatolly; Maleki, Lute

    2009-01-01

    A concept for an ultra-broad-band optical parametric amplifier or oscillator has emerged as a by-product of a theoretical study in fundamental quantum optics. The study was originally intended to address the question of whether the two-photon temporal correlation function of light [in particular, light produced by spontaneous parametric down conversion (SPDC)] can be considerably narrower than the inverse of the spectral width (bandwidth) of the light. The answer to the question was found to be negative. More specifically, on the basis of the universal integral relations between the quantum two-photon temporal correlation and the classical spectrum of light, it was found that the lower limit of two-photon correlation time is set approximately by the inverse of the bandwidth. The mathematical solution for the minimum two-photon correlation time also provides the minimum relative frequency dispersion of the down-converted light components; in turn, the minimum relative frequency dispersion translates to the maximum bandwidth, which is important for the design of an ultra-broad-band optical parametric oscillator or amplifier. In the study, results of an analysis of the general integral relations were applied in the case of an optically nonlinear, frequency-dispersive crystal in which SPDC produces collinear photons. Equations were found for the crystal orientation and pump wavelength, specific for each parametric-down-converting crystal, that eliminate the relative frequency dispersion of collinear degenerate (equal-frequency) signal and idler components up to the fourth order in the frequency-detuning parameter.

  9. Model and parametric uncertainty in source-based kinematic models of earthquake ground motion

    USGS Publications Warehouse

    Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur

    2011-01-01

    Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.

  10. A simple parametric model observer for quality assurance in computer tomography

    NASA Astrophysics Data System (ADS)

    Anton, M.; Khanin, A.; Kretz, T.; Reginatto, M.; Elster, C.

    2018-04-01

    Model observers are mathematical classifiers that are used for the quality assessment of imaging systems such as computer tomography. The quality of the imaging system is quantified by means of the performance of a selected model observer. For binary classification tasks, the performance of the model observer is defined by the area under its ROC curve (AUC). Typically, the AUC is estimated by applying the model observer to a large set of training and test data. However, the recording of these large data sets is not always practical for routine quality assurance. In this paper we propose as an alternative a parametric model observer that is based on a simple phantom, and we provide a Bayesian estimation of its AUC. It is shown that a limited number of repeatedly recorded images (10–15) is already sufficient to obtain results suitable for the quality assessment of an imaging system. A MATLAB® function is provided for the calculation of the results. The performance of the proposed model observer is compared to that of the established channelized Hotelling observer and the nonprewhitening matched filter for simulated images as well as for images obtained from a low-contrast phantom on an x-ray tomography scanner. The results suggest that the proposed parametric model observer, along with its Bayesian treatment, can provide an efficient, practical alternative for the quality assessment of CT imaging systems.
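As an illustration of the observer idea (not the paper's Bayesian estimator), the sketch below applies a nonprewhitening matched filter to simulated signal-present and signal-absent images and estimates the AUC empirically; the phantom, noise level, and sample sizes are invented.

```python
import math, random

random.seed(1)
N = 16
# known disc signal (contrast 1.0) on an N x N image, stored flattened
signal = [1.0 if (i // N - 8) ** 2 + (i % N - 8) ** 2 < 9 else 0.0
          for i in range(N * N)]

def npw_score(img):
    """Nonprewhitening matched filter: correlate image with the known signal."""
    return sum(s * p for s, p in zip(signal, img))

def make_image(present, sigma=3.0):
    """Simulated image: optional signal plus white Gaussian noise."""
    return [(signal[i] if present else 0.0) + random.gauss(0.0, sigma)
            for i in range(N * N)]

sp = [npw_score(make_image(True)) for _ in range(200)]    # signal present
sa = [npw_score(make_image(False)) for _ in range(200)]   # signal absent
# empirical AUC = P(score_present > score_absent), the Mann-Whitney estimate
auc = sum(1.0 for a in sp for b in sa if a > b) / (len(sp) * len(sa))
print(auc)
```

With Gaussian scores the same AUC could also be read off the observer's detectability index via AUC = Phi(d'/sqrt(2)); the empirical count above avoids that distributional assumption.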

  11. Protein logic: a statistical mechanical study of signal integration at the single-molecule level.

    PubMed

    de Ronde, Wiet; Rein ten Wolde, Pieter; Mugler, Andrew

    2012-09-05

    Information processing and decision-making is based upon logic operations, which in cellular networks has been well characterized at the level of transcription. In recent years, however, both experimentalists and theorists have begun to appreciate that cellular decision-making can also be performed at the level of a single protein, giving rise to the notion of protein logic. Here we systematically explore protein logic using a well-known statistical mechanical model. As an example system, we focus on receptors that bind either one or two ligands, and their associated dimers. Notably, we find that a single heterodimer can realize any of the 16 possible logic gates, including the XOR gate, by variation of biochemical parameters. We then introduce what to our knowledge is a novel idea: that a set of receptors with fixed parameters can encode functionally unique logic gates simply by forming different dimeric combinations. An exhaustive search reveals that the simplest set of receptors (two single-ligand receptors and one double-ligand receptor) can realize several different groups of three unique gates, a result for which the parametric analysis of single receptors and dimers provides a clear interpretation. Both results underscore the surprising functional freedom readily available to cells at the single-protein level. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  12. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Technical Reports Server (NTRS)

    Peffley, Al F.

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs using parametric cost-estimate data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  13. Husimi coordinates of multipartite separable states

    NASA Astrophysics Data System (ADS)

    Parfionov, Georges; Zapatrin, Romàn R.

    2010-12-01

    A parametrization of multipartite separable states in a finite-dimensional Hilbert space is suggested. It is proved to be a diffeomorphism between the set of zero-trace operators and the interior of the set of separable density operators. The result is applicable to any tensor product decomposition of the state space. An analytical criterion for separability of density operators is established in terms of the boundedness of a sequence of operators.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando

    Application of simulated annealing (SA) and simplified GSA (SGSA) techniques for parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.
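A minimal sketch of the simulated-annealing loop used for this kind of error-function minimization; the toy model, data, and cooling schedule are invented for illustration and are not CATIVIC's actual error function.

```python
import math, random

random.seed(0)
xs = [0.1 * i for i in range(20)]
true_a, true_b = 1.3, 0.7
data = [true_a * math.exp(-true_b * x) for x in xs]   # "experimental" values

def error(params):
    """Sum of squared deviations of the toy model from the reference data."""
    a, b = params
    return sum((a * math.exp(-b * x) - y) ** 2 for x, y in zip(xs, data))

def anneal(p0, t0=1.0, cooling=0.995, steps=4000, step=0.1):
    """Metropolis-style annealing: accept uphill moves with prob exp(-dE/T)."""
    p, e = list(p0), error(p0)
    best, best_e, t = list(p), e, t0
    for _ in range(steps):
        cand = [v + random.gauss(0.0, step) for v in p]
        ce = error(cand)
        if ce < e or random.random() < math.exp(-(ce - e) / t):
            p, e = cand, ce
            if e < best_e:
                best, best_e = list(p), e
        t *= cooling   # geometric cooling schedule
    return best, best_e

params, final_err = anneal([0.0, 0.0])
print(params, final_err)   # error far below the starting value
```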

  15. Stability analysis of a controlled mechanical system with parametric uncertainties in LuGre friction model

    NASA Astrophysics Data System (ADS)

    Sun, Yun-Hsiang; Sun, Yuming; Wu, Christine Qiong; Sepehri, Nariman

    2018-04-01

    Parameters of a friction model identified for a specific control system are not constant. They vary over time and have a significant effect on the control system stability. Although much research has been devoted to stability analysis under parametric uncertainty, less attention has been paid to incorporating a realistic friction model into the analysis. After reviewing the common friction models for controller design, a modified LuGre friction model is selected to carry out the stability analysis in this study. Two parameters of the LuGre model, namely σ0 and σ1, are critical to the demonstration of dynamic friction features, yet their identification is difficult, resulting in a high level of uncertainty in their values. To uncover the effect of the σ0 and σ1 variations on the control system stability, a servomechanism with a modified LuGre friction model is investigated. Two set-point position controllers are synthesised based on the servomechanism model to form two case studies. Through Lyapunov exponents, it is clear that the variation of σ0 and σ1 has an obvious effect on the stability of the studied systems and should not be overlooked in the design phase.
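For reference, the standard (unmodified) LuGre model, showing where σ0 and σ1 enter the dynamics; the parameter values below are illustrative only, not taken from the paper.

```python
import math

def lugre_force(v, z, sigma0=1e5, sigma1=300.0, sigma2=0.4,
                fc=1.0, fs=1.5, vs=0.01):
    """One evaluation of the LuGre model: returns (friction force, dz/dt)."""
    g = fc + (fs - fc) * math.exp(-(v / vs) ** 2)   # Stribeck curve
    zdot = v - sigma0 * abs(v) / g * z              # bristle-state dynamics
    return sigma0 * z + sigma1 * zdot + sigma2 * v, zdot

# steady sliding at constant velocity: the force settles to g(v) + sigma2*v,
# independent of sigma1 (which only shapes the transient)
v, z, dt = 0.1, 0.0, 1e-6
for _ in range(200000):
    f, zdot = lugre_force(v, z)
    z += zdot * dt   # explicit Euler; dt is well below the bristle time scale
print(f)   # ~1.04: g(0.1) ~ fc = 1.0, plus sigma2*v = 0.04
```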

  16. Parametrization of Combined Quantum Mechanical and Molecular Mechanical Methods: Bond-Tuned Link Atoms.

    PubMed

    Wu, Xin-Ping; Gagliardi, Laura; Truhlar, Donald G

    2018-05-30

    Combined quantum mechanical and molecular mechanical (QM/MM) methods are the most powerful available methods for high-level treatments of subsystems of very large systems. The treatment of the QM-MM boundary strongly affects the accuracy of QM/MM calculations. For QM/MM calculations having covalent bonds cut by the QM-MM boundary, it has been proposed previously to use a scheme with system-specific tuned fluorine link atoms. Here, we propose a broadly parametrized scheme where the parameters of the tuned F link atoms depend only on the type of bond being cut. In the proposed new scheme, the F link atom is tuned for systems with a certain type of cut bond at the QM-MM boundary instead of for a specific target system, and the resulting link atoms are called bond-tuned link atoms. In principle, the bond-tuned link atoms can be as convenient as the popular H link atoms, and they are especially well adapted for high-throughput and accurate QM/MM calculations. Here, we present the parameters for several kinds of cut bonds along with a set of validation calculations that confirm that the proposed bond-tuned link-atom scheme can be as accurate as the system-specific tuned F link-atom scheme.

  17. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    DOE PAGES

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-03-27

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed-hydrate systems is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.
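The paper's actual functional forms are not reproduced here; purely as an illustration of the fit-then-evaluate-fast idea, the sketch below fits a Clausius-Clapeyron-like relation ln P = A + B/T to synthetic equilibrium data by closed-form least squares, after which each simulator call costs a single exponential.

```python
import math

def fit_lnp_vs_invT(temps, pressures):
    """Closed-form least squares of ln P against 1/T; returns (A, B)."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(p) for p in pressures]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# synthetic "package-generated" equilibrium data with A = 30, B = -8000 K
temps = [270.0 + 2.0 * i for i in range(15)]
data = [math.exp(30.0 - 8000.0 / t) for t in temps]
A, B = fit_lnp_vs_invT(temps, data)
p_eq = lambda t: math.exp(A + B / t)   # fast parametric relationship
print(A, B, p_eq(285.0))
```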

  18. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    NASA Astrophysics Data System (ADS)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-06-01

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed-hydrate systems is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this work, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. The mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  19. A permutation-based non-parametric analysis of CRISPR screen data.

    PubMed

    Jia, Gaoxiang; Wang, Xinlei; Xiao, Guanghua

    2017-07-19

    Clustered regularly-interspaced short palindromic repeats (CRISPR) screens are usually implemented in cultured cells to identify genes with critical functions. Although several methods have been developed or adapted to analyze CRISPR screening data, no single specific algorithm has gained popularity. Thus, rigorous procedures are needed to overcome the shortcomings of existing algorithms. We developed a Permutation-Based Non-Parametric Analysis (PBNPA) algorithm, which computes p-values at the gene level by permuting sgRNA labels, and thus avoids restrictive distributional assumptions. Although PBNPA is designed to analyze CRISPR data, it can also be applied to analyze genetic screens implemented with siRNAs or shRNAs and drug screens. We compared the performance of PBNPA with competing methods on simulated data as well as on real data. PBNPA outperformed recent methods designed for CRISPR screen analysis, as well as methods used for analyzing other functional genomics screens, in terms of Receiver Operating Characteristic (ROC) curves and False Discovery Rate (FDR) control for simulated data under various settings. Remarkably, the PBNPA algorithm showed better consistency and FDR control on published real data as well. PBNPA yields more consistent and reliable results than its competitors, especially when the data quality is low. The PBNPA R package is available at https://cran.r-project.org/web/packages/PBNPA/ .

  20. Fast parametric relationships for the large-scale reservoir simulation of mixed CH 4-CO 2 gas hydrate systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with large numbers of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  1. Modeling and replicating statistical topology and evidence for CMB nonhomogeneity

    PubMed Central

    Agami, Sarit

    2017-01-01

    Under the banner of “big data,” the detection and classification of structure in extremely large, high-dimensional, data sets are two of the central statistical challenges of our times. Among the most intriguing new approaches to this challenge is “TDA,” or “topological data analysis,” one of the primary aims of which is providing nonmetric, but topologically informative, preanalyses of data which make later, more quantitative, analyses feasible. While TDA rests on strong mathematical foundations from topology, in applications, it has faced challenges due to difficulties in handling issues of statistical reliability and robustness, often leading to an inability to make scientific claims with verifiable levels of statistical confidence. We propose a methodology for the parametric representation, estimation, and replication of persistence diagrams, the main diagnostic tool of TDA. The power of the methodology lies in the fact that even if only one persistence diagram is available for analysis—the typical case for big data applications—the replications permit conventional statistical hypothesis testing. The methodology is conceptually simple and computationally practical, and provides a broadly effective statistical framework for persistence diagram TDA analysis. We demonstrate the basic ideas on a toy example, and the power of the parametric approach to TDA modeling in an analysis of cosmic microwave background (CMB) nonhomogeneity. PMID:29078301

  2. Towards a petawatt-class few-cycle infrared laser system via dual-chirped optical parametric amplification.

    PubMed

    Fu, Yuxi; Midorikawa, Katsumi; Takahashi, Eiji J

    2018-05-16

    Expansion of the wavelength range for an ultrafast laser is an important ingredient for extending its range of applications. Conventionally, optical parametric amplification (OPA) has been employed to expand the laser wavelength to the infrared (IR) region. However, the achievable pulse energy and peak power have been limited to the mJ and the GW level, respectively. A major difficulty in the further energy scaling of OPA results from a lack of suitable large nonlinear crystals. Here, we circumvent this difficulty by employing a dual-chirped optical parametric amplification (DC-OPA) scheme. We successfully generate a multi-TW IR femtosecond laser pulse with an energy on the order of 100 mJ, which is higher than that reported in previous works. We also obtain excellent energy scaling ability, ultrashort pulses, flexible wavelength tunability, and high energy stability, which prove that DC-OPA is a superior method for the energy scaling of IR pulses to the 10 J/PW level.

  3. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.

    PubMed

    Wolf, Elizabeth Skubak; Anderson, David F

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each part in an efficient manner.

  4. Bifurcation analysis of eight coupled degenerate optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Ito, Daisuke; Ueta, Tetsushi; Aihara, Kazuyuki

    2018-06-01

    A degenerate optical parametric oscillator (DOPO) network realized as a coherent Ising machine can be used to solve combinatorial optimization problems. Both theoretical and experimental investigations into the performance of DOPO networks have been presented previously. However, a problem remains, namely that the dynamics of the DOPO network itself can lower the search success rates of globally optimal solutions for Ising problems. This paper shows that the problem is caused by pitchfork bifurcations due to the symmetry structure of coupled DOPOs. Some two-parameter bifurcation diagrams of equilibrium points express the performance deterioration. It is shown that the emergence of non-ground states corresponding to local minima hampers the system from reaching the ground states corresponding to the global minimum. We then describe a parametric strategy for leading the system to the ground state by actively utilizing the bifurcation phenomena. By adjusting the parameters to break particular symmetries, we find appropriate parameter sets that allow the coherent Ising machine to obtain the globally optimal solution alone.

  5. Evaluation of Second-Level Inference in fMRI Analysis

    PubMed Central

    Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs

    2016-01-01

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take this variability into account. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with minimal cluster size yields the most stable results, followed by the familywise error rate correction. The FDR correction yields the most variable results, for both permutation-based inference and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578

  6. Spatial hydrological drought characteristics in Karkheh River basin, southwest Iran using copulas

    NASA Astrophysics Data System (ADS)

    Dodangeh, Esmaeel; Shahedi, Kaka; Shiau, Jenq-Tzong; MirAkbari, Maryam

    2017-08-01

    Investigation of drought characteristics such as severity, duration, and frequency is crucial for water resources planning and management in a river basin. While the methodology for multivariate drought frequency analysis is well established through the application of copulas, the effects of different parameter estimation methods on the obtained results have not yet been investigated. This research conducts a comparative analysis between the parametric maximum likelihood method and the non-parametric Kendall τ method for copula parameter estimation. The methods were employed to study joint severity-duration probability and recurrence intervals in the Karkheh River basin (southwest Iran), which is facing severe water-deficit problems. Daily streamflow data at three hydrological gauging stations (Tang Sazbon, Huleilan, and Polchehr) near the Karkheh dam were used to draw flow duration curves (FDC) for these three stations. The Q_{75} index extracted from the FDC was set as the threshold level to extract drought characteristics such as drought duration and severity on the basis of run theory. Drought duration and severity were separately modeled using univariate probabilistic distributions, and gamma-GEV, LN2-exponential, and LN2-gamma were selected as the best paired drought severity-duration inputs for the copulas according to the Akaike Information Criterion (AIC), Kolmogorov-Smirnov, and chi-square tests. The Archimedean Clayton and Frank copulas and the extreme-value Gumbel copula were employed to construct joint cumulative distribution functions (JCDF) of droughts for each station. The Frank copula at Tang Sazbon and the Gumbel copula at the Huleilan and Polchehr stations were identified as the best copulas based on performance evaluation criteria including AIC, BIC, log-likelihood, and root mean square error (RMSE) values. Based on the RMSE values, the nonparametric Kendall τ method is preferred to the parametric maximum likelihood estimation method. The results showed greater drought return periods for the parametric ML method in comparison to the nonparametric Kendall τ estimation method. The results also showed that stations located on tributaries (Huleilan and Polchehr) have close return periods, while the station along the main river (Tang Sazbon) has smaller return periods for drought events with identical drought duration and severity.

  7. A goal attainment pain management program for older adults with arthritis.

    PubMed

    Davis, Gail C; White, Terri L

    2008-12-01

    The purpose of this study was to test a pain management intervention that integrates goal setting with older adults (age ≥65) living independently in residential settings. This preliminary testing of the Goal Attainment Pain Management Program (GAPMAP) included a sample of 17 adults (mean age 79.29 years) with self-reported pain related to arthritis. Specific study aims were to: 1) explore the use of individual goal setting; 2) determine participants' levels of goal attainment; 3) determine whether changes occurred in the pain management methods used and found to be helpful by GAPMAP participants; and 4) determine whether changes occurred in selected pain-related variables (i.e., experience of living with persistent pain, the expected outcomes of pain management, pain management barriers, and global ratings of perceived pain intensity and success of pain management). Because of the small sample size, both parametric (t test) and nonparametric (Wilcoxon signed rank test) analyses were used to examine differences from pretest to posttest. Results showed that older individuals could successfully participate in setting and attaining individual goals. Thirteen of the 17 participants (76%) met their goals at the expected level or above. Two management methods (exercise and using a heated pool, tub, or shower) were used significantly more often after the intervention, and two methods (exercise and distraction) were identified as significantly more helpful. Two pain-related variables (experience of living with persistent pain and expected outcomes of pain management) revealed significant change, and all of those tested showed overall improvement.

  8. Analysis of the Bayesian Cramér-Rao lower bound in astrometry. Studying the impact of prior information in the location of an object

    NASA Astrophysics Data System (ADS)

    Echeverria, Alex; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos

    2016-10-01

    Context. The best precision that can be achieved to estimate the location of a stellar-like object is a topic of permanent interest in the astrometric community. Aims: We analyze bounds for the best position estimation of a stellar-like object on a CCD detector array in a Bayesian setting where the position is unknown, but where we have access to a prior distribution. In contrast to a parametric setting where we estimate a parameter from observations, the Bayesian approach estimates a random object (i.e., the position is a random variable) from observations that are statistically dependent on the position. Methods: We characterize the Bayesian Cramér-Rao (CR) bound, which bounds the minimum mean square error (MMSE) of the best estimator of the position of a point source on a linear CCD-like detector, as a function of the properties of the detector, the source, and the background. Results: We quantify and analyze the increase in astrometric performance from the use of a prior distribution of the object position, which is not available in the classical parametric setting. This gain is shown to be significant for various observational regimes, in particular in the case of faint objects or when the observations are taken under poor conditions. Furthermore, we present numerical evidence that the MMSE estimator of this problem tightly achieves the Bayesian CR bound. This is a remarkable result, demonstrating that all the performance gains presented in our analysis can be achieved with the MMSE estimator. Conclusions: The Bayesian CR bound can be used as a benchmark indicator of the expected maximum positional precision of a set of astrometric measurements in which prior information can be incorporated. This bound can be achieved through the conditional mean estimator, in contrast to the parametric case where no unbiased estimator precisely reaches the CR bound.

  9. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    NASA Astrophysics Data System (ADS)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret and to estimate. One way to estimate a PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means to estimate a PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.

  10. Experiments in encoding multilevel images as quadtrees

    NASA Technical Reports Server (NTRS)

    Lansing, Donald L.

    1987-01-01

    Image storage requirements for several encoding methods are investigated, and the use of quadtrees with multi-gray-level or multicolor images is explored. The results of encoding a variety of images having up to 256 gray levels using three schemes (full raster, runlength, and quadtree) are presented. Although there is considerable literature on the use of quadtrees to store and manipulate binary images, their application to multilevel images is relatively undeveloped. The potential advantage of quadtree encoding is that an entire area with a uniform gray level may be encoded as a unit. A pointerless quadtree encoding scheme is described. Data are presented on the size of the quadtree required to encode selected images and on the relative storage requirements of the three encoding schemes. A segmentation scheme based on the statistical variation of gray levels within a quadtree quadrant is described. This parametric scheme may be used to control the storage required by an encoded image and to preprocess a scene for feature identification. Several sets of black-and-white and pseudocolor images obtained by varying the segmentation parameter are shown.

  11. Off-line wafer level reliability control: unique measurement method to monitor the lifetime indicator of gate oxide validated within bipolar/CMOS/DMOS technology

    NASA Astrophysics Data System (ADS)

    Gagnard, Xavier; Bonnaud, Olivier

    2000-08-01

    We have recently published a paper on a new rapid method for the determination of the lifetime of the gate oxide involved in a Bipolar/CMOS/DMOS (BCD) technology. Because this previous method was based on a current measurement with gate voltage as a parameter, needing several stress voltages, it was applied only by lot sampling. Thus, we sought an indicator to monitor the gate oxide lifetime during the wafer-level parametric test, involving only one measurement of the device on each wafer test cell. Using the Weibull law and the Crook model, combined with our recent model, we have developed a new test method needing only one electrical measurement of a MOS capacitor to monitor the quality of the gate oxide. Based also on a current measurement, the parameter is the lifetime indicator of the gate oxide. From the analysis of several wafers, we demonstrated the possibility of detecting a low-performance wafer, which corresponds to infantile failure on the Weibull plot. In order to insert this new method in the BCD parametric program, a parametric flowchart was established. This type of measurement is an important challenge, because the measurements currently performed, breakdown charge (Qbd) and breakdown electric field (Ebd) at the parametric level, and Ebd and interface state density (Dit) during the process, cannot guarantee the gate oxide lifetime throughout the fabrication process. This indicator measurement is the only one that predicts the lifetime decrease.

  12. Numerical analysis of the dynamic interaction between wheel set and turnout crossing using the explicit finite element method

    NASA Astrophysics Data System (ADS)

    Xin, L.; Markine, V. L.; Shevtsov, I. Y.

    2016-03-01

    A three-dimensional (3-D) explicit dynamic finite element (FE) model is developed to simulate the impact of the wheel on the crossing nose. The model consists of a wheel set moving over the turnout crossing. Realistic wheel, wing rail, and crossing geometries have been used in the model. Using this model, the dynamic responses of the system, such as the contact forces between the wheel and the crossing, crossing nose displacements and accelerations, and stresses in the rail material as well as in the sleepers and ballast, can be obtained. Detailed analysis of the wheel set and crossing interaction using the local contact stress state in the rail is possible as well, which provides a good basis for prediction of the long-term behaviour of the crossing (fatigue analysis). In order to tune and validate the FE model, field measurements conducted on several turnouts in the railway network of the Netherlands are used. The parametric study, including variations of the crossing nose geometry, demonstrates the capabilities of the developed model. The results of the validation and parametric study are presented and discussed.

  13. Combining large number of weak biomarkers based on AUC.

    PubMed

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize the AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Combining large number of weak biomarkers based on AUC

    PubMed Central

    Yan, Li; Tian, Lili; Liu, Song

    2018-01-01

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize the AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901

  15. Dark energy and fate of the Universe

    NASA Astrophysics Data System (ADS)

    Li, XiaoDong; Wang, Shuang; Huang, QingGuo; Zhang, Xin; Li, Miao

    2012-07-01

    We explore the ultimate fate of the Universe by using a divergence-free parametrization for dark energy, w(z) = w0 + wa[ln(2 + z)/(1 + z) − ln 2]. Unlike the Chevallier-Polarski-Linder parametrization, this parametrization has well-behaved, bounded behavior for both high redshifts and negative redshifts, and thus can genuinely cover many theoretical dark energy models. After constraining the parameter space of this parametrization by using the current cosmological observations, we find that, at the 95.4% confidence level, our Universe can still exist at least 16.7 Gyr before it ends in a big rip. Moreover, for the phantom energy dominated Universe, we find that a gravitationally bound system will be destroyed at a time t ≃ P√(2|1 + 3w(−1)|) / [6π|1 + w(−1)|] before the big rip, where P is the period of a circular orbit around this system.

  16. Sensitivity enhancement of remotely coupled NMR detectors using wirelessly powered parametric amplification.

    PubMed

    Qian, Chunqi; Murphy-Boesch, Joseph; Dodd, Stephen; Koretsky, Alan

    2012-09-01

    A completely wireless detection coil with an integrated parametric amplifier has been constructed to provide local amplification and transmission of MR signals. The sample coil is one element of a parametric amplifier using a zero-bias diode that mixes the weak MR signal with a strong pump signal that is obtained from an inductively coupled external loop. The NMR sample coil develops current gain via reduction in the effective coil resistance. Higher gain can be obtained by adjusting the level of the pumping power closer to the oscillation threshold, but the gain is ultimately constrained by the bandwidth requirement of MRI experiments. A feasibility study here shows that on a NaCl/D₂O phantom, ²³Na signals with 20 dB of gain can be readily obtained with a concomitant bandwidth of 144 kHz. This gain is high enough that the integrated coil with parametric amplifier, which is coupled inductively to external loops, can provide sensitivity approaching that of direct wire connection. Copyright © 2012 Wiley Periodicals, Inc.

  17. Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Ashe, Thomas L.; Otting, William D.

    1993-01-01

    The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance to significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.

  18. Parametric Cost Models for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney

    2010-01-01

    Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.

  19. Spectrally tunable, temporally shaped parametric front end to seed high-energy Nd:glass laser systems

    DOE PAGES

    Dorrer, C.; Consentino, A.; Cuffney, R.; ...

    2017-10-18

    Here, we describe a parametric-amplification–based front end for seeding high-energy Nd:glass laser systems. The front end delivers up to 200 mJ by parametric amplification in 2.5-ns flat-in-time pulses tunable over more than 15 nm. Spectral tunability over a range larger than what is typically achieved by laser media at similar energy levels is implemented to investigate cross-beam energy transfer in multibeam target experiments. The front-end operation is simulated to explain the amplified signal’s sensitivity to the input pump and signal. A large variety of amplified waveforms are generated by closed-loop pulse shaping. Various properties and limitations of this front end are discussed.

  20. Spectrally tunable, temporally shaped parametric front end to seed high-energy Nd:glass laser systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorrer, C.; Consentino, A.; Cuffney, R.

    Here, we describe a parametric-amplification–based front end for seeding high-energy Nd:glass laser systems. The front end delivers up to 200 mJ by parametric amplification in 2.5-ns flat-in-time pulses tunable over more than 15 nm. Spectral tunability over a range larger than what is typically achieved by laser media at similar energy levels is implemented to investigate cross-beam energy transfer in multibeam target experiments. The front-end operation is simulated to explain the amplified signal’s sensitivity to the input pump and signal. A large variety of amplified waveforms are generated by closed-loop pulse shaping. Various properties and limitations of this front end are discussed.

  1. Multi Response Optimization of Laser Micro Marking Process:A Grey- Fuzzy Approach

    NASA Astrophysics Data System (ADS)

    Shivakoti, I.; Das, P. P.; Kibria, G.; Pradhan, B. B.; Mustafa, Z.; Ghadai, R. K.

    2017-07-01

    The selection of an optimal parametric combination for efficient machining has always been a challenging issue for manufacturing researchers. An optimal parametric combination provides better machining, which improves productivity and product quality and subsequently reduces production cost and time. The paper presents a hybrid approach combining grey relational analysis and fuzzy logic to obtain the optimal parametric combination for laser beam micro-marking on gallium nitride (GaN) work material. Response surface methodology was used for the design of experiments, considering three parameters at five levels each. Current, frequency, and scanning speed were taken as the process parameters, and mark width, mark depth, and mark intensity as the process responses.
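
The grey relational step of such a hybrid approach can be sketched as follows. The runs, normalization directions, and the distinguishing coefficient zeta = 0.5 below are illustrative assumptions, not the authors' data.

```python
def grey_relational_grade(runs, larger_better, zeta=0.5):
    """Sketch of grey relational analysis (GRA) for multi-response
    optimization; a simplified stand-in for the paper's procedure.
    runs: one response vector per experimental run.
    larger_better: per-response flag for normalization direction."""
    n_resp = len(runs[0])
    cols = list(zip(*runs))
    # Normalize each response to [0, 1], with 1 as the ideal value.
    norm = []
    for j in range(n_resp):
        lo, hi = min(cols[j]), max(cols[j])
        span = (hi - lo) or 1.0
        if larger_better[j]:
            norm.append([(x - lo) / span for x in cols[j]])
        else:
            norm.append([(hi - x) / span for x in cols[j]])
    # Deviation of each run from the ideal (all-ones) sequence.
    dev = [[1.0 - norm[j][i] for j in range(n_resp)] for i in range(len(runs))]
    dmin = min(min(row) for row in dev)
    dmax = max(max(row) for row in dev)
    # Grey relational coefficient per response, averaged into a grade.
    grades = []
    for row in dev:
        coeffs = [(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
        grades.append(sum(coeffs) / n_resp)
    return grades

# Three hypothetical runs, two responses: maximize depth, minimize width.
runs = [[10.0, 0.5], [12.0, 0.4], [9.0, 0.6]]
grades = grey_relational_grade(runs, larger_better=[True, False])
assert max(range(3), key=grades.__getitem__) == 1  # run 2 dominates on both
```

The run with the highest grade is the best compromise across all responses; the fuzzy-logic step of the hybrid method would then refine this ranking.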

  2. Acoustic characterization of a nonlinear vibroacoustic absorber at low frequencies and high sound levels

    NASA Astrophysics Data System (ADS)

    Chauvin, A.; Monteil, M.; Bellizzi, S.; Côte, R.; Herzog, Ph.; Pachebat, M.

    2018-03-01

    A nonlinear vibroacoustic absorber (Nonlinear Energy Sink: NES), involving a clamped thin latex membrane, is assessed in the acoustic domain. This NES is considered here as a one-port acoustic system, analyzed at low frequencies and at increasing excitation levels. This dynamic and frequency range requires a suitable experimental technique, which is presented first. It involves a specific impedance tube able to accommodate samples of sufficient size and to reach high sound levels with a guaranteed linear response thanks to a specific acoustic source. The identification method presented here requires a single pressure measurement and is calibrated from a set of known acoustic loads. The NES reflection coefficient is then estimated at increasing source levels, showing its strong level dependency, which is presented as a means to understand energy dissipation. The results of the experimental tests are first compared to a nonlinear viscoelastic model of the membrane absorber. In a second step, a family of one-degree-of-freedom models, treated as equivalent Helmholtz resonators, is identified from the measurements, allowing a parametric description of the NES behavior over a wide range of levels.

  3. A linear parameter-varying multiobjective control law design based on youla parametrization for a flexible blended wing body aircraft

    NASA Astrophysics Data System (ADS)

    Demourant, F.; Ferreres, G.

    2013-12-01

    This article presents a methodology for linear parameter-varying (LPV) multiobjective flight control law design for a blended wing body (BWB) aircraft, together with results. The method directly designs a control law parametrized with respect to measured flight parameters, through a multimodel convex design that optimizes a set of specifications over the full flight domain and different mass cases. The methodology is based on the Youla parametrization, which is convenient because closed-loop specifications are affine in the Youla parameter. The LPV multiobjective design method is detailed and applied to the flexible BWB aircraft example.

  4. Shape sensing using multi-core fiber optic cable and parametric curve solutions.

    PubMed

    Moore, Jason P; Rogge, Matthew D

    2012-01-30

    The shape of a multi-core optical fiber is calculated by numerically solving a set of Frenet-Serret equations describing the path of the fiber in three dimensions. Included in the Frenet-Serret equations are curvature and bending direction functions derived from distributed fiber Bragg grating strain measurements in each core. The method offers advantages over prior art in that it determines complex three-dimensional fiber shape as a continuous parametric solution rather than an integrated series of discrete planar bends. Results and error analysis of the method using a tri-core optical fiber are presented. Maximum error expressed as a percentage of fiber length was found to be 7.2%.
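
A toy version of this reconstruction, Euler-stepping the Frenet-Serret equations for given curvature and torsion functions. A real solver would derive curvature and bending direction from the Bragg-grating strain measurements and use a higher-order integrator; this sketch only shows the shape-from-curvature idea.

```python
import math

def integrate_frenet_serret(kappa, tau, length, n=2000):
    """Recover a 3-D curve from curvature kappa(s) and torsion tau(s)
    by Euler-stepping the Frenet-Serret frame equations:
        dT/ds = kappa*N,  dN/ds = -kappa*T + tau*B,  dB/ds = -tau*N,
    with position dr/ds = T. Illustrative only."""
    ds = length / n
    r = [0.0, 0.0, 0.0]
    T = [1.0, 0.0, 0.0]   # tangent
    N = [0.0, 1.0, 0.0]   # normal
    B = [0.0, 0.0, 1.0]   # binormal
    path = [tuple(r)]
    for i in range(n):
        s = i * ds
        k, tors = kappa(s), tau(s)
        dT = [k * N[j] for j in range(3)]
        dN = [-k * T[j] + tors * B[j] for j in range(3)]
        dB = [-tors * N[j] for j in range(3)]
        r = [r[j] + T[j] * ds for j in range(3)]   # advance along old tangent
        T = [T[j] + dT[j] * ds for j in range(3)]
        N = [N[j] + dN[j] * ds for j in range(3)]
        B = [B[j] + dB[j] * ds for j in range(3)]
        path.append(tuple(r))
    return path

# Constant curvature, zero torsion traces a circle of radius 1/kappa,
# so integrating over one circumference should close on the start point:
path = integrate_frenet_serret(lambda s: 1.0, lambda s: 0.0, 2 * math.pi)
assert all(abs(c) < 0.05 for c in path[-1])
```

The closure error here (about 1% of the radius) is the Euler discretization error; the paper's continuous parametric solution avoids the planar-bend discretization rather than this integration error.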

  5. First integrals and parametric solutions of third-order ODEs admitting sl(2, R)

    NASA Astrophysics Data System (ADS)

    Ruiz, A.; Muriel, C.

    2017-05-01

    A complete set of first integrals for any third-order ordinary differential equation admitting a Lie symmetry algebra isomorphic to sl(2, R) is explicitly computed. These first integrals are derived from two linearly independent solutions of a linear second-order ODE, without additional integration. The general solution in parametric form can be obtained by using the computed first integrals. The study includes a parallel analysis of the four inequivalent realizations of sl(2, R), and it is applied to several particular examples. These include the generalized Chazy equation, as well as an example of an equation which admits the most complicated of the four inequivalent realizations.

  6. Two-sample tests and one-way MANOVA for multivariate biomarker data with nondetects.

    PubMed

    Thulin, M

    2016-09-10

    Testing whether the mean vector of a multivariate set of biomarkers differs between several populations is an increasingly common problem in medical research. Biomarker data is often left censored because some measurements fall below the laboratory's detection limit. We investigate how such censoring affects multivariate two-sample and one-way multivariate analysis of variance tests. Type I error rates, power and robustness to increasing censoring are studied, under both normality and non-normality. Parametric tests are found to perform better than non-parametric alternatives, indicating that the current recommendations for analysis of censored multivariate data may have to be revised. Copyright © 2016 John Wiley & Sons, Ltd.
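
For illustration, one common simple treatment of such nondetects substitutes LOD/2 before applying a parametric test. The helper names and data below are hypothetical, and the paper's comparison of parametric and non-parametric tests is far more thorough; this only shows the mechanics of a substitution-plus-parametric-test pipeline on one biomarker.

```python
import math
from statistics import mean, variance

def substitute_nondetects(values, lod):
    """Replace left-censored observations (below the detection limit)
    with LOD/2 -- a common simple substitution, shown for illustration."""
    return [v if v >= lod else lod / 2.0 for v in values]

def welch_t(x, y):
    """Welch two-sample t statistic (a parametric comparison of means)."""
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se

lod = 1.0
group_a = [0.2, 0.5, 1.4, 2.1, 2.8, 3.0]   # values below 1.0 are nondetects
group_b = [0.3, 0.9, 0.8, 1.2, 1.1, 1.6]
t = welch_t(substitute_nondetects(group_a, lod),
            substitute_nondetects(group_b, lod))
assert t > 0  # group A has the higher mean after substitution
```

How the censoring mechanism distorts such statistics, and whether parametric tests remain preferable under it, is exactly what the paper studies.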

  7. Relationship of awards in multiple choice questions and structured answer questions in the undergraduate years and their effectiveness in evaluation.

    PubMed

    Khan, Junaid Sarfraz; Mukhtar, Osama; Tabasum, Saima; Shaheen, Naveed; Farooq, M; Irfan, M Abdul; Sattar, Ajmal; Nabeel, M; Imran, M; Rafique, Sadia; Iqbal, Maryam; Afzal, M Sheraz; Hameed, M Shahbaz; Habib, Maryam; Jabeen, Uzma; Mubbashar, Malik Hussain

    2010-01-01

    A number of evaluation tools for assessing the cognitive and affective domains in accordance with Bloom's taxonomy are available for summative assessment. At the University of Health Sciences, Lahore, Multiple Choice Questions (MCQs) and Structured Answer Questions (SAQs) are used for the evaluation of the cognitive domain at all six hierarchical levels of the taxonomy, using tables of specifications to ensure content validity. The rationale of having two evaluation tools seemingly similar in their evaluative competency, yet differing in feasibility of construction, administration and marking, is challenged in this study. The MCQ and SAQ awards of the ten percent sample population, amounting to 985 students in fifteen medical and dental colleges across Punjab, were entered into SPSS-15 and correlated according to the cognitive and affective level of assessment in relation to Bloom's taxonomy and their grouping in the tables of specifications, using parametric tests. 3494 anonymously administered questionnaires were analyzed using Ethnograph. No statistically significant difference was found in the mean marks obtained by the students when MCQs and SAQs were compared according to their groupings in the tables of specifications at all levels of cognitive hierarchical testing. End-of-year cognitive-level testing targets were not met, and more questions were set at the lower cognitive testing levels. Expenses incurred in setting MCQs and SAQs were comparable, but conduct and assessment costs for MCQs and SAQs were 6% and 94% of the total, respectively. In both MCQs and SAQs students performed better at higher cognitive testing levels, whereas the SAQs and MCQs were able to marginally test only the lower levels of the affective domain. Students' feedback showed that attempting MCQs required critical thinking, experience and practice. MCQs are the more cost-effective means of assessment across the levels of the cognitive domain.

  8. Influence of signal intensity non-uniformity on brain volumetry using an atlas-based method.

    PubMed

    Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2012-01-01

    Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.
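
The change-ratio definition from the abstract is straightforward to compute; the volumes below are invented for illustration.

```python
def change_ratios(volumes):
    """Structural volume change ratio (%) across MR protocols, as
    defined in the abstract:
        100 * (measured - mean of protocols) / mean of protocols.
    A lower |ratio| means lower system (scanner/protocol) dependency."""
    m = sum(volumes) / len(volumes)
    return [100.0 * (v - m) / m for v in volumes]

# Hypothetical volumes (ml) of one structure from five MR protocols:
ratios = change_ratios([3.9, 4.1, 4.0, 4.2, 3.8])
assert abs(sum(ratios)) < 1e-6                       # deviations cancel by construction
assert abs(max(abs(r) for r in ratios) - 5.0) < 1e-6  # worst protocol is 5% off
```

The paper's finding is that N3 bias correction shrinks these ratios, i.e. narrows the spread of a structure's measured volume across protocols.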

  9. Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method

    PubMed Central

    Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2012-01-01

    Objective Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Materials and Methods Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. Results A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. Conclusion The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials. PMID:22778560

  10. Multi-parametric variational data assimilation for hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.

    2017-12-01

    Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to take better-informed decisions. A common practice in probabilistic streamflow forecasting is to force deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at the representation of meteorological uncertainty but neglects uncertainty of the hydrological model as well as its initial conditions. Complementary approaches use probabilistic data assimilation techniques to receive a variety of initial states or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data, as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation in the lead time performance of perfect forecasts (i.e. observed data as forcing variables) as well as deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% for CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of rank histogram and produces a narrower ensemble spread.
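
The CRPS used to score such ensemble forecasts can be estimated directly from the members. A sketch using the standard sample estimator (the forecast values below are invented):

```python
def crps_ensemble(members, obs):
    """Sample CRPS for an ensemble forecast:
        CRPS = E|X - y| - 0.5 * E|X - X'|,
    estimated over the ensemble members X. Lower is better; for a
    single deterministic member it reduces to the absolute error."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(a - b) for a in members for b in members) / (2.0 * m * m)
    return term1 - term2

obs = 10.0
sharp = [9.5, 10.0, 10.5]   # narrow ensemble centered on the observation
broad = [6.0, 10.0, 14.0]   # wide ensemble, same center
assert crps_ensemble(sharp, obs) < crps_ensemble(broad, obs)
assert crps_ensemble([12.0], obs) == 2.0  # deterministic case: |12 - 10|
```

A CRPS improvement of 23%, as reported for the multi-parametric assimilation, means this score averaged over forecasts dropped by that fraction relative to the deterministic approach.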

  11. Ten-watt level picosecond parametric mid-IR source broadly tunable in wavelength

    NASA Astrophysics Data System (ADS)

    Vyvlečka, Michal; Novák, Ondřej; Roškot, Lukáš; Smrž, Martin; Mužík, Jiří; Endo, Akira; Mocek, Tomáš

    2018-02-01

    The mid-IR wavelength range (between 2 and 8 μm) offers promising applications, such as minimally invasive neurosurgery, gas sensing, and plastic and polymer processing. The maturity of high-average-power near-IR lasers is beneficial for powerful mid-IR generation by optical parametric conversion. We utilize an in-house-developed Yb:YAG thin-disk laser of 100 W average power at 77 kHz repetition rate, 1030 nm wavelength, and about 2 ps pulse width for pumping a ten-watt-level picosecond mid-IR source. The seed beam is obtained by optical parametric generation in a double-pass 10-mm-long PPLN crystal pumped by a part of the fundamental near-IR beam. Tunability of the signal wavelength between 1.46 μm and 1.95 μm was achieved with power of several tens of milliwatts. The main part of the fundamental beam pumps an optical parametric amplification stage, which includes a walk-off-compensating pair of 10-mm-long KTP crystals. We have already demonstrated OPA output signal and idler beam tunability of 1.70-1.95 μm and 2.18-2.62 μm, respectively. The signal and idler beams were amplified up to 8.5 W and 5 W, respectively, at 42 W pump power without evidence of strong saturation. Thus, an increase in signal and idler output power is expected as pump power increases.

  12. Fuzzy interval Finite Element/Statistical Energy Analysis for mid-frequency analysis of built-up systems with mixed fuzzy and interval parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2016-10-01

    This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems; thus an uncertain ensemble combining non-parametric with mixed fuzzy and interval parametric uncertainties comes into being. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA method (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by also retaining the second-order terms of the Taylor series, with all mixed second-order terms neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
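
The first-order interval perturbation idea can be illustrated on a generic response function. This is a generic sketch of the technique at a single fuzzy cut level, not the FE/SEA implementation: expand the response about the interval midpoints and bound it by the gradient-weighted interval radii.

```python
def first_order_interval(f, grad, center, radii):
    """First-order interval perturbation bound for y = f(x) with each
    x_i in [center_i - radii_i, center_i + radii_i]:
        y in f(center) -/+ sum_i |df/dx_i(center)| * radii_i.
    Exact for linear responses; approximate otherwise."""
    y0 = f(center)
    spread = sum(abs(g) * r for g, r in zip(grad(center), radii))
    return y0 - spread, y0 + spread

# Toy response y = 3*x1 - 2*x2 with x1 in [1, 3] and x2 in [0, 2]:
f = lambda x: 3.0 * x[0] - 2.0 * x[1]
grad = lambda x: (3.0, -2.0)
lo, hi = first_order_interval(f, grad, center=(2.0, 1.0), radii=(1.0, 1.0))
assert (lo, hi) == (-1.0, 9.0)  # matches the exact range of this linear response
```

In the paper's level-cut scheme, this computation is repeated at each membership cut of the fuzzy parameters and the resulting intervals are stacked into a fuzzy-bounded interval.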

  13. Model-free aftershock forecasts constructed from similar sequences in the past

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequences outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. 
The similarity forecast may be useful to emergency managers and non-specialists when confidence or expertise in parametric forecasting may be lacking. The method makes over-tuning impossible, and minimizes the rate of surprises. At the least, this forecast constitutes a useful benchmark for more precisely tuned parametric forecasts.
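
One plausible reading of the similarity weighting (the authors' exact likelihood and outcome definitions may differ) is to weight each past sequence's outcome by the Poisson probability of its early event count under an intensity equal to the target sequence's count. The past-sequence data below are invented.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def similarity_forecast(past, target_count):
    """Hedged sketch of a similarity-weighted, non-parametric forecast:
    weight each past sequence by the Poisson probability of its early
    aftershock count given the target sequence's count, then average
    the past outcomes with those weights."""
    weights = [poisson_pmf(count, max(target_count, 1e-9))
               for count, _outcome in past]
    total = sum(weights)
    return sum(w * outcome for w, (_c, outcome) in zip(weights, past)) / total

# (early event count, eventual total aftershocks) for hypothetical sequences:
past = [(3, 30), (5, 55), (20, 210), (22, 240)]
# A target with 4 early events is dominated by the low-count sequences:
assert 25 < similarity_forecast(past, 4) < 70
# A target with 21 early events is dominated by the high-count sequences:
assert 180 < similarity_forecast(past, 21) < 250
```

Because the forecast is a weighted resampling of real past outcomes rather than a fitted distribution, it cannot be over-tuned, which is the property the abstract emphasizes.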

  14. TU-H-CAMPUS-IeP3-02: Neurovascular 4D Parametric Imaging Using Co-Registration of Biplane DSA Sequences with 3D Vascular Geometry Obtained From Cone Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balasubramoniam, A; Bednarek, D; Rudin, S

    Purpose: To create 4D parametric images using biplane Digital Subtraction Angiography (DSA) sequences co-registered with the 3D vascular geometry obtained from Cone Beam-CT (CBCT). Methods: We investigated a method to derive multiple 4D Parametric Imaging (PI) maps using only one CBCT acquisition. During this procedure a 3D-DSA geometry is stored and used subsequently for all 4D images. Each time a biplane DSA is acquired, we calculate 2D parametric maps of Bolus Arrival Time (BAT), Mean Transit Time (MTT) and Time to Peak (TTP). Arterial segments which are nearly parallel with one of the biplane imaging planes in the 2D parametric maps are co-registered with the 3D geometry. The values in the remaining vascular network are found using spline interpolation since the points chosen for co-registration on the vasculature are discrete and remaining regions need to be interpolated. To evaluate the method we used a patient CT volume data set for 3D printing a neurovascular phantom containing a complete Circle of Willis. We connected the phantom to a flow loop with a peristaltic pump, simulating physiological flow conditions. Contrast media was injected with an automatic injector at 10 ml/sec. Images were acquired with a Toshiba Infinix C-arm and 4D parametric image maps of the vasculature were calculated. Results: 4D BAT, MTT, and TTP parametric image maps of the Circle of Willis were derived. We generated color-coded 3D geometries which avoided artifacts due to vessel overlap or foreshortening in the projection direction. Conclusion: The software was tested successfully and multiple 4D parametric images were obtained from biplane DSA sequences without the need to acquire additional 3D-DSA runs. This can benefit the patient by reducing the contrast media and the radiation dose normally associated with these procedures. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.

  15. The impact of parametrized convection on cloud feedback.

    PubMed

    Webb, Mark J; Lock, Adrian P; Bretherton, Christopher S; Bony, Sandrine; Cole, Jason N S; Idelkadi, Abderrahmane; Kang, Sarah M; Koshiro, Tsuyoshi; Kawai, Hideaki; Ogura, Tomoo; Roehrig, Romain; Shin, Yechul; Mauritsen, Thorsten; Sherwood, Steven C; Vial, Jessica; Watanabe, Masahiro; Woelfle, Matthew D; Zhao, Ming

    2015-11-13

    We investigate the sensitivity of cloud feedbacks to the use of convective parametrizations by repeating the CMIP5/CFMIP-2 AMIP/AMIP + 4K uniform sea surface temperature perturbation experiments with 10 climate models which have had their convective parametrizations turned off. Previous studies have suggested that differences between parametrized convection schemes are a leading source of inter-model spread in cloud feedbacks. We find however that 'ConvOff' models with convection switched off have a similar overall range of cloud feedbacks compared with the standard configurations. Furthermore, applying a simple bias correction method to allow for differences in present-day global cloud radiative effects substantially reduces the differences between the cloud feedbacks with and without parametrized convection in the individual models. We conclude that, while parametrized convection influences the strength of the cloud feedbacks substantially in some models, other processes must also contribute substantially to the overall inter-model spread. The positive shortwave cloud feedbacks seen in the models in subtropical regimes associated with shallow clouds are still present in the ConvOff experiments. Inter-model spread in shortwave cloud feedback increases slightly in regimes associated with trade cumulus in the ConvOff experiments but is quite similar in the most stable subtropical regimes associated with stratocumulus clouds. Inter-model spread in longwave cloud feedbacks in strongly precipitating regions of the tropics is substantially reduced in the ConvOff experiments however, indicating a considerable local contribution from differences in the details of convective parametrizations. In both standard and ConvOff experiments, models with less mid-level cloud and less moist static energy near the top of the boundary layer tend to have more positive tropical cloud feedbacks. 
The role of non-convective processes in contributing to inter-model spread in cloud feedback is discussed. © 2015 The Authors.

  16. Estimating and modelling cure in population-based cancer studies within the framework of flexible parametric survival models.

    PubMed

    Andersson, Therese M L; Dickman, Paul W; Eloranta, Sandra; Lambert, Paul C

    2011-06-22

    When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. This new method gives similar results to a standard cure model, when it is reliable, and better fit when the standard cure model gives biased estimates. Cure models within the framework of flexible parametric models enable cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible with parametric cure models. © 2011 Andersson et al; licensee BioMed Central Ltd.
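
The mixture-cure structure being extended here, S(t) = π + (1 − π)·S_u(t), can be sketched with a Weibull stand-in for the survival of the "uncured". The paper's contribution is precisely to replace this fixed parametric choice with a flexible spline-based hazard; the parameters below are invented.

```python
import math

def cure_model_survival(t, cure_prop, scale, shape):
    """Mixture cure model survival: S(t) = pi + (1 - pi) * S_u(t),
    with a Weibull S_u(t) = exp(-(t/scale)^shape) standing in for the
    survival of the 'uncured'. Illustrative parametric choice only."""
    s_uncured = math.exp(-((t / scale) ** shape))
    return cure_prop + (1.0 - cure_prop) * s_uncured

pi = 0.4  # hypothetical cure proportion
assert abs(cure_model_survival(0.0, pi, scale=2.0, shape=1.2) - 1.0) < 1e-12
# Survival plateaus at the cure proportion as t grows large:
assert abs(cure_model_survival(100.0, pi, scale=2.0, shape=1.2) - pi) < 1e-6
```

The plateau at π is what "statistical cure" means operationally: beyond some time, the remaining patients experience no excess mortality, so the (relative) survival curve flattens at the cure proportion.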

  17. Estimating and modelling cure in population-based cancer studies within the framework of flexible parametric survival models

    PubMed Central

    2011-01-01

    Background When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Methods Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. Results We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. This new method gives similar results to a standard cure model, when it is reliable, and better fit when the standard cure model gives biased estimates. Conclusions Cure models within the framework of flexible parametric models enable cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible with parametric cure models. PMID:21696598

  18. The impact of parametrized convection on cloud feedback

    PubMed Central

    Webb, Mark J.; Lock, Adrian P.; Bretherton, Christopher S.; Bony, Sandrine; Cole, Jason N. S.; Idelkadi, Abderrahmane; Kang, Sarah M.; Koshiro, Tsuyoshi; Kawai, Hideaki; Ogura, Tomoo; Roehrig, Romain; Shin, Yechul; Mauritsen, Thorsten; Sherwood, Steven C.; Vial, Jessica; Watanabe, Masahiro; Woelfle, Matthew D.; Zhao, Ming

    2015-01-01

    We investigate the sensitivity of cloud feedbacks to the use of convective parametrizations by repeating the CMIP5/CFMIP-2 AMIP/AMIP + 4K uniform sea surface temperature perturbation experiments with 10 climate models which have had their convective parametrizations turned off. Previous studies have suggested that differences between parametrized convection schemes are a leading source of inter-model spread in cloud feedbacks. We find however that ‘ConvOff’ models with convection switched off have a similar overall range of cloud feedbacks compared with the standard configurations. Furthermore, applying a simple bias correction method to allow for differences in present-day global cloud radiative effects substantially reduces the differences between the cloud feedbacks with and without parametrized convection in the individual models. We conclude that, while parametrized convection influences the strength of the cloud feedbacks substantially in some models, other processes must also contribute substantially to the overall inter-model spread. The positive shortwave cloud feedbacks seen in the models in subtropical regimes associated with shallow clouds are still present in the ConvOff experiments. Inter-model spread in shortwave cloud feedback increases slightly in regimes associated with trade cumulus in the ConvOff experiments but is quite similar in the most stable subtropical regimes associated with stratocumulus clouds. Inter-model spread in longwave cloud feedbacks in strongly precipitating regions of the tropics is substantially reduced in the ConvOff experiments however, indicating a considerable local contribution from differences in the details of convective parametrizations. In both standard and ConvOff experiments, models with less mid-level cloud and less moist static energy near the top of the boundary layer tend to have more positive tropical cloud feedbacks. 
The role of non-convective processes in contributing to inter-model spread in cloud feedback is discussed. PMID:26438278

  19. Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.

    PubMed

    Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I

    2018-06-26

    The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data that have been used in the fields of medical and environmental sciences. This work presents a case study in which data of thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the MLE is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
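
    The censored-likelihood idea behind the MLE approach can be sketched directly: detected residues contribute density terms, while nondetects contribute the probability mass below their limit of detection. A minimal sketch, assuming a lognormal residue distribution and small hypothetical residue/LOD arrays rather than the thiamethoxam data:

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical residue data (ppm): detected values, plus limits of
# detection (LODs) for the left-censored nondetects.
detects = np.array([0.012, 0.034, 0.050, 0.021, 0.090])
lods = np.array([0.010, 0.010, 0.020])  # nondetects: value < LOD

def neg_log_lik(params):
    """Negative log-likelihood of a lognormal fit with left-censoring:
    detects use the density, nondetects use the CDF at their LOD."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # parametrize log(sigma) to keep sigma > 0
    ll = stats.norm.logpdf(np.log(detects), mu, sigma).sum() - np.log(detects).sum()
    ll += stats.norm.logcdf(np.log(lods), mu, sigma).sum()
    return -ll

res = optimize.minimize(neg_log_lik, x0=[np.log(0.02), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
mean_hat = np.exp(mu_hat + sigma_hat**2 / 2)  # lognormal mean
print(mu_hat, sigma_hat, mean_hat)
```

    Unlike simple substitution (e.g. replacing each nondetect with LOD/2), the censored likelihood stays well defined even when most of the data set is below detection limits.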

  20. Thoracic Injury Risk Curves for Rib Deflections of the SID-IIs Build Level D.

    PubMed

    Irwin, Annette L; Crawford, Greg; Gorman, David; Wang, Sikui; Mertz, Harold J

    2016-11-01

    Injury risk curves for SID-IIs thorax and abdomen rib deflections proposed for future NCAP side impact evaluations were developed from tests conducted with the SID-IIs FRG. Since the floating rib guide is known to reduce the magnitude of the peak rib deflections, injury risk curves developed from SID-IIs FRG data are not appropriate for use with SID-IIs build level D. PMHS injury data from three series of sled tests and one series of whole-body drop tests are paired with thoracic rib deflections from equivalent tests with SID-IIs build level D. Where possible, the rib deflections of SID-IIs build level D were scaled to adjust for differences in impact velocity between the PMHS and SID-IIs tests. Injury risk curves developed by the Mertz-Weber modified median rank method are presented and compared to risk curves developed by other parametric and non-parametric methods.

  1. A Feature-based Approach to Big Data Analysis of Medical Images

    PubMed Central

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
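
    The O(log N) indexing and the KNN density estimation it enables can be illustrated with a k-d tree; a minimal sketch, using a random stand-in for the feature database (real scale-invariant descriptors are higher-dimensional):

```python
from math import gamma, pi

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Hypothetical stand-in for the feature database: N descriptor vectors
# (8-D keeps the demo fast; real descriptors have more dimensions).
features = rng.normal(size=(10_000, 8))
tree = cKDTree(features)  # build once; each query is ~O(log N)

def knn_density(query, k=10):
    """Classic KNN density estimate: k / (N * volume of the ball that
    just reaches the k-th neighbour). The paper's generative model
    generalizes this style of estimator."""
    dist, _ = tree.query(query, k=k)
    r, d = dist[-1], features.shape[1]
    ball_vol = pi ** (d / 2) / gamma(d / 2 + 1) * r ** d
    return k / (len(features) * ball_vol)

density = knn_density(np.zeros(8))
print(density)  # positive and finite
```

    Because the tree is built once and queried on demand, the estimate needs no off-line training phase, mirroring the on-the-fly queries described above.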

  2. A Feature-Based Approach to Big Data Analysis of Medical Images.

    PubMed

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.

  3. Technology needs for lunar and Mars space transfer systems

    NASA Technical Reports Server (NTRS)

    Woodcock, Gordon R.; Cothran, Bradley C.; Donahue, Benjamin; Mcghee, Jerry

    1991-01-01

    The determination of appropriate space transportation technologies and operating modes is discussed with respect to both lunar and Mars missions. Three levels of activity are set forth to examine the sensitivity of transportation preferences including 'minimum,' 'full science,' and 'industrialization and settlement' categories. High-thrust-profile missions for lunar and Mars transportation are considered in terms of their relative advantages, and transportation options are defined in terms of propulsion and braking technologies. Costs and life-cycle cost estimates are prepared for the transportation preferences by using a parametric cost model, and a return-on-investment summary is given. Major technological needs for the programs are listed and include storable propulsion systems; cryogenic engines and fluids management; aerobraking; and nuclear thermal, nuclear electric, electric, and solar electric propulsion technologies.

  4. [How to start a neuroimaging study].

    PubMed

    Narumoto, Jin

    2012-06-01

    In order to help researchers understand how to start a neuroimaging study, several tips are described in this paper. These include 1) Choice of an imaging modality, 2) Statistical method, and 3) Interpretation of the results. 1) There are several imaging modalities available in clinical research. Advantages and disadvantages of each modality are described. 2) Statistical Parametric Mapping, which is the most common statistical software for neuroimaging analysis, is described in terms of parameter setting in normalization and level of significance. 3) In the discussion section, the region which shows a significant difference between patients and normal controls should be discussed in relation to the neurophysiology of the disease, making reference to previous reports from neuroimaging studies in normal controls, lesion studies and animal studies. A typical pattern of discussion is described.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanna, T.; Sakkaravarthi, K.; Kumar, C. Senthil

    In this paper, we have studied the integrability nature of a system of three-coupled Gross-Pitaevskii type nonlinear evolution equations arising in the context of spinor Bose-Einstein condensates by applying the Painleve singularity structure analysis. We show that only for two sets of parametric choices, corresponding to the known integrable cases, the system passes the Painleve test.

  6. Toward complete pion nucleon amplitudes

    DOE PAGES

    Mathieu, Vincent; Danilkin, Igor V.; Fernández-Ramírez, Cesar; ...

    2015-10-05

    We compare low-energy partial-wave analyses of πN scattering with high-energy data via finite-energy sum rules. We also construct a new set of amplitudes by matching the imaginary part from the low-energy analysis with the high-energy Regge parametrization and then reconstruct the real parts using dispersion relations.

  7. Is Morphosyntactic Change Really Rare?

    ERIC Educational Resources Information Center

    Thomason, Sarah G.

    2011-01-01

    Jurgen Meisel argues that "grammatical variation...can be described...in terms of parametric variation", and--crucially for his arguments in this paper--that "parameter settings do not change across the lifespan". To this extent he adopts the standard generative view, but he then departs from what he calls "the literature on historical…

  8. Self-induced parametric amplification arising from nonlinear elastic coupling in a micromechanical resonating disk gyroscope

    PubMed Central

    Nitzan, Sarah H.; Zega, Valentina; Li, Mo; Ahn, Chae H.; Corigliano, Alberto; Kenny, Thomas W.; Horsley, David A.

    2015-01-01

    Parametric amplification, resulting from intentionally varying a parameter in a resonator at twice its resonant frequency, has been successfully employed to increase the sensitivity of many micro- and nano-scale sensors. Here, we introduce the concept of self-induced parametric amplification, which arises naturally from nonlinear elastic coupling between the degenerate vibration modes in a micromechanical disk-resonator, and is not externally applied. The device functions as a gyroscope wherein angular rotation is detected from Coriolis coupling of elastic vibration energy from a driven vibration mode into a second degenerate sensing mode. While nonlinear elasticity in silicon resonators is extremely weak, in this high quality-factor device, ppm-level nonlinear elastic effects result in an order-of-magnitude increase in the observed sensitivity to Coriolis force relative to linear theory. Perfect degeneracy of the primary and secondary vibration modes is achieved through electrostatic frequency tuning, which also enables the phase and frequency of the parametric coupling to be varied, and we show that the resulting phase and frequency dependence of the amplification follow the theory of parametric resonance. We expect that this phenomenon will be useful for both fundamental studies of dynamic systems with low dissipation and for increasing signal-to-noise ratio in practical applications such as gyroscopes. PMID:25762243
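
    The core mechanism, modulating a resonator parameter at twice its natural frequency, can be reproduced with a textbook Mathieu-type oscillator; a minimal sketch under generic parameter values, not the gyroscope's coupled-mode equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mathieu-type oscillator: x'' + w0^2 * (1 + eps*cos(2*w0*t)) * x = 0.
# Modulating the stiffness at twice the natural frequency (the classic
# parametric-pumping condition) makes the oscillation amplitude grow.
w0, eps = 1.0, 0.2

def rhs(t, y):
    x, v = y
    return [v, -w0**2 * (1.0 + eps * np.cos(2.0 * w0 * t)) * x]

sol = solve_ivp(rhs, (0.0, 100.0), [1e-3, 0.0], max_step=0.01, dense_output=True)
early = np.abs(sol.sol(np.linspace(0, 10, 500))[0]).max()
late = np.abs(sol.sol(np.linspace(90, 100, 500))[0]).max()
print(late / early)  # gain much greater than 1: parametric amplification
```

    The exponential growth rate scales with the modulation depth eps, which is why even ppm-level nonlinear elastic modulation can matter in a very high quality-factor resonator.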

  9. Self-induced parametric amplification arising from nonlinear elastic coupling in a micromechanical resonating disk gyroscope.

    PubMed

    Nitzan, Sarah H; Zega, Valentina; Li, Mo; Ahn, Chae H; Corigliano, Alberto; Kenny, Thomas W; Horsley, David A

    2015-03-12

    Parametric amplification, resulting from intentionally varying a parameter in a resonator at twice its resonant frequency, has been successfully employed to increase the sensitivity of many micro- and nano-scale sensors. Here, we introduce the concept of self-induced parametric amplification, which arises naturally from nonlinear elastic coupling between the degenerate vibration modes in a micromechanical disk-resonator, and is not externally applied. The device functions as a gyroscope wherein angular rotation is detected from Coriolis coupling of elastic vibration energy from a driven vibration mode into a second degenerate sensing mode. While nonlinear elasticity in silicon resonators is extremely weak, in this high quality-factor device, ppm-level nonlinear elastic effects result in an order-of-magnitude increase in the observed sensitivity to Coriolis force relative to linear theory. Perfect degeneracy of the primary and secondary vibration modes is achieved through electrostatic frequency tuning, which also enables the phase and frequency of the parametric coupling to be varied, and we show that the resulting phase and frequency dependence of the amplification follow the theory of parametric resonance. We expect that this phenomenon will be useful for both fundamental studies of dynamic systems with low dissipation and for increasing signal-to-noise ratio in practical applications such as gyroscopes.

  10. A Parametric Sizing Model for Molten Regolith Electrolysis Reactors to Produce Oxygen from Lunar Regolith

    NASA Technical Reports Server (NTRS)

    Schreiner, Samuel S.; Dominguez, Jesus A.; Sibille, Laurent; Hoffman, Jeffrey A.

    2015-01-01

    We present a parametric sizing model for a Molten Electrolysis Reactor that produces oxygen and molten metals from lunar regolith. The model has a foundation of regolith material properties validated using data from Apollo samples and simulants. A multiphysics simulation of an MRE reactor is developed and leveraged to generate a vast database of reactor performance and design trends. A novel design methodology is created which utilizes this database to parametrically design an MRE reactor that 1) can sustain the required mass of molten regolith, current, and operating temperature to meet the desired oxygen production level, 2) can operate for long durations via joule heated, cold wall operation in which molten regolith does not touch the reactor side walls, 3) can support a range of electrode separations to enable operational flexibility. Mass, power, and performance estimates for an MRE reactor are presented for a range of oxygen production levels. The effects of several design variables are explored, including operating temperature, regolith type/composition, batch time, and the degree of operational flexibility.

  11. High-power Femtosecond Optical Parametric Amplification at 1 kHz in BiB(3)O(6) pumped at 800 nm.

    PubMed

    Petrov, Valentin; Noack, Frank; Tzankov, Pancho; Ghotbi, Masood; Ebrahim-Zadeh, Majid; Nikolov, Ivailo; Buchvarov, Ivan

    2007-01-22

    Substantial power scaling of a travelling-wave femtosecond optical parametric amplifier, pumped near 800 nm by a 1 kHz Ti:sapphire laser amplifier, is demonstrated using monoclinic BiB(3)O(6) in a two-stage scheme with continuum seeding. Total energy output (signal plus idler) exceeding 1 mJ is achieved, corresponding to an intrinsic conversion efficiency of approximately 32% for the second stage. The tunability extends from 1.1 to 2.9 μm. The high parametric gain and broad amplification bandwidth of this crystal allowed the maintenance of the pump pulse duration, leading to pulse lengths less than 140 fs, both for the signal and idler pulses, even at such high output levels.

  12. Experimental demonstration of spatially coherent beam combining using optical parametric amplification.

    PubMed

    Kurita, Takashi; Sueda, Keiichi; Tsubakimoto, Koji; Miyanaga, Noriaki

    2010-07-05

    We experimentally demonstrated coherent beam combining using optical parametric amplification with a nonlinear crystal pumped by a random-phased multiple-beam array of the second harmonic of a Nd:YAG laser at a 10-Hz repetition rate. In the proof-of-principle experiment, the phase jump between two pump beams was precisely controlled by a motorized actuator. For the demonstration of multiple-beam combining, a random phase plate was used to create random-phased beamlets as a pump pulse. Far-field patterns of the pump, the signal, and the idler indicated that spatially coherent signal beams were obtained in both cases. This approach allows scaling of the intensity of optical parametric chirped pulse amplification up to the exawatt level while maintaining diffraction-limited beam quality.

  13. Approximation Model Building for Reliability & Maintainability Characteristics of Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.

    2000-01-01

    This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for life-cycle cost estimating, and for multidisciplinary design optimization.

  14. The urban heat island in Rio de Janeiro, Brazil, in the last 30 years using remote sensing data

    NASA Astrophysics Data System (ADS)

    Peres, Leonardo de Faria; Lucena, Andrews José de; Rotunno Filho, Otto Corrêa; França, José Ricardo de Almeida

    2018-02-01

    The aim of this work is to study the urban heat island (UHI) in the Metropolitan Area of Rio de Janeiro (MARJ) based on the analysis of land-surface temperature (LST) and land-use patterns retrieved from Landsat-5/Thematic Mapper (TM), Landsat-7/Enhanced Thematic Mapper Plus (ETM+) and Landsat-8/Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) data covering a 32-year period between 1984 and 2015. LST temporal evolution is assessed by comparing the average LST composites for 1984-1999 and 2000-2015, where the parametric Student t-test was conducted at the 5% significance level to map the pixels where LST for the more recent period is statistically significantly greater than for the previous one. The non-parametric Mann-Whitney-Wilcoxon rank-sum test has also confirmed at the same 5% significance level that the more recent period (2000-2015) has higher LST values. UHI intensity between 'urban' and 'rural/urban low density' ('vegetation') areas for 1984-1999 and 2000-2015 was established and confirmed by both parametric and non-parametric tests at the 1% significance level as 3.3 °C (5.1 °C) and 4.4 °C (7.1 °C), respectively. LST has statistically significantly (p-value < 0.01) increased over time in two of three land cover classes ('urban' and 'urban low density'), by 1.9 °C and 0.9 °C respectively, except in the 'vegetation' class. A spatial analysis was also performed to identify the urban pixels within MARJ where UHI is more intense by subtracting the LST of these pixels from the LST mean value of the 'vegetation' land-use class.
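
    Both significance tests used in this analysis are available in common statistics libraries; a minimal sketch on synthetic LST samples (hypothetical values, not the Landsat composites):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-pixel LST samples (degrees C) for the two periods;
# in the study these would be the 1984-1999 and 2000-2015 composites.
lst_early = rng.normal(30.0, 2.0, size=200)
lst_late = rng.normal(31.5, 2.0, size=200)

# Parametric one-sided Student t-test: is the later period hotter?
t_stat, t_p = stats.ttest_ind(lst_late, lst_early, alternative="greater")

# Non-parametric Mann-Whitney-Wilcoxon rank-sum test, same hypothesis.
u_stat, u_p = stats.mannwhitneyu(lst_late, lst_early, alternative="greater")

# Both tests reject at the 5% level for this clear warming signal.
print(t_p, u_p)
```

    Agreement between the parametric and non-parametric tests, as reported above, is a useful robustness check when the normality assumption behind the t-test is in doubt.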

  15. Impact Response Comparison Between Parametric Human Models and Postmortem Human Subjects with a Wide Range of Obesity Levels.

    PubMed

    Zhang, Kai; Cao, Libo; Wang, Yulong; Hwang, Eunjoo; Reed, Matthew P; Forman, Jason; Hu, Jingwen

    2017-10-01

    Field data analyses have shown that obesity significantly increases the occupant injury risks in motor vehicle crashes, but the injury assessment tools for people with obesity are largely lacking. The objectives of this study were to use a mesh morphing method to rapidly generate parametric finite element models with a wide range of obesity levels and to evaluate their biofidelity against impact tests using postmortem human subjects (PMHS). Frontal crash tests using three PMHS seated in a vehicle rear seat compartment with body mass index (BMI) from 24 to 40 kg/m2 were selected. To develop the human models matching the PMHS geometry, statistical models of external body shape, rib cage, pelvis, and femur were applied to predict the target geometry using age, sex, stature, and BMI. A mesh morphing method based on radial basis functions was used to rapidly morph a baseline human model into the target geometry. The model-predicted body excursions and injury measures were compared to the PMHS tests. Comparisons of occupant kinematics and injury measures between the tests and simulations showed reasonable correlations across the wide range of BMI levels. The parametric human models have the capability to account for the obesity effects on the occupant impact responses and injury risks. © 2017 The Obesity Society.
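
    The radial-basis-function morphing step can be sketched as interpolating a displacement field from landmark correspondences; the control points below are hypothetical stand-ins for the statistical shape model's predicted geometry:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Hypothetical landmarks: control points on the baseline body surface
# and their targets, e.g. as predicted by a statistical shape model.
src = rng.uniform(-1.0, 1.0, size=(20, 3))
dst = src * 1.15 + np.array([0.0, 0.0, 0.05])  # a "larger BMI" target

# Fit an RBF map to the displacement field d(x) = dst - src (one
# output per coordinate); landmarks are interpolated exactly.
morph = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")

# Morph the full mesh (random stand-in nodes here) by adding the
# interpolated displacement at each node.
nodes = rng.uniform(-1.0, 1.0, size=(1000, 3))
morphed = nodes + morph(nodes)

print(np.abs(src + morph(src) - dst).max())  # ~0 at the landmarks
```

    Because only the landmark correspondences change between subjects, the same baseline mesh can be re-morphed rapidly for each target BMI, which is the speed advantage the paper exploits.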

  16. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction of the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with the derived estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
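
    A variance ratio function of this kind can be approximated by brute-force Monte Carlo (less economical than the paper's single-sample estimators); a minimal sketch on a hypothetical nonlinear model:

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x1, x2):
    # Hypothetical nonlinear model output.
    return x1**2 + np.sin(x2) + 0.5 * x1 * x2

def output_variance(var_x1, n=200_000):
    """Monte Carlo estimate of Var(Y) with Var(X1) set to var_x1."""
    x1 = rng.normal(0.0, np.sqrt(var_x1), n)
    x2 = rng.normal(0.0, 1.0, n)
    return model(x1, x2).var()

base = output_variance(1.0)
# Variance ratio function: output variance after shrinking Var(X1),
# relative to the baseline -- this guides a targeted variance reduction.
ratios = {theta: output_variance(theta) / base for theta in (1.0, 0.5, 0.25)}
print(ratios)
```

    Reading off the ratio at several candidate input variances shows how much output-variance reduction each level of input-uncertainty reduction buys, which is exactly the targeted-reduction use case described above.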

  17. Broadly tunable picosecond ir source

    DOEpatents

    Campillo, A.J.; Hyer, R.C.; Shapiro, S.L.

    1980-04-23

    A picosecond traveling-wave parametric device capable of controlled spectral bandwidth and wavelength in the infrared is reported. Intense 1.064 μm picosecond pulses (1) pass through a 4.5 cm long LiNbO3 optical parametric oscillator crystal (2) set at its degeneracy angle. A broad band emerges, and a simple grating (3) and mirror (4) arrangement is used to inject a selected narrow band into a 2 cm long LiNbO3 optical parametric amplifier crystal (5) along a second pump line. Typical input energies at 1.064 μm along both pump lines are 6 to 8 mJ for the oscillator and 10 mJ for the amplifier. This yields 1 mJ of tunable output in the range 1.98 to 2.38 μm which, when down-converted in a 1 cm long CdSe crystal mixer (6), gives 2 μJ of tunable radiation over the 14.8 to 18.5 μm region. The bandwidth and wavelength of both the 2 and 16 μm radiation output are controlled solely by the diffraction grating.

  18. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Elizabeth Skubak, E-mail: ewolf@saintmarys.edu; Anderson, David F., E-mail: anderson@math.wisc.edu

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
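
    For context, the simplest baseline that such hybrid methods improve upon is a finite-difference estimator with common random numbers over exact (Gillespie) simulations; a minimal sketch on a birth-death CTMC (this is not the paper's hybrid pathwise estimator):

```python
import numpy as np

def gillespie_birth_death(k, gamma, t_end, rng):
    """Exact SSA for X -> X+1 at rate k, X -> X-1 at rate gamma*X."""
    t, x = 0.0, 0
    while True:
        a_birth, a_death = k, gamma * x
        a0 = a_birth + a_death
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        if rng.uniform() * a0 < a_birth:
            x += 1
        else:
            x -= 1

# Central finite-difference sensitivity d E[X(t)] / dk with common
# random numbers (same seed for the +h and -h runs).
k, gamma, t_end, h = 10.0, 1.0, 5.0, 0.5
diffs = []
for seed in range(2000):
    xp = gillespie_birth_death(k + h, gamma, t_end, np.random.default_rng(seed))
    xm = gillespie_birth_death(k - h, gamma, t_end, np.random.default_rng(seed))
    diffs.append((xp - xm) / (2 * h))
# Analytic value for comparison: (1 - exp(-gamma*t)) / gamma ~ 0.993.
print(np.mean(diffs))
```

    The variance of this naive estimator is what the pathwise and hybrid estimators are designed to cut down, especially when many parameter sensitivities are needed at once.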

  19. Theory of parametrically amplified electron-phonon superconductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babadi, Mehrtash; Knap, Michael; Martin, Ivar

    2017-07-01

    Ultrafast optical manipulation of ordered phases in strongly correlated materials is a topic of significant theoretical, experimental, and technological interest. Inspired by a recent experiment on light-induced superconductivity in fullerenes [M. Mitrano et al., Nature (London) 530, 461 (2016)], we develop a comprehensive theory of light-induced superconductivity in driven electron-phonon systems with lattice nonlinearities. In analogy with the operation of parametric amplifiers, we show how the interplay between the external drive and lattice nonlinearities leads to significantly enhanced effective electron-phonon couplings. We provide a detailed and unbiased study of the nonequilibrium dynamics of the driven system using the real-time Green's function technique. To this end, we develop a Floquet generalization of the Migdal-Eliashberg theory and derive a numerically tractable set of quantum Floquet-Boltzmann kinetic equations for the coupled electron-phonon system. We study the role of parametric phonon generation and electronic heating in destroying the transient superconducting state. Finally, we predict the transient formation of electronic Floquet bands in time- and angle-resolved photoemission spectroscopy experiments as a consequence of the proposed mechanism.

  20. Quantification of variability and uncertainty for air toxic emission inventories with censored emission factor data.

    PubMed

    Frey, H Christopher; Zhao, Yuchao

    2004-11-15

    Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparison of cumulative distributions of bootstrap confidence intervals and empirical data. The emission inventory 95% uncertainty ranges vary from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
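
    The parametric bootstrap step for the uncertainty in a mean emission factor follows a fit-resample-refit pattern; a minimal sketch assuming a hypothetical lognormal sample with no nondetects (so the censored-data fitting is omitted):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical emission-factor sample (e.g. g pollutant per unit activity).
data = rng.lognormal(mean=0.0, sigma=1.0, size=40)

# Parametric bootstrap: fit a lognormal by MLE, draw synthetic data sets
# from the fitted distribution, and recompute the mean for each draw.
mu_hat, sigma_hat = np.log(data).mean(), np.log(data).std(ddof=0)
boot_means = [
    rng.lognormal(mu_hat, sigma_hat, size=data.size).mean()
    for _ in range(5000)
]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo, data.mean(), hi)  # 95% uncertainty range around the mean
```

    The asymmetry of the resulting interval (wider on the high side for skewed distributions) is why the inventory uncertainty ranges above are reported as asymmetric percentages.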

  1. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is chosen so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the noise robustness of the retrieval results. Two common kinds of monomodal and bimodal ASDs, the log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distribution. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.

  2. Broadly tunable picosecond IR source

    DOEpatents

    Campillo, Anthony J.; Hyer, Ronald C.; Shapiro, Stanley J.

    1982-01-01

    A picosecond traveling-wave parametric device capable of controlled spectral bandwidth and wavelength in the infrared is reported. Intense 1.064 μm picosecond pulses (1) pass through a 4.5 cm long LiNbO3 optical parametric oscillator crystal (2) set at its degeneracy angle. A broad band emerges, and a simple grating (3) and mirror (4) arrangement is used to inject a selected narrow band into a 2 cm long LiNbO3 optical parametric amplifier crystal (5) along a second pump line. Typical input energies at 1.064 μm along both pump lines are 6-8 mJ for the oscillator and 10 mJ for the amplifier. This yields 1 mJ of tunable output in the range 1.98 to 2.38 μm which, when down-converted in a 1 cm long CdSe crystal mixer (6), gives 2 μJ of tunable radiation over the 14.8 to 18.5 μm region. The bandwidth and wavelength of both the 2 and 16 μm radiation output are controlled solely by the diffraction grating.

  3. Intracavity KTP optical parametric oscillator driven by a KLM Nd:GGG laser with a single AO modulator

    NASA Astrophysics Data System (ADS)

    Chu, Hongwei; Zhao, Shengzhi; Yang, Kejian; Zhao, Jia; Li, Yufei; Li, Tao; Li, Guiqiu; Li, Dechun; Qiao, Wenchao

    2015-05-01

    An intracavity KTiOPO4 (KTP) optical parametric oscillator (OPO) pumped by a Kerr-lens mode-locked (KLM) Nd:GGG laser near 1062 nm with a single AO modulator was realized for the first time. Mode-locked pulses of the signal wave were obtained with sub-nanosecond durations and repetition rates of several kilohertz (kHz). Under a diode pump power of 8.25 W, a maximum output power of 104 mW at a signal wavelength near 1569 nm was obtained at a repetition rate of 2 kHz. The highest pulse energy and peak power were estimated to be 80 μJ and 102 kW at a repetition rate of 1 kHz, respectively. The shortest pulse duration was measured to be 749 ps. By considering the Gaussian spatial distribution of the photon density and the Kerr-lens effect in the gain medium, a set of coupled rate equations for the Q-switched mode-locked (QML) intracavity optical parametric oscillator is given, and the numerical simulations broadly agree with the experimental results.

  4. Influencing agent group behavior by adjusting cultural trait values.

    PubMed

    Tuli, Gaurav; Hexmoor, Henry

    2010-10-01

    Social reasoning and norms among individuals that share cultural traits are largely fashioned by those traits. We have explored predominant sociological and cultural traits. We offer a methodology for parametrically adjusting relevant traits. This exploratory study heralds a capability to deliberately tune cultural group traits in order to produce a desired group behavior. To validate our methodology, we implemented a prototypical agent-based simulated test bed demonstrating an exemplar intelligence, surveillance, and reconnaissance scenario. A group of simulated agents traverses a hostile territory while a user adjusts their cultural group trait settings. Group and individual utilities are dynamically observed against parametric values for the selected traits. The uncertainty avoidance index and individualism are the cultural traits we examined in depth. Upon the user's training of the correspondence between cultural values and system utilities, users deliberately produce the desired system utilities by issuing changes to trait values. Specific cultural traits are without meaning outside of their context. Efficacy and timely application of traits in a given context do yield desirable results. This paper heralds a path for the control of large systems via parametric cultural adjustments.

  5. Parametric Model Based On Imputations Techniques for Partly Interval Censored Data

    NASA Astrophysics Data System (ADS)

    Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah

    2017-12-01

    The term ‘survival analysis’ describes, in a broad sense, a collection of statistical procedures for analyzing data in which the outcome variable of interest is the time until an event occurs; the time to failure of a specific experimental unit may be right, left, or interval censored, or the data may be Partly Interval Censored (PIC). In this paper, a parametric Cox model was analyzed via PIC data. Several imputation techniques were used: midpoint, left & right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results for this data set indicated that the parametric Cox model was superior in terms of estimation of survival functions, likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, midpoint, random, mean, and median showed better results with respect to estimation of the survival function.
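    The listed imputation rules are simple enough to sketch directly. A minimal illustration (helper names and data are hypothetical; right- and left-censored cases are omitted for brevity):

```python
# Toy sketch of imputation rules for interval-censored survival times.
# Each observation is a finite interval (left, right); names are illustrative.
import random
import statistics

def impute_midpoint(intervals):
    """Replace each interval (l, r) by its midpoint."""
    return [(l + r) / 2 for l, r in intervals]

def impute_random(intervals, rng=None):
    """Draw a uniform time inside each interval."""
    rng = rng or random.Random(0)
    return [rng.uniform(l, r) for l, r in intervals]

def impute_mean(intervals):
    """Replace every interval by the mean of all interval midpoints."""
    m = statistics.mean(impute_midpoint(intervals))
    return [m] * len(intervals)

intervals = [(0.0, 2.0), (1.0, 3.0), (4.0, 6.0)]
print(impute_midpoint(intervals))  # [1.0, 2.0, 5.0]
```

    Each imputed data set can then be passed to a standard parametric survival fit and the resulting survival functions compared.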

  6. Nonlinear modulation of an extraordinary wave under the conditions of parametric decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorofeenko, V. G.; Krasovitskiy, V. B.; Turikov, V. A.

    2012-06-15

    A self-consistent set of Hamilton equations describing nonlinear saturation of the amplitude of oscillations excited under the conditions of parametric decay of an elliptically polarized extraordinary wave in cold plasma is solved analytically and numerically. It is shown that the exponential increase in the amplitude of the secondary wave excited at the half-frequency of the primary wave changes into a reverse process in which energy is returned to the primary wave and nonlinear oscillations propagating across the external magnetic field are generated. The system of 'slow' equations for the amplitudes, obtained by averaging the initial equations over the high-frequency period, is used to describe steady-state nonlinear oscillations in plasma.

  7. Parameter Estimation with Entangled Photons Produced by Parametric Down-Conversion

    NASA Technical Reports Server (NTRS)

    Cable, Hugo; Durkin, Gabriel A.

    2010-01-01

    We explore the advantages offered by twin light beams produced in parametric down-conversion for precision measurement. The symmetry of these bipartite quantum states, even under losses, suggests that monitoring correlations between the divergent beams permits a high-precision inference of any symmetry-breaking effect, e.g., fiber birefringence. We show that the quantity of entanglement is not the key feature for such an instrument. In a lossless setting, scaling of precision at the ultimate "Heisenberg" limit is possible with photon counting alone. Even as photon losses approach 100% the precision is shot-noise limited, and we identify the crossover point between quantum and classical precision as a function of detected flux. The predicted hypersensitivity is demonstrated with a Bayesian simulation.

  8. Parameter estimation with entangled photons produced by parametric down-conversion.

    PubMed

    Cable, Hugo; Durkin, Gabriel A

    2010-07-02

    We explore the advantages offered by twin light beams produced in parametric down-conversion for precision measurement. The symmetry of these bipartite quantum states, even under losses, suggests that monitoring correlations between the divergent beams permits a high-precision inference of any symmetry-breaking effect, e.g., fiber birefringence. We show that the quantity of entanglement is not the key feature for such an instrument. In a lossless setting, scaling of precision at the ultimate "Heisenberg" limit is possible with photon counting alone. Even as photon losses approach 100% the precision is shot-noise limited, and we identify the crossover point between quantum and classical precision as a function of detected flux. The predicted hypersensitivity is demonstrated with a Bayesian simulation.

  9. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghamousa, Amir; Shafieloo, Arman; Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
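    The GP-based residual check can be sketched in a few lines of numpy. This is an illustrative toy (RBF kernel, fixed hyperparameters, synthetic residuals), not the Planck analysis:

```python
# Toy sketch: fit a GP to residuals (data minus model best-fit) and check
# whether the reconstructed function is consistent with zero everywhere.
import numpy as np

def rbf(x1, x2, amp=1.0, ell=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return amp**2 * np.exp(-0.5 * (d / ell)**2)

def gp_posterior(x, y, xs, noise=0.1, amp=1.0, ell=1.0):
    K = rbf(x, x, amp, ell) + noise**2 * np.eye(len(x))
    Ks = rbf(xs, x, amp, ell)
    Kss = rbf(xs, xs, amp, ell)
    mean = Ks @ np.linalg.solve(K, y)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 40)
residuals = 0.1 * rng.standard_normal(40)   # a good model leaves pure noise
mean, sd = gp_posterior(x, residuals, x)
# Structureless residuals: the GP mean should hug zero within its uncertainty.
consistent = bool(np.all(np.abs(mean) < 3 * (sd + 0.1)))
print(consistent)
```

    Residuals with genuine structure would pull the GP mean significantly away from zero, flagging a model-data inconsistency.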

  10. A Nonparametric Approach to Automated S-Wave Picking

    NASA Astrophysics Data System (ADS)

    Rawles, C.; Thurber, C. H.

    2014-12-01

    Although a number of very effective P-wave automatic pickers have been developed over the years, automatic picking of S waves has remained more challenging. Most automatic pickers take a parametric approach, whereby some characteristic function (CF), e.g. polarization or kurtosis, is determined from the data and the pick is estimated from the CF. We have adopted a nonparametric approach, estimating the pick directly from the waveforms. For a particular waveform to be auto-picked, the method uses a combination of similarity to a set of seismograms with known S-wave arrivals and dissimilarity to a set of seismograms that do not contain S-wave arrivals. Significant effort has been made towards dealing with the problem of S-to-P conversions. We have evaluated the effectiveness of our method by testing it on multiple sets of microearthquake seismograms with well-determined S-wave arrivals for several areas around the world, including fault zones and volcanic regions. In general, we find that the results from our auto-picker are consistent with reviewed analyst picks 90% of the time at the 0.2 s level and 80% of the time at the 0.1 s level, or better. For most of the large datasets we have analyzed, our auto-picker also makes far more S-wave picks than were made previously by analysts. We are using these enlarged sets of high-quality S-wave picks to refine tomographic inversions for these areas, resulting in substantial improvement in the quality of the S-wave images. We will show examples from New Zealand, Hawaii, and California.

  11. Parametric Net Influx Rate Images of 68Ga-DOTATOC and 68Ga-DOTATATE: Quantitative Accuracy and Improved Image Contrast.

    PubMed

    Ilan, Ezgi; Sandström, Mattias; Velikyan, Irina; Sundin, Anders; Eriksson, Barbro; Lubberink, Mark

    2017-05-01

    68Ga-DOTATOC and 68Ga-DOTATATE are radiolabeled somatostatin analogs used for the diagnosis of somatostatin receptor-expressing neuroendocrine tumors (NETs), and SUV measurements are suggested for treatment monitoring. However, changes in the net influx rate (Ki) may better reflect treatment effects than those of the SUV, and accordingly there is a need to compute parametric images showing Ki at the voxel level. The aim of this study was to evaluate methods for computation of parametric Ki images by comparison to volume of interest (VOI)-based methods, and to assess image contrast in terms of tumor-to-liver ratio. Methods: Ten patients with metastatic NETs underwent a 45-min dynamic PET examination followed by whole-body PET/CT at 1 h after injection of 68Ga-DOTATOC and 68Ga-DOTATATE on consecutive days. Parametric Ki images were computed using a basis function method (BFM) implementation of the 2-tissue-irreversible-compartment model and the Patlak method using a descending-aorta image-derived input function, and mean tumor Ki values were determined for 50% isocontour VOIs and compared with Ki values based on nonlinear regression (NLR) of the whole-VOI time-activity curve. A subsample of healthy liver was delineated in the whole-body and Ki images, and tumor-to-liver ratios were calculated to evaluate image contrast. Correlation (R2) and agreement between VOI-based and parametric Ki values were assessed using regression and Bland-Altman analysis. Results: The R2 between NLR-based and parametric image-based (BFM) tumor Ki values was 0.98 (slope, 0.81) and 0.97 (slope, 0.88) for 68Ga-DOTATOC and 68Ga-DOTATATE, respectively. For Patlak analysis, the R2 between NLR-based and parametric-based (Patlak) tumor Ki was 0.95 (slope, 0.71) and 0.92 (slope, 0.74) for 68Ga-DOTATOC and 68Ga-DOTATATE, respectively. There was no bias between NLR- and parametric-based Ki values. Tumor-to-liver contrast was 1.6 and 2.0 times higher in the parametric BFM Ki images, and 2.3 and 3.0 times higher in the Patlak images, than in the whole-body images for 68Ga-DOTATOC and 68Ga-DOTATATE, respectively. Conclusion: A high R2 and agreement between NLR- and parametric-based Ki values was found, showing that Ki images are quantitatively accurate. In addition, tumor-to-liver contrast was superior in the parametric Ki images compared with whole-body images for both 68Ga-DOTATOC and 68Ga-DOTATATE. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
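    The Patlak method mentioned above reduces to a linear fit in transformed coordinates: for an irreversible tracer, Ct(t)/Cp(t) plotted against ∫Cp dt / Cp(t) becomes linear after an equilibration time t*, with slope Ki. A toy numpy sketch on synthetic curves (all values and names illustrative):

```python
# Toy Patlak graphical analysis: recover Ki from synthetic tissue and
# plasma time-activity curves. Values are illustrative, not clinical.
import numpy as np

def patlak_ki(t, ct, cp, t_star=10.0):
    # Running integral of the plasma input function (trapezoid rule).
    icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    x = icp / cp                 # "Patlak time"
    y = ct / cp                  # normalized tissue activity
    mask = t >= t_star           # linear regime only
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope

t = np.linspace(0.1, 45.0, 200)
cp = np.exp(-0.1 * t) + 0.2      # toy plasma input function
icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ki_true, vb = 0.05, 0.3
ct = ki_true * icp + vb * cp      # tissue curve with known Ki
print(round(patlak_ki(t, ct, cp), 4))  # 0.05
```

    Applying the same fit voxel by voxel yields a parametric Ki image; the BFM approach instead fits basis functions of the full two-tissue irreversible model.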

  12. A Robust Semi-Parametric Test for Detecting Trait-Dependent Diversification.

    PubMed

    Rabosky, Daniel L; Huang, Huateng

    2016-03-01

    Rates of species diversification vary widely across the tree of life and there is considerable interest in identifying organismal traits that correlate with rates of speciation and extinction. However, it has been challenging to develop methodological frameworks for testing hypotheses about trait-dependent diversification that are robust to phylogenetic pseudoreplication and to directionally biased rates of character change. We describe a semi-parametric test for trait-dependent diversification that explicitly requires replicated associations between character states and diversification rates to detect effects. To use the method, diversification rates are reconstructed across a phylogenetic tree with no consideration of character states. A test statistic is then computed to measure the association between species-level traits and the corresponding diversification rate estimates at the tips of the tree. The empirical value of the test statistic is compared to a null distribution that is generated by structured permutations of evolutionary rates across the phylogeny. The test is applicable to binary discrete characters as well as continuous-valued traits and can accommodate extremely sparse sampling of character states at the tips of the tree. We apply the test to several empirical data sets and demonstrate that the method has acceptable Type I error rates. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
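    The permutation idea can be sketched compactly. Note that the real test uses structured permutations that respect the phylogeny; this toy uses plain unstructured shuffles on synthetic data:

```python
# Toy permutation test for association between a binary trait and
# tip-level diversification-rate estimates. Illustrative only: a real
# analysis permutes rates in a phylogenetically structured way.
import numpy as np

def perm_test(rates, states, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    def stat(s):
        # Difference in mean rate between the two character states.
        return rates[s == 1].mean() - rates[s == 0].mean()
    observed = stat(states)
    null = np.array([stat(rng.permutation(states)) for _ in range(n_perm)])
    # Two-sided permutation p-value with the +1 correction.
    p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p

rng = np.random.default_rng(1)
states = np.array([0] * 30 + [1] * 30)
rates = rng.normal(0.1, 0.02, 60) + 0.05 * states  # state 1 diversifies faster
obs, p = perm_test(rates, states)
print(p < 0.05)  # a strong, replicated effect yields a small p-value
```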

  13. Evaluation of circularity error in drilling of syntactic foam composites

    NASA Astrophysics Data System (ADS)

    Ashrith H., S.; Doddamani, Mrityunjay; Gaitonde, Vinayak

    2018-04-01

    Syntactic foams are widely used in structural applications of automobiles, aircraft and underwater vehicles due to their lightweight properties combined with high compressive strength and low moisture absorption. Structural applications require drilling of holes for assembly purposes. In this investigation, response surface methodology based mathematical models are used to analyze the effects of cutting speed, feed, drill diameter and filler content on circularity error, both at the entry and exit levels, in drilling of glass microballoon reinforced epoxy syntactic foam. Experiments are conducted based on a full factorial design using solid coated tungsten carbide twist drills. The parametric analysis reveals that circularity error is most strongly influenced by drill diameter, followed by spindle speed, at both the entry and exit levels. It also reveals that increasing filler content decreases circularity error by 13.65% and 11.96% at the entry and exit levels, respectively. The average circularity error at the entry level is found to be 23.73% higher than at the exit level.
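    One common way to quantify circularity error, assumed here for illustration, is to fit a least-squares circle to measured hole-edge points and report the spread of radial deviations:

```python
# Toy circularity-error evaluation: Kasa algebraic least-squares circle
# fit, then the spread between largest and smallest radial deviations.
# Synthetic edge points; names and values are illustrative.
import numpy as np

def circularity_error(x, y):
    # Kasa fit: x^2 + y^2 = 2 a x + 2 b y + c, with r^2 = c + a^2 + b^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radii = np.hypot(x - a, y - b)
    return radii.max() - radii.min()

theta = np.linspace(0, 2 * np.pi, 180, endpoint=False)
r = 5.0 + 0.01 * np.sin(3 * theta)        # 3-lobed hole, 0.02 mm out-of-round
x, y = r * np.cos(theta), r * np.sin(theta)
print(round(circularity_error(x, y), 4))   # 0.02
```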

  14. A Bayesian goodness of fit test and semiparametric generalization of logistic regression with measurement data.

    PubMed

    Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E

    2013-06-01

    Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. 
© 2013, The International Biometric Society.

  15. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods; least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
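    The kernel "trick" connection between Ridge regression (GBLUP) and RKHS regression can be shown in a few lines: the primal and dual ridge solutions give identical predictions, and swapping the linear Gram matrix for a nonlinear kernel turns the same machinery into RKHS regression. A numpy sketch with synthetic data (not the KRMM implementation):

```python
# Primal vs dual (kernelized) ridge regression give identical predictions;
# replacing the linear Gram matrix K = X X' with a nonlinear kernel yields
# RKHS regression. Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))            # e.g. marker genotypes
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(20)
lam = 1.0

# Primal ridge: w = (X'X + lam I)^-1 X'y
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
pred_primal = X @ w

# Dual ridge: alpha = (K + lam I)^-1 y with linear kernel K = X X'
K = X @ X.T
alpha = np.linalg.solve(K + lam * np.eye(20), y)
pred_dual = K @ alpha

print(np.allclose(pred_primal, pred_dual))  # True: same model, two views
```

    Because only K enters the dual form, a Gaussian or ANOVA kernel can be substituted without touching the rest of the algebra, which is how epistatic (non-additive) genetic architectures are accommodated.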

  16. Brain segmentation and the generation of cortical surfaces

    NASA Technical Reports Server (NTRS)

    Joshi, M.; Cui, J.; Doolittle, K.; Joshi, S.; Van Essen, D.; Wang, L.; Miller, M. I.

    1999-01-01

    This paper describes methods for white matter segmentation in brain images and the generation of cortical surfaces from the segmentations. We have developed a system that allows a user to start with a brain volume, obtained by modalities such as MRI or cryosection, and constructs a complete digital representation of the cortical surface. The methodology consists of three basic components: local parametric modeling and Bayesian segmentation; surface generation and local quadratic coordinate fitting; and surface editing. Segmentations are computed by parametrically fitting known density functions to the histogram of the image using the expectation maximization algorithm [DLR77]. The parametric fits are obtained locally rather than globally over the whole volume to overcome local variations in gray levels. To represent the boundary of the gray and white matter we use triangulated meshes generated using isosurface generation algorithms [GH95]. A complete system of local parametric quadratic charts [JWM+95] is superimposed on the triangulated graph to facilitate smoothing and geodesic curve tracking. Algorithms for surface editing include extraction of the largest closed surface. Results for several macaque brains are presented comparing automated and hand surface generation. Copyright 1999 Academic Press.
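    The EM-based parametric fit can be illustrated in one dimension: fit a two-component Gaussian mixture to intensity samples and classify by posterior responsibility. A minimal numpy sketch (synthetic intensities; the real system fits densities to local histograms):

```python
# Toy EM fit of a two-component Gaussian mixture to 1-D intensity data,
# the core of histogram-based gray/white matter segmentation. Illustrative.
import numpy as np

def em_two_gaussians(x, iters=50):
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out init
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample.
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd)**2) / sd
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations.
        n = resp.sum(axis=0)
        pi = n / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((resp * (x[:, None] - mu)**2).sum(axis=0) / n)
    return mu, sd, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(60, 5, 500), rng.normal(110, 8, 500)])
mu, sd, pi = em_two_gaussians(x)
print(np.round(np.sort(mu)))  # close to the true means 60 and 110
```

    Voxels are then labeled by whichever component has the larger responsibility; fitting locally rather than globally absorbs slow gray-level variations across the volume.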

  17. A Parametric Oscillator Experiment for Undergraduates

    NASA Astrophysics Data System (ADS)

    Huff, Alison; Thompson, Johnathon; Pate, Jacob; Kim, Hannah; Chiao, Raymond; Sharping, Jay

    We describe an upper-division undergraduate-level analytic mechanics experiment or classroom demonstration of a weakly damped pendulum driven into parametric resonance. Students can derive the equations of motion from first principles and extract key oscillator features, such as quality factor and parametric gain, from experimental data. The apparatus is compact, portable, and easily constructed from inexpensive components. Motion control and data acquisition are accomplished using an Arduino microcontroller incorporating a servo motor, laser sensor, and data logger. We record the passage time of the pendulum through its equilibrium position and obtain the maximum speed per oscillation as a function of time. As examples of the interesting physics the experiment reveals, we present contour plots depicting the energy of the system as a function of drive frequency and modulation depth. We observe the transition to steady-state oscillation and compare the experimental oscillation threshold with theoretical expectations. A thorough understanding of this hands-on laboratory exercise provides a foundation for current research in quantum information and optomechanics, where damped harmonic motion, quality factor, and parametric amplification are central.
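    The underlying physics is the damped Mathieu equation: modulating the effective frequency at twice the natural frequency pumps in energy faster than damping removes it once the modulation depth h exceeds roughly 4γ/ω0. A toy integration (all parameters illustrative, linear equation only, so above threshold the amplitude grows without bound):

```python
# Toy integration of a parametrically driven, damped oscillator:
# theta'' + 2*gamma*theta' + w0^2 * (1 + depth*cos(2*w0*t)) * theta = 0.
# Semi-implicit Euler stepping; parameters are illustrative.
import numpy as np

def simulate(depth, w0=2 * np.pi, gamma=0.05, t_end=30.0, dt=1e-3):
    n = int(t_end / dt)
    t = np.arange(n) * dt
    theta, omega = 0.01, 0.0            # small initial displacement
    amp = 0.0
    for i in range(n):
        w2 = w0**2 * (1 + depth * np.cos(2 * w0 * t[i]))  # pump at 2*w0
        omega += (-2 * gamma * omega - w2 * theta) * dt
        theta += omega * dt
        amp = max(amp, abs(theta))
    return amp

# Threshold estimate: growth needs depth * w0 / 4 > gamma.
grew = simulate(depth=0.2) > 10 * simulate(depth=0.0)
print(grew)  # above threshold, parametric gain beats damping
```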

  18. Possible signals of vacuum dynamics in the Universe

    NASA Astrophysics Data System (ADS)

    Peracaula, Joan Solà; de Cruz Pérez, Javier; Gómez-Valent, Adrià

    2018-05-01

    We study a generic class of time-evolving vacuum models which can provide a better phenomenological account of the overall cosmological observations as compared to the ΛCDM. Among these models, the running vacuum model (RVM) appears to be the most motivated and favored one, at a confidence level of ˜3σ. We further support these results by computing the Akaike and Bayesian information criteria. Our analysis also shows that we can extract fair signals of dynamical dark energy (DDE) by confronting the same set of data with the generic XCDM and CPL parametrizations. In all cases we confirm that the combined triad of modern observations on Baryonic Acoustic Oscillations, Large Scale Structure formation, and the Cosmic Microwave Background provides the bulk of the signal sustaining a possible vacuum dynamics. In the absence of any of these three crucial data sources, the DDE signal cannot be perceived at a significant confidence level. Its possible existence could be a cure for some of the tensions that arise in the ΛCDM when confronted with observations.

  19. Increasing contextual demand modulates anterior and lateral prefrontal brain regions associated with proactive interference.

    PubMed

    Wolf, Robert Christian; Walter, Henrik; Vasic, Nenad

    2010-01-01

    Using a parametric version of a modified item-recognition paradigm with three different load levels and by means of event-related functional magnetic resonance imaging, this study tested the hypothesis that cerebral activation associated with intratrial proactive interference (PI) during working memory retrieval is influenced by increased context processing. We found activation of left BA 45 during interference trials across all levels of cognitive processing, and left lateralized activation of the dorsolateral prefrontal cortex (DLPFC, BA 9/46) and the frontopolar cortex (FPC, BA 10) with increasing contextual load. Compared with high susceptibility to PI, low susceptibility was associated with activation of the left DLPFC. These results suggest that an intratrial PI effect can be modulated by increasing context processing of a transiently relevant stimulus set. Moreover, PI resolution associated with increasing context load involves multiple prefrontal regions including the ventro- and dorsolateral prefrontal cortex as well as frontopolar brain areas. Furthermore, low susceptibility to PI might be influenced by increased executive control exerted by the DLPFC.

  20. Full hyperfine structure analysis of singly ionized molybdenum

    NASA Astrophysics Data System (ADS)

    Bouazza, Safa

    2017-03-01

    For the first time, a parametric study of the hyperfine structure of Mo II configuration levels is presented. The newly measured A and B hyperfine structure (hfs) constants of Mo II 4d5, 4d45s and 4d35s2 configuration levels, for both the 95 and 97 isotopes, obtained using fast-ion-beam laser-induced fluorescence spectroscopy [1], are combined with the few other data available in the literature. A fitting procedure for an isolated set of these three lowest even-parity configurations has been performed, taking into account second-order perturbation theory including the effects of closed shell-open shell excitations. Moreover, the same study was done for Mo II odd-parity levels; for both parities, two sets of fine structure parameters as well as the leading eigenvector percentages of levels and Landé factors gJ, relevant for this paper, are given. We also present predicted singlet, triplet and quintet positions of missing experimental levels up to 85000 cm-1. The single-electron hfs parameter values were extracted in their entirety for 97Mo II and for 95Mo II: for instance, for 95Mo II, a4d^01 = -133.37 MHz and a5p^01 = -160.25 MHz for 4d45p; a4d^01 = -140.84 MHz, a5p^01 = -170.18 MHz and a5s^10 = -2898 MHz for 4d35s5p; a5s^10 = -2529 (2) MHz and a4d^01 = -135.17 (0.44) MHz for 4d45s. These parameter values were analysed and compared with diverse ab initio calculations. We close this work by giving predicted values of the magnetic dipole and electric quadrupole hfs constants of all known levels whose splittings have not yet been measured.

  1. Automatic firearm class identification from cartridge cases

    NASA Astrophysics Data System (ADS)

    Kamalakannan, Sridharan; Mann, Christopher J.; Bingham, Philip R.; Karnowski, Thomas P.; Gleason, Shaun S.

    2011-03-01

    We present a machine vision system for automatic identification of the class of firearms by extracting and analyzing two significant properties from spent cartridge cases, namely the Firing Pin Impression (FPI) and the Firing Pin Aperture Outline (FPAO). Within the framework of the proposed machine vision system, a white light interferometer is employed to image the head of the spent cartridge cases. As a first step of the algorithmic procedure, the Primer Surface Area (PSA) is detected using a circular Hough transform. Once the PSA is detected, a customized statistical region-based parametric active contour model is initialized around the center of the PSA and evolved to segment the FPI. Subsequently, the scaled version of the segmented FPI is used to initialize a customized Mumford-Shah based level set model in order to segment the FPAO. Once the shapes of FPI and FPAO are extracted, a shape-based level set method is used in order to compare these extracted shapes to an annotated dataset of FPIs and FPAOs from varied firearm types. A total of 74 cartridge case images non-uniformly distributed over five different firearms are processed using the aforementioned scheme and the promising nature of the results (95% classification accuracy) demonstrate the efficacy of the proposed approach.
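    The circular Hough transform used for PSA detection can be sketched for a known radius: every edge pixel votes for the centers of circles that could pass through it, and the accumulator peak is the detected center. A numpy toy on a synthetic edge map (real use sweeps a range of radii):

```python
# Toy circular Hough transform: edge pixels vote for candidate circle
# centers at a fixed radius; the accumulator maximum locates the circle.
import numpy as np

def hough_circle_center(edges, radius):
    h, w = edges.shape
    acc = np.zeros((h, w))
    ys, xs = np.nonzero(edges)
    angles = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for y, x in zip(ys, xs):
        # Candidate centers lie on a circle of the same radius around (y, x).
        a = np.round(y - radius * np.sin(angles)).astype(int)
        b = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (a >= 0) & (a < h) & (b >= 0) & (b < w)
        np.add.at(acc, (a[ok], b[ok]), 1)   # unbuffered vote accumulation
    return np.unravel_index(acc.argmax(), acc.shape)

# Draw a circle of radius 12 centred at (30, 40) on a blank edge map.
edges = np.zeros((64, 64), dtype=bool)
t = np.linspace(0, 2 * np.pi, 360)
edges[np.round(30 + 12 * np.sin(t)).astype(int),
      np.round(40 + 12 * np.cos(t)).astype(int)] = True
cy, cx = hough_circle_center(edges, radius=12)
print(cy, cx)  # close to (30, 40)
```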

  2. On the Advanced Wave Model of Parametric Down-Conversion

    NASA Astrophysics Data System (ADS)

    Lvovsky, A. I.; Aichele, T.

    The spatiotemporal optical mode of the single-photon Fock state prepared by conditional measurements on a biphoton is investigated and found to be identical to that of a classical wave due to a nonlinear interaction of the pump wave and Klyshko's advanced wave. We discuss the applicability of this identity in various experimental settings.

  3. Tsallis p, q-deformed Touchard polynomials and Stirling numbers

    NASA Astrophysics Data System (ADS)

    Herscovici, O.; Mansour, T.

    2017-01-01

    In this paper, we develop and investigate a new two-parametrized deformation of the Touchard polynomials, based on the definition of the NEXT q-exponential function of Tsallis. We obtain new generalizations of the Stirling numbers of the second kind and of the binomial coefficients and represent two new statistics for the set partitions.
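    For reference, the classical (undeformed) objects being generalized satisfy the recurrence S(n, k) = k·S(n-1, k) + S(n-1, k-1), and the Touchard polynomials are their generating polynomials. A short sketch of this baseline case (not the p,q-deformation itself):

```python
# Classical Stirling numbers of the second kind and Touchard polynomials,
# the undeformed limit of the p,q-deformed versions in the paper.
def stirling2(n, k):
    """Number of ways to partition an n-set into k nonempty blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def touchard(n, x):
    """Touchard polynomial T_n(x) = sum_k S(n, k) x^k; T_n(1) = Bell number."""
    return sum(stirling2(n, k) * x**k for k in range(n + 1))

print([stirling2(4, k) for k in range(5)])  # [0, 1, 7, 6, 1]
print(touchard(4, 1))                       # Bell number B_4 = 15
```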

  4. Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
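    The dense-grid frequency search can be sketched without the Gaussian-process layer: at each trial period, fit a sinusoid by linear least squares and keep the period with the smallest residual. A numpy toy on a synthetic sparsely sampled light curve (the full method adds the GP term and nuisance-parameter integration):

```python
# Toy dense-grid period search: least-squares sinusoid fit at each trial
# period, keeping the best. Synthetic sparse light curve; illustrative.
import numpy as np

def grid_period(t, y, periods):
    best_p, best_rss = None, np.inf
    for p in periods:
        w = 2 * np.pi / p
        # Linear model: a*sin(wt) + b*cos(wt) + c (amplitude/phase/offset).
        A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ coef)**2)
        if rss < best_rss:
            best_p, best_rss = p, rss
    return best_p

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1000, 60))          # sparse, irregular sampling
y = np.sin(2 * np.pi * t / 250) + 0.1 * rng.standard_normal(60)
grid = np.linspace(100, 400, 1500)
best = grid_period(t, y, grid)
print(round(best, 1))  # close to the true 250-day period
```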

  5. Parametric adaptive filtering and data validation in the bar GW detector AURIGA

    NASA Astrophysics Data System (ADS)

    Ortolan, A.; Baggio, L.; Cerdonio, M.; Prodi, G. A.; Vedovato, G.; Vitale, S.

    2002-04-01

    We report on our experience gained in the signal processing of the resonant GW detector AURIGA. Signal amplitude and arrival time are estimated by means of a matched adaptive Wiener filter. The detector noise, entering the filter set-up, is modelled as a parametric ARMA process; to account for slow non-stationarity of the noise, the ARMA parameters are estimated on an hourly basis. A requirement for the set-up of an unbiased Wiener filter is the separation of time spans with 'almost Gaussian' noise from non-Gaussian and/or strongly non-stationary time spans. The separation algorithm consists of a variance estimate with the Chauvenet convergence method and a threshold on the kurtosis index. The subsequent validation of data is strictly connected with the separation procedure: in fact, by injecting a large number of artificial GW signals into the 'almost Gaussian' part of the AURIGA data stream, we have demonstrated that the effective probability distributions of the signal-to-noise ratio, χ2, and time of arrival are those that are expected.
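    The Gaussianity gate can be sketched as a kurtosis threshold on data segments (the Chauvenet variance step is omitted; segment length, threshold, and names are illustrative):

```python
# Toy kurtosis-based gate: flag data segments whose excess kurtosis
# departs from the Gaussian value of zero. Illustrative only.
import numpy as np

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z**4) - 3.0          # 0 for Gaussian data

def gaussian_segments(data, seg_len, kurt_max=1.0):
    """Return start indices of segments passing the Gaussianity gate."""
    keep = []
    for i in range(0, len(data) - seg_len + 1, seg_len):
        if abs(excess_kurtosis(data[i:i + seg_len])) < kurt_max:
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
quiet = rng.standard_normal(1000)        # 'almost Gaussian' noise
noisy = rng.standard_normal(1000)
noisy[::50] += 20.0                      # glitchy, heavy-tailed span
data = np.concatenate([quiet, noisy])
print(gaussian_segments(data, seg_len=1000))  # only the quiet span: [0]
```

    Only segments passing the gate would then feed the Wiener filter set-up, keeping its noise model unbiased.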

  6. Experimental parametric study of jet vortex generators for flow separation control

    NASA Technical Reports Server (NTRS)

    Selby, Gregory

    1991-01-01

    A parametric wind-tunnel study was performed with jet vortex generators to determine their effectiveness in controlling flow separation associated with low-speed turbulent flow over a two-dimensional rearward-facing ramp. Results indicate that flow-separation control can be accomplished, with the level of control achieved being a function of jet speed, jet orientation (with respect to the free-stream direction), and orifice pattern (double row of jets vs. a single row). Compared to slot blowing, jet vortex generators can provide an equivalent level of flow control over a larger spanwise region (for constant jet flow area and speed). Dye flow visualization tests in a water tunnel indicated that the most effective jet vortex generator configurations produced streamwise co-rotating vortices.

  7. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    PubMed

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken up the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation of large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN), and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for approaching association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its performance in selecting the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses, we conclude that to approach genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.

  8. Isolated effect of geometry on mitral valve function for in silico model development.

    PubMed

    Siefert, Andrew William; Rabbah, Jean-Pierre Michel; Saikrishnan, Neelakantan; Kunzelman, Karyn Susanne; Yoganathan, Ajit Prithivaraj

    2015-01-01

    Computational models for the heart's mitral valve (MV) exhibit several uncertainties that may be reduced by further developing these models using ground-truth data-sets. This study generated a ground-truth data-set by quantifying the effects of isolated mitral annular flattening, symmetric annular dilatation, symmetric papillary muscle (PM) displacement and asymmetric PM displacement on leaflet coaptation, mitral regurgitation (MR) and anterior leaflet strain. MVs were mounted in an in vitro left heart simulator and tested under pulsatile haemodynamics. Mitral leaflet coaptation length, coaptation depth, tenting area, MR volume, MR jet direction and anterior leaflet strain in the radial and circumferential directions were successfully quantified at increasing levels of geometric distortion. From these data, increase in the levels of isolated PM displacement resulted in the greatest mean change in coaptation depth (70% increase), tenting area (150% increase) and radial leaflet strain (37% increase) while annular dilatation resulted in the largest mean change in coaptation length (50% decrease) and regurgitation volume (134% increase). Regurgitant jets were centrally located for symmetric annular dilatation and symmetric PM displacement. Asymmetric PM displacement resulted in asymmetrically directed jets. Peak changes in anterior leaflet strain in the circumferential direction were smaller and exhibited non-significant differences across the tested conditions. When used together, this ground-truth data-set may be used to parametrically evaluate and develop modelling assumptions for both the MV leaflets and subvalvular apparatus. This novel data may improve MV computational models and provide a platform for the development of future surgical planning tools.

  9. Problems of low-parameter equations of state

    NASA Astrophysics Data System (ADS)

    Petrik, G. G.

    2017-11-01

    The paper focuses on a system approach to the problems of low-parameter equations of state (EOS). It continues investigations in the field of substantiated prognosis of properties on two levels, molecular and thermodynamic. Two sets of low-parameter EOS are considered, based on two very simple molecular-level models. The first consists of EOS of van der Waals type (modifications of the van der Waals EOS proposed for spheres). The main problem with these EOS is their weak connection with the micro-level, which raises many uncertainties. The second group of EOS was derived by the author independently of the ideas of van der Waals, based on the model of interacting point centers (IPC). All parameters of this EOS have a physical meaning and are associated with the manifestation of attractive and repulsive forces. The relationship between them is found to be the control parameter of the thermodynamic level, in which case the IPC EOS passes into a one-parameter family. It is shown that many EOS of van der Waals type can be included in the framework of the IPC model, whereby all their parameters acquire a physical meaning.

  10. Characterizing Heterogeneity within Head and Neck Lesions Using Cluster Analysis of Multi-Parametric MRI Data.

    PubMed

    Borri, Marco; Schmidt, Maria A; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M; Partridge, Mike; Bhide, Shreerang A; Nutting, Christopher M; Harrington, Kevin J; Newbold, Katie L; Leach, Martin O

    2015-01-01

    To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted imaging MRI data from a cohort of squamous cell carcinoma of the head and neck patients. Cumulative distributions of voxels, containing pre and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes.
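
    The partitioning step described above can be illustrated with a minimal, self-contained sketch: plain k-means on synthetic per-voxel feature vectors (stand-ins for DCE-MRI enhancement and ADC values), with the within-cluster sum of squares across candidate k serving as a crude validation curve. The data, feature names and the choice of k-means itself are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means on the rows of X; returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each voxel's feature vector to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic stand-in for per-voxel multi-parametric features -- four blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(200, 2))
               for m in ([0, 0], [3, 0], [0, 3], [3, 3])])

# Within-cluster sum of squares for k = 2..5 as a simple validation curve
wcss = {}
for k in (2, 3, 4, 5):
    labels, cents = kmeans(X, k)
    wcss[k] = sum(((X[labels == j] - cents[j]) ** 2).sum() for j in range(k))

labels4, _ = kmeans(X, 4)
```

In practice the elbow of the WCSS curve (or a dedicated validation index, as in the study) selects k.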

  11. Stress concentration factors at saddle and crown positions on the central brace of two-planar welded CHS DKT-connections

    NASA Astrophysics Data System (ADS)

    Ahmadi, Hamid; Lotfollahi-Yaghin, Mohammad Ali; Aminfar, Mohammad H.

    2012-03-01

    A set of parametric stress analyses was carried out for two-planar tubular DKT-joints under different axial loading conditions. The analysis results were used to present general remarks on the effects of the geometrical parameters on stress concentration factors (SCFs) at the inner saddle, outer saddle, and crown positions on the central brace. Based on the results of finite element (FE) analysis and through nonlinear regression analysis, a new set of SCF parametric equations was established for fatigue design purposes. An assessment study of the equations was conducted against the experimental data and the original SCF database. Satisfaction of the acceptance criteria proposed by the UK Department of Energy (UK DoE) was also checked. Results of the parametric study showed that remarkable differences exist between the SCF values in a multi-planar DKT-joint and the corresponding SCFs in an equivalent uni-planar KT-joint having the same geometrical properties. It can be clearly concluded from this observation that using the equations proposed for uni-planar KT-connections to compute the SCFs in multi-planar DKT-joints will lead to results that are either considerably under-predicted or over-predicted. Hence, it is necessary to develop SCF formulae specifically designed for multi-planar DKT-joints. Good results of the equation assessment according to the UK DoE acceptance criteria, high values of the correlation coefficients, and the satisfactory agreement between the predictions of the proposed equations and the experimental data support the accuracy of the equations. The developed equations can therefore be reliably used for fatigue design of offshore structures.
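
    SCF parametric equations of this kind are commonly expressed as products of powers of the joint's dimensionless geometric ratios (e.g. β, γ, τ), which makes the nonlinear regression log-linear. The sketch below fits such a hypothetical power law to synthetic "FE-derived" SCFs by least squares; the exponents, ratios, and data are invented for illustration and are not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
beta = rng.uniform(0.3, 0.9, n)    # brace-to-chord diameter ratio
gamma = rng.uniform(10.0, 30.0, n) # chord radius-to-thickness ratio
tau = rng.uniform(0.4, 1.0, n)     # brace-to-chord thickness ratio

# Synthetic "FE-derived" SCFs from a known power law plus small noise
true_scf = 2.0 * beta**0.5 * gamma**0.8 * tau**1.1
scf = true_scf * np.exp(rng.normal(0.0, 0.02, n))

# log SCF = log C + a log beta + b log gamma + c log tau  ->  linear LS
A = np.column_stack([np.ones(n), np.log(beta), np.log(gamma), np.log(tau)])
coef, *_ = np.linalg.lstsq(A, np.log(scf), rcond=None)
C, a, b, c = np.exp(coef[0]), coef[1], coef[2], coef[3]
```

The recovered constant and exponents approximate the generating values, mirroring how regression over an FE database yields a design equation.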

  12. Parametric approaches to micro-scale characterization of tissue volumes in vivo and ex vivo: Imaging microvasculature, attenuation, birefringence, and stiffness (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Sampson, David D.; Chin, Lixin; Gong, Peijun; Wijesinghe, Philip; Es'haghian, Shaghayegh; Allen, Wesley M.; Klyen, Blake R.; Kirk, Rodney W.; Kennedy, Brendan F.; McLaughlin, Robert A.

    2016-03-01

    INVITED TALK Advances in imaging tissue microstructure in living subjects, or in freshly excised tissue with minimum preparation and processing, are important for future diagnosis and surgical guidance in the clinical setting, particularly for application to cancer. Whilst microscopy methods continue to advance on the cellular scale and medical imaging is well established on the scale of the whole tumor or organ, it is attractive to consider imaging the tumor environment on the micro-scale, between that of cells and whole tissues. Such a scenario is ideally suited to optical coherence tomography (OCT), with the twin attractions of requiring little or no tissue preparation, and in vivo capability. OCT's intrinsic scattering contrast reveals many morphological features of tumors, but is frequently ineffective in revealing other important aspects, such as microvasculature, or in reliably distinguishing tumor from uninvolved stroma. To address these shortcomings, we are developing several advances on the basic OCT approach. We are exploring speckle fluctuations to image tissue microvasculature and we have been developing several parametric approaches to tissue micro-scale characterization. Our approaches extract, from a three-dimensional OCT data set, a two-dimensional image of an optical parameter, such as attenuation or birefringence, or a mechanical parameter, such as stiffness, that aids in characterizing the tissue. This latter method, termed optical coherence elastography, parallels developments in ultrasound and magnetic resonance imaging. Parametric imaging of birefringence and of stiffness both show promise in addressing the important issue of differentiating cancer from uninvolved stroma in breast tissue.

  13. Parametric modulation of neural activity by emotion in youth with bipolar disorder, severe mood dysregulation, and healthy subjects

    PubMed Central

    Thomas, Laura A.; Brotman, Melissa A.; Muhrer, Eli M.; Rosen, Brooke H.; Bones, Brian L.; Reynolds, Richard C.; Deveney, Christen; Pine, Daniel S.; Leibenluft, Ellen

    2012-01-01

    Context Youth with bipolar disorder (BD) and those with severe, non-episodic irritability (severe mood dysregulation, SMD) show amygdala dysfunction during face emotion processing. However, studies have not compared such patients to each other and to comparison subjects in neural responsiveness to subtle changes in face emotion; the ability to process such changes is important for social cognition. We employed a novel parametrically designed faces paradigm. Objective Using a parametrically morphed emotional faces task, we compared activation in the amygdala and across the brain in BD, SMD, and healthy volunteers (HV). Design Case-control study. Setting Government research institute. Participants 57 youths (19 BD, 15 SMD, 23 HV). Main Outcome Measure Blood-oxygen-level-dependent (BOLD) data. Neutral faces were morphed with angry and happy faces in 25% intervals; static face stimuli appeared for 3000 ms. Subjects performed hostility or non-emotional facial feature (i.e., nose width) ratings. Slope of BOLD activity was calculated across neutral-to-angry (N→A) and neutral-to-happy (N→H) face stimuli. Results In HV, but not BD or SMD, there was a positive association between left amygdala activity and anger on the face. In the N→H whole brain analysis, BD and SMD modulated parietal, temporal, and medial-frontal areas differently from each other and from HV; with increasing facial happiness, SMD increased, while BD decreased, activity in parietal, temporal, and frontal regions. Conclusions Youth with BD or SMD differ from HV in modulation of amygdala activity in response to small changes in facial anger displays. In contrast, BD and SMD show distinct perturbations in regions mediating attention and face processing in association with changes in the emotional intensity of facial happiness displays. 
These findings demonstrate similarities and differences in the neural correlates of face emotion processing in BD and SMD, suggesting these distinct clinical presentations may reflect differing pathologies along a mood disorders spectrum. PMID:23026912
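
    The slope measure used in the study (BOLD activity regressed on morph level) reduces to an ordinary least-squares line fit. A minimal sketch with hypothetical percent-signal-change values at the 25% morph intervals (the numbers are invented, not the study's data):

```python
import numpy as np

# Morph levels: neutral (0%) to fully angry (100%) in 25% steps
morph = np.array([0.0, 25.0, 50.0, 75.0, 100.0])

# Hypothetical mean amygdala BOLD (% signal change) at each morph level
bold = np.array([0.05, 0.12, 0.21, 0.28, 0.39])

# Slope of BOLD activity per unit morph via a degree-1 least-squares fit
slope, intercept = np.polyfit(morph, bold, 1)
```

A positive slope corresponds to the HV pattern reported for the N→A gradient; the per-subject slopes are then compared across groups.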

  14. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  15. Generation of high-energy sub-20 fs pulses tunable in the 250-310 nm region by frequency doubling of a high-power noncollinear optical parametric amplifier.

    PubMed

    Beutler, Marcus; Ghotbi, Masood; Noack, Frank; Brida, Daniele; Manzoni, Cristian; Cerullo, Giulio

    2009-03-15

    We report on the generation of powerful sub-20 fs deep UV pulses with 10 microJ level energy and broadly tunable in the 250-310 nm range. These pulses are produced by frequency doubling a high-power noncollinear optical parametric amplifier and compressed by a pair of MgF2 prisms to an almost transform-limited duration. Our results provide a power scaling by an order of magnitude with respect to previous works.

  16. When Will the Antarctic Ozone Hole Recover?

    NASA Technical Reports Server (NTRS)

    Newman, Paul A.

    2006-01-01

    The Antarctic ozone hole demonstrates large-scale, man-made effects on our atmosphere. Surface observations now show that human-produced ozone depleting substances (ODSs) are declining. The ozone hole should soon start to diminish because of this decline. In this talk we will demonstrate an ozone hole parametric model. This model is based upon: 1) a new algorithm for estimating Cl and Br levels over Antarctica and 2) late-spring Antarctic stratospheric temperatures. This parametric model explains 95% of the ozone hole area's variance. We use future ODS levels to predict ozone hole recovery. Full recovery to 1980 levels will occur in approximately 2068. The ozone hole area will very slowly decline over the next 2 decades. Detection of a statistically significant decrease of area will not occur until approximately 2024. We further show that nominal Antarctic stratospheric greenhouse gas forced temperature change should have a small impact on the ozone hole.

  17. When Will the Antarctic Ozone Hole Recover?

    NASA Technical Reports Server (NTRS)

    Newman, Paul A.; Nash, Eric R.; Kawa, S. Randolph; Montzka, Stephen A.; Schauffler, Sue

    2006-01-01

    The Antarctic ozone hole demonstrates large-scale, man-made effects on our atmosphere. Surface observations now show that human-produced ozone depleting substances (ODSs) are declining. The ozone hole should soon start to diminish because of this decline. Herein we demonstrate an ozone hole parametric model. This model is based upon: 1) a new algorithm for estimating Cl and Br levels over Antarctica and 2) late-spring Antarctic stratospheric temperatures. This parametric model explains 95% of the ozone hole area's variance. We use future ODS levels to predict ozone hole recovery. Full recovery to 1980 levels will occur in approximately 2068. The ozone hole area will very slowly decline over the next 2 decades. Detection of a statistically significant decrease of area will not occur until approximately 2024. We further show that nominal Antarctic stratospheric greenhouse gas forced temperature change should have a small impact on the ozone hole.
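
    The parametric model described in both records is, at its core, a regression of ozone hole area on halogen loading and late-spring temperature. A toy sketch with synthetic data follows; the coefficients, units, and halogen-loading index are invented for illustration and are not the authors' fit.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1980, 2006)
eesc = 1.5 + 0.05 * (years - 1980)           # toy halogen-loading index
temp = rng.normal(195.0, 1.5, len(years))    # toy late-spring temperature (K)

# Toy "observed" area (million km^2): grows with halogens, shrinks when warm
area = 6.0 * eesc - 0.8 * (temp - 195.0) + rng.normal(0.0, 0.4, len(years))

# Fit area ~ a*eesc + b*temp + c by least squares and report R^2
A = np.column_stack([eesc, temp, np.ones_like(eesc)])
coef, *_ = np.linalg.lstsq(A, area, rcond=None)
pred = A @ coef
r2 = 1 - ((area - pred) ** 2).sum() / ((area - area.mean()) ** 2).sum()
```

A high R² on such a two-predictor fit is the analogue of the 95% explained variance quoted above; projecting future halogen levels through the fitted model yields the recovery estimate.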

  18. Parametric studies with an atmospheric diffusion model that assesses toxic fuel hazards due to the ground clouds generated by rocket launches

    NASA Technical Reports Server (NTRS)

    Stewart, R. B.; Grose, W. L.

    1975-01-01

    Parametric studies were made with a multilayer atmospheric diffusion model to place quantitative limits on the uncertainty of predicting ground-level toxic rocket-fuel concentrations. Exhaust distributions in the ground cloud, cloud stabilized geometry, atmospheric coefficients, the effects of exhaust plume afterburning of carbon monoxide (CO), assumed surface mixing-layer division in the model, and model sensitivity to different meteorological regimes were studied. Large-scale differences in ground-level predictions are quantitatively described. Cloud alongwind growth for several meteorological conditions is shown to be in error because of incorrect application of previous diffusion theory. In addition, rocket-plume calculations indicate that almost all of the rocket-motor carbon monoxide is afterburned to carbon dioxide (CO2), thus reducing toxic hazards due to CO. The afterburning is also shown to have a significant effect on cloud stabilization height and on ground-level concentrations of exhaust products.

  19. TU-CD-BRB-09: Prediction of Chemo-Radiation Outcome for Rectal Cancer Based On Radiomics of Tumor Clinical Characteristics and Multi-Parametric MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, K; Yue, N; Shi, L

    2015-06-15

    Purpose: To evaluate tumor clinical characteristics and quantitative multi-parametric MR imaging features for prediction of response to chemo-radiation treatment (CRT) in locally advanced rectal cancer (LARC). Methods: Forty-three consecutive patients (59.7±6.9 years, from 09/2013 – 06/2014) receiving neoadjuvant CRT followed by surgery were enrolled. All underwent MRI including anatomical T1/T2, Dynamic Contrast Enhanced (DCE)-MRI and Diffusion-Weighted MRI (DWI) prior to the treatment. A total of 151 quantitative features, including morphology/Gray Level Co-occurrence Matrix (GLCM) texture from T1/T2, enhancement kinetics and the voxelized distribution from DCE-MRI, apparent diffusion coefficient (ADC) from DWI, along with clinical information (carcinoembryonic antigen (CEA) level, TNM staging, etc.), were extracted for each patient. Response groups were separated based on down-staging, good response and pathological complete response (pCR) status. Logistic regression analysis (LRA) was used to select the best predictors to classify the different groups, and the predictive performance was calculated using receiver operating characteristic (ROC) analysis. Results: Each individual imaging category or the clinical characteristics alone might yield a certain level of power in assessing the response; however, the combined model outperformed any single category in prediction. With selected features Volume, GLCM AutoCorrelation (T2), MaxEnhancementProbability (DCE-MRI), and MeanADC (DWI), the down-staging prediction accuracy (area under the ROC curve, AUC) could reach 0.95, better than individual tumor metrics with AUC from 0.53–0.85. For the pCR prediction, the best set included CEA (clinical characteristics), Homogeneity (DCE-MRI) and MeanADC (DWI) with an AUC of 0.89, more favorable compared to conventional tumor metrics with an AUC ranging from 0.511–0.79. 
Conclusion: Through a systematic analysis of multi-parametric MR imaging features, we are able to build models with improved predictive value over conventional imaging or clinical metrics. This is encouraging, suggesting that the wealth of imaging radiomics should be further explored to help tailor treatment in the era of personalized medicine. This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), National High Technology Research and Development Program of China (863 program, Grant No. 2015AA020917), and the Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
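
    The modelling step reported here (a logistic model over selected features, scored by ROC AUC) can be sketched with synthetic data. The feature names and effect sizes below are invented, not patient data; the logistic fit uses plain gradient ascent and the AUC uses the rank-sum (Mann-Whitney) identity.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Synthetic stand-ins for three selected features (e.g. volume, texture, ADC)
X = rng.normal(size=(n, 3))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 2]     # only features 0 and 2 matter
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic regression by plain gradient ascent on the log-likelihood
Xb = np.column_stack([X, np.ones(n)])      # add an intercept column
w = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w += 0.05 * Xb.T @ (y - p) / n

# AUC: probability a random responder scores above a random non-responder
scores = Xb @ w
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
```

With informative features the fitted model's AUC is well above chance, which is the comparison reported against single-metric AUCs in the abstract.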

  20. Confidence Intervals for Laboratory Sonic Boom Annoyance Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Commercial supersonic flight is currently forbidden over land because sonic booms have historically caused unacceptable annoyance levels in overflown communities. NASA is providing data and expertise to noise regulators as they consider relaxing the ban for future quiet supersonic aircraft. One deliverable NASA will provide is a predictive model for indoor annoyance to aid in setting an acceptable quiet sonic boom threshold. A laboratory study was conducted to determine how indoor vibrations caused by sonic booms affect annoyance judgments. The test method required finding the point of subjective equality (PSE) between sonic boom signals that cause vibrations and signals not causing vibrations played at various amplitudes. This presentation focuses on a few statistical techniques for estimating the interval around the PSE. The techniques examined are the Delta Method, Parametric and Nonparametric Bootstrapping, and Bayesian Posterior Estimation.
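
    Of the interval techniques listed, bootstrapping is the most direct to sketch. Below is a toy parametric bootstrap for a PSE taken as the 50% crossing of a straight-line fit to response proportions; the stimulus levels, trial counts and psychometric form are illustrative assumptions, not the NASA study's design.

```python
import numpy as np

rng = np.random.default_rng(4)
levels = np.array([70.0, 75.0, 80.0, 85.0, 90.0])  # toy amplitudes (dB)
n_trials = 40
true_pse = 81.0

def simulate():
    """Simulated proportion of 'more annoying' judgments per level."""
    p = 1 / (1 + np.exp(-(levels - true_pse) / 3.0))
    return rng.binomial(n_trials, p) / n_trials

def pse(props):
    """PSE as the 50% crossing of a straight-line fit to proportions."""
    slope, intercept = np.polyfit(levels, props, 1)
    return (0.5 - intercept) / slope

obs = simulate()
pse_hat = pse(obs)

# Parametric bootstrap: redraw binomial counts from observed proportions
boot = np.array([pse(rng.binomial(n_trials, obs) / n_trials)
                 for _ in range(2000)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
```

The nonparametric variant resamples trials directly, and the Delta Method replaces the resampling with a first-order variance propagation through the same crossing formula.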

  1. Surface settling in partially filled containers upon step reduction in gravity

    NASA Technical Reports Server (NTRS)

    Weislogel, Mark M.; Ross, Howard D.

    1990-01-01

    A large literature exists concerning the equilibrium configurations of free liquid/gas surfaces in reduced gravity environments. Such conditions generally yield surfaces of constant curvature meeting the container wall at a particular (contact) angle. The time required to reach and stabilize about this configuration is less studied for the case of sudden changes in gravity level, e.g. from normal- to low-gravity, as can occur in many drop tower experiments. The particular interest here was to determine the total reorientation time for such surfaces in cylinders (mainly), as a function primarily of contact angle and kinematic viscosity, in order to aid in the design of drop tower experiments. A large parametric range of tests was performed and, based on an accompanying scale analysis, the complete data set was correlated. The results of other investigations are included for comparison.

  2. Correcting for the effects of pupil discontinuities with the ACAD method

    NASA Astrophysics Data System (ADS)

    Mazoyer, Johan; Pueyo, Laurent; N'Diaye, Mamadou; Mawet, Dimitri; Soummer, Rémi; Norman, Colin

    2016-07-01

    The current generation of ground-based coronagraphic instruments uses deformable mirrors to correct for phase errors and to improve contrast levels at small angular separations. Improving on these techniques, several space- and ground-based instruments are currently being developed that use two deformable mirrors to correct for both phase and amplitude errors. However, as wavefront control techniques improve, more complex telescope pupil geometries (support structures, segmentation) will soon be a limiting factor for these next-generation coronagraphic instruments. The technique presented in this proceeding, the Active Correction of Aperture Discontinuities method, takes advantage of the fact that most future coronagraphic instruments will include two deformable mirrors, and proposes to find the mirror shapes and actuator movements that correct for the effects introduced by these complex pupil geometries. For any coronagraph previously designed for continuous apertures, this technique allows similar contrast performance to be obtained with a complex aperture (with segmentation and secondary-mirror support structures), with high throughput and with flexibility to adapt to changing pupil geometry (e.g. in case of segment failure or maintenance of the segments). We here present the results of a parametric analysis performed on the WFIRST pupil, for which we obtained high contrast levels with several deformable mirror setups (mirror size, separation between the mirrors), coronagraphs (vortex charge 2, vortex charge 4, APLC) and spectral bandwidths. Because contrast levels and separation are not the only metrics that determine the scientific return of an instrument, we also included in this study the influence of these deformable mirror shapes on the throughput of the instrument and the sensitivity to pointing jitter. Finally, we present results obtained on another potential space-based segmented-aperture telescope. 
The main result of this proceeding is that we now obtain performance comparable to that of the coronagraphs previously designed for WFIRST. First results from the parametric analysis strongly suggest that the two-deformable-mirror setup (mirror size and the distance between the mirrors) has an important impact on the contrast and throughput performance of the final instrument.

  3. An internally consistent set of thermodynamic data for twenty-one CaO-Al2O3-SiO2-H2O phases by linear parametric programming

    NASA Astrophysics Data System (ADS)

    Halbach, Heiner; Chatterjee, Niranjan D.

    1984-11-01

    The technique of linear parametric programming has been applied to derive sets of internally consistent thermodynamic data for 21 condensed phases of the quaternary system CaO-Al2O3-SiO2-H2O (CASH) (Table 4). This was achieved by simultaneously processing: a) calorimetric data for 16 of these phases (Table 1), and b) experimental phase equilibria reversal brackets for 27 reactions (Table 3) involving these phases. Calculation of equilibrium P-T curves of several arbitrarily picked reactions employing the preferred set of internally consistent thermodynamic data from Table 4 shows that the input brackets are invariably satisfied by the calculations (Fig. 2a). By contrast, the same equilibria calculated on the basis of a set of thermodynamic data derived by applying statistical methods to a large body of comparable input data (Haas et al. 1981; Hemingway et al. 1982) do not necessarily agree with the experimental reversal brackets. Prediction of some experimentally investigated phase relations not included into the linear programming input database also appears to be remarkably successful. Indications are, therefore, that the thermodynamic data listed in Table 4 may be used with confidence to predict geologic phase relations in the CASH system with considerable accuracy. For such calculated phase diagrams and their petrological implications, the reader's attention is drawn to the paper by Chatterjee et al. (1984).
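
    The essence of linear parametric programming here is that calorimetric intervals and phase-equilibrium reversal brackets are all linear constraints on the unknown thermodynamic parameters, so the feasible range of any parameter can be found by minimizing and maximizing it subject to those constraints. A toy two-phase example follows (invented numbers, not the CASH data), assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Toy: two phases with unknown formation enthalpies h1, h2 (kJ/mol).
# Calorimetry gives intervals; a reversal bracket constrains h2 - h1:
#   -100 <= h1 <= -90        (calorimetry, phase 1)
#   -260 <= h2 <= -240       (calorimetry, phase 2)
#   -155 <= h2 - h1 <= -145  (phase-equilibrium bracket)
A_ub = np.array([[-1.0, 1.0],    #   h2 - h1  <= -145
                 [1.0, -1.0]])   # -(h2 - h1) <=  155
b_ub = np.array([-145.0, 155.0])
bounds = [(-100.0, -90.0), (-260.0, -240.0)]

# "Parametric" step: extreme values of h2 consistent with all constraints
res_min = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
res_max = linprog(c=[0.0, -1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
h2_range = (res_min.x[1], res_max.x[1])
```

Any h2 inside the resulting interval is internally consistent with both the calorimetry and the bracket, which is the sense in which the derived data set "invariably satisfies" the input brackets.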

  4. Performance, static stability, and control effectiveness of a parametric space shuttle launch vehicle

    NASA Technical Reports Server (NTRS)

    Buchholz, R. E.; Gamble, M.

    1972-01-01

    This test was run as a continuation of a prior investigation of aerodynamic performance and static stability for a parametric space shuttle launch vehicle. The purposes of this test were: (1) to obtain a more complete set of data in the transonic flight region, (2) to investigate new H-O tank nose shapes and tank diameters, (3) to obtain control effectiveness data for the orbiter at 0 degree incidence and with a smaller diameter H-O tank, and (4) to determine the effects of varying the solid rocket motor-to-H-O tank gap size. Experimental data were obtained for angles of attack from -10 to +10 degrees and for angles of sideslip from +10 to -10 degrees at Mach numbers ranging from 0.6 to 4.96.

  5. Confidence intervals for differences between volumes under receiver operating characteristic surfaces (VUS) and generalized Youden indices (GYIs).

    PubMed

    Yin, Jingjing; Nakas, Christos T; Tian, Lili; Reiser, Benjamin

    2018-03-01

    This article explores both existing and new methods for the construction of confidence intervals for differences of indices of diagnostic accuracy of competing pairs of biomarkers in three-class classification problems and fills the methodological gaps for both parametric and non-parametric approaches in the receiver operating characteristic surface framework. The most widely used such indices are the volume under the receiver operating characteristic surface and the generalized Youden index. We describe implementation of all methods and offer insight regarding the appropriateness of their use through a large simulation study with different distributional and sample size scenarios. Methods are illustrated using data from the Alzheimer's Disease Neuroimaging Initiative study, where assessment of cognitive function naturally results in a three-class classification setting.
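
    The empirical (non-parametric) VUS behind these methods is simply the probability that one randomly drawn measurement from each of the three ordered classes is correctly ordered. A sketch with synthetic biomarker values (class means and sample sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy biomarker values for three diagnostic classes (increasing severity)
x = rng.normal(0.0, 1.0, 80)   # healthy
y = rng.normal(1.5, 1.0, 70)   # mild impairment
z = rng.normal(3.0, 1.0, 60)   # disease

# Empirical VUS: fraction of (x_i, y_j, z_k) triples with x_i < y_j < z_k
vus = ((x[:, None, None] < y[None, :, None]) &
       (y[None, :, None] < z[None, None, :])).mean()
```

An uninformative marker gives VUS = 1/6; confidence intervals for differences of such estimates between competing markers are the article's subject.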

  6. Pinching parameters for open (super) strings

    NASA Astrophysics Data System (ADS)

    Playle, Sam; Sciuto, Stefano

    2018-02-01

    We present an approach to the parametrization of (super) Schottky space obtained by sewing together three-punctured discs with strips. Different cubic ribbon graphs classify distinct sets of pinching parameters; we show how they are mapped onto each other. The parametrization is particularly well-suited to describing the region within (super) moduli space where open bosonic or Neveu-Schwarz string propagators become very long and thin, which dominates the IR behaviour of string theories. We show how worldsheet objects such as the Green's function converge to graph theoretic objects such as the Symanzik polynomials in the α′ → 0 limit, allowing us to see how string theory reproduces the sum over Feynman graphs. The (super) string measure takes on a simple and elegant form when expressed in terms of these parameters.

  7. Shape-driven 3D segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2006-01-01

    This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.

  8. A Maximum Entropy Method for Particle Filtering

    NASA Astrophysics Data System (ADS)

    Eyink, Gregory L.; Kim, Sangil

    2006-06-01

    Standard ensemble or particle filtering schemes do not properly represent states of low a priori probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
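
    The core moment-matching idea can be sketched in one dimension: when only the first two weighted moments of the particle ensemble are constrained, the maximum-entropy model is a Gaussian, and a fresh equal-weight ensemble can be drawn from it. This simplified sketch omits the authors' Gaussian-mixture construction; the observation model and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# A small weighted ensemble after a filter update (weights nearly degenerate)
particles = rng.normal(0.0, 1.0, 30)
logw = -0.5 * (particles - 2.0) ** 2        # Gaussian likelihood of obs at 2.0
w = np.exp(logw - logw.max())
w /= w.sum()

# The max-entropy density matching the first two weighted moments is Gaussian
mu = np.sum(w * particles)
var = np.sum(w * (particles - mu) ** 2)

# Resample a fresh equal-weight ensemble from that Gaussian
new_particles = rng.normal(mu, np.sqrt(var), 1000)
```

Matching higher moments, or mixture components, refines the max-entropy model beyond this single-Gaussian special case.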

  9. Witnessing entanglement without entanglement witness operators.

    PubMed

    Pezzè, Luca; Li, Yan; Li, Weidong; Smerzi, Augusto

    2016-10-11

    Quantum mechanics predicts the existence of correlations between composite systems that, although puzzling to our physical intuition, enable technologies not accessible in a classical world. Notwithstanding, there is still no efficient general method to theoretically quantify and experimentally detect entanglement of many qubits. Here we propose to detect entanglement by measuring the statistical response of a quantum system to an arbitrary nonlocal parametric evolution. We witness entanglement without relying on the tomographic reconstruction of the quantum state, or the realization of witness operators. The protocol requires two collective settings for any number of parties and is robust against noise and decoherence occurring after the implementation of the parametric transformation. To illustrate its user friendliness we demonstrate multipartite entanglement in different experiments with ions and photons by analyzing published data on fidelity visibilities and variances of collective observables.
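    For background (stated from the authors' earlier Fisher-information work, not from this abstract): the statistical response to a parametric evolution witnesses entanglement because separable states bound the quantum Fisher information.

```latex
% Quantum Cramer-Rao bound for a phase generated by a collective spin J_n:
\Delta\theta \;\ge\; \frac{1}{\sqrt{F_Q\!\left[\rho,\, J_{\vec n}\right]}},
% while every separable state of N qubits obeys
F_Q\!\left[\rho_{\mathrm{sep}},\, J_{\vec n}\right] \;\le\; N .
% Hence an observed sensitivity \Delta\theta < 1/\sqrt{N} implies entanglement.
```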

  10. Frequency-agile THz-wave generation and detection system using nonlinear frequency conversion at room temperature.

    PubMed

    Guo, Ruixiang; Ikari, Tomofumi; Zhang, Jun; Minamide, Hiroaki; Ito, Hiromasa

    2010-08-02

    A surface-emitting THz parametric oscillator is set up to generate narrow-linewidth, nanosecond pulsed THz-wave radiation. The THz-wave radiation is coherently detected using frequency up-conversion in an MgO:LiNbO3 crystal. Fast frequency tuning and automatic achromatic THz-wave detection are achieved through a special optical design, including a variable-angle mirror and 1:1 telescope devices in the pump and THz-wave beams. We demonstrate a frequency-agile THz-wave parametric generation and coherent detection system. This system can be used as a frequency-domain THz-wave spectrometer operated at room temperature, and it has high potential to be developed into a real-time two-dimensional THz spectral imaging system.

  11. Intensity and temporal noise characteristics in femtosecond optical parametric amplifiers.

    PubMed

    Chen, Wei; Fan, Jintao; Ge, Aichen; Song, Huanyu; Song, Youjian; Liu, Bowen; Chai, Lu; Wang, Chingyue; Hu, Minglie

    2017-12-11

    We characterize the relative intensity noise (RIN) and relative timing jitter (RTJ) between the signal and pump pulses of optical parametric amplifiers (OPAs) seeded by three different seed sources. Compared to a white-light continuum (WLC) seeded and an optical parametric generator (OPG) seeded OPA, the narrowband CW-seeded OPA exhibits the lowest root-mean-square (RMS) RIN and RTJ of 0.79% and 0.32 fs, respectively, integrated from 1 kHz to the Nyquist frequency of 1.25 MHz. An improved numerical model based on a forward Maxwell equation (FME) is built to investigate how the noise of the pump and seed transfers to the intensity and temporal fluctuations of the resulting OPAs. Both the experimental and numerical studies indicate that the low noise level of the narrowband CW-seeded OPA is attributed to the elimination of the RIN and RTJ coupled from the noise of the seed source, which is one of the important contributions to RIN and timing jitter in the other two OPAs. An approach to achieving an even lower noise level from this CW-seeded OPA by driving it close to saturation is also discussed with the same numerical model.

  12. Brain Signal Variability is Parametrically Modifiable

    PubMed Central

    Garrett, Douglas D.; McIntosh, Anthony R.; Grady, Cheryl L.

    2014-01-01

    Moment-to-moment brain signal variability is a ubiquitous neural characteristic, yet remains poorly understood. Evidence indicates that heightened signal variability can index and aid efficient neural function, but it is not known whether signal variability responds to precise levels of environmental demand, or instead whether variability is relatively static. Using multivariate modeling of functional magnetic resonance imaging-based parametric face processing data, we show here that within-person signal variability level responds to incremental adjustments in task difficulty, in a manner entirely distinct from results produced by examining mean brain signals. Using mixed modeling, we also linked parametric modulations in signal variability with modulations in task performance. We found that difficulty-related reductions in signal variability predicted reduced accuracy and longer reaction times within-person; mean signal changes were not predictive. We further probed the various differences between signal variance and signal means by examining all voxels, subjects, and conditions; this analysis of over 2 million data points failed to reveal any notable relations between voxel variances and means. Our results suggest that brain signal variability provides a systematic task-driven signal of interest from which we can understand the dynamic function of the human brain, and in a way that mean signals cannot capture. PMID:23749875

  13. Modelling present-day basal melt rates for Antarctic ice shelves using a parametrization of buoyant meltwater plumes

    NASA Astrophysics Data System (ADS)

    Lazeroms, Werner M. J.; Jenkins, Adrian; Hilmar Gudmundsson, G.; van de Wal, Roderik S. W.

    2018-01-01

    Basal melting below ice shelves is a major factor in mass loss from the Antarctic Ice Sheet, which can contribute significantly to possible future sea-level rise. Therefore, it is important to have an adequate description of the basal melt rates for use in ice-dynamical models. Most current ice models use rather simple parametrizations based on the local balance of heat between ice and ocean. In this work, however, we use a recently derived parametrization of the melt rates based on a buoyant meltwater plume travelling upward beneath an ice shelf. This plume parametrization combines a non-linear ocean temperature sensitivity with an inherent geometry dependence, which is mainly described by the grounding-line depth and the local slope of the ice-shelf base. For the first time, this type of parametrization is evaluated on a two-dimensional grid covering the entire Antarctic continent. In order to apply the essentially one-dimensional parametrization to realistic ice-shelf geometries, we present an algorithm that determines effective values for the grounding-line depth and basal slope in any point beneath an ice shelf. Furthermore, since detailed knowledge of temperatures and circulation patterns in the ice-shelf cavities is sparse or absent, we construct an effective ocean temperature field from observational data with the purpose of matching (area-averaged) melt rates from the model with observed present-day melt rates. Our results qualitatively replicate large-scale observed features in basal melt rates around Antarctica, not only in terms of average values, but also in terms of the spatial pattern, with high melt rates typically occurring near the grounding line. The plume parametrization and the effective temperature field presented here are therefore promising tools for future simulations of the Antarctic Ice Sheet requiring a more realistic oceanic forcing.

  14. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating weight matrices via multinomial sampling. PMID:26125965
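    The multinomial-weighting idea above can be sketched in a few lines; this is an illustrative NumPy version (the paper's implementations are in R), with all function names being hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def vectorized_bootstrap_corr(x, y, n_boot):
    """Multinomial-weighting bootstrap of Pearson's correlation:
    instead of resampling rows, draw multinomial count weights and
    compute weighted sample moments with matrix multiplications."""
    n = len(x)
    # each row of W holds bootstrap multiplicities summing to n, rescaled to 1
    W = rng.multinomial(n, np.full(n, 1 / n), size=n_boot) / n
    mx, my = W @ x, W @ y                                # weighted means
    mxx, myy, mxy = W @ (x * x), W @ (y * y), W @ (x * y)  # weighted 2nd moments
    cov = mxy - mx * my
    sd = np.sqrt((mxx - mx**2) * (myy - my**2))
    return cov / sd                                      # one r per replication

x = rng.normal(size=50)
y = 0.6 * x + rng.normal(scale=0.8, size=50)
reps = vectorized_bootstrap_corr(x, y, 2000)
```

    Because the weights in each row sum to one, each replication equals the plug-in correlation of the corresponding resampled data set, but the whole vector is produced by a handful of matrix products.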

  15. Testing in semiparametric models with interaction, with applications to gene-environment interactions.

    PubMed

    Maity, Arnab; Carroll, Raymond J; Mammen, Enno; Chatterjee, Nilanjan

    2009-01-01

    Motivated by the problem of testing for genetic effects on complex traits in the presence of gene-environment interaction, we develop score tests in general semiparametric regression problems that involve a Tukey-style 1-degree-of-freedom form of interaction between parametrically and non-parametrically modelled covariates. We find that the score test in this type of model, as recently developed by Chatterjee and co-workers in the fully parametric setting, is biased and requires undersmoothing to be valid in the presence of non-parametric components. Moreover, in the presence of repeated outcomes, the asymptotic distribution of the score test depends on the estimation of functions which are defined as solutions of integral equations, making implementation difficult and computationally taxing. We develop profiled score statistics which are unbiased and asymptotically efficient and can be computed by using standard bandwidth selection methods. In addition, to overcome the difficulty of solving functional equations, we give easy interpretations of the target functions, which in turn allow us to develop estimation procedures that can be easily implemented by using standard computational methods. We present simulation studies to evaluate the type I error and power of the proposed method compared with a naive test that does not consider interaction. Finally, we illustrate our methodology by analysing data from a case-control study of colorectal adenoma that was designed to investigate the association between colorectal adenoma and the candidate gene NAT2 in relation to smoking history.

  16. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    NASA Astrophysics Data System (ADS)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached with parametric fitting routines that use separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added, which are then removed using the GAN and using parametric methods (GALFIT). When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ±50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easier to use than parametric methods as it requires no input parameters.

  17. Effect of sample size on multi-parametric prediction of tissue outcome in acute ischemic stroke using a random forest classifier

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Fiehler, Jens

    2015-03-01

    The tissue outcome prediction in acute ischemic stroke patients is highly relevant for clinical and research purposes. It has been shown that the combined analysis of diffusion and perfusion MRI datasets using high-level machine learning techniques leads to an improved prediction of final infarction compared to single perfusion parameter thresholding. However, most high-level classifiers require a previous training and, until now, it is ambiguous how many subjects are required for this, which is the focus of this work. 23 MRI datasets of acute stroke patients with known tissue outcome were used in this work. Relative values of diffusion and perfusion parameters as well as the binary tissue outcome were extracted on a voxel-by-voxel level for all patients and used for training of a random forest classifier. The number of patients used for training set definition was iteratively and randomly reduced from using all 22 other patients to only one other patient. Thus, 22 tissue outcome predictions were generated for each patient using the trained random forest classifiers and compared to the known tissue outcome using the Dice coefficient. Overall, a logarithmic relation between the number of patients used for training set definition and tissue outcome prediction accuracy was found. Quantitatively, a mean Dice coefficient of 0.45 was found for the prediction using the training set consisting of the voxel information from only one other patient, which increases to 0.53 if using all other patients (n=22). Based on extrapolation, 50-100 patients appear to be a reasonable tradeoff between tissue outcome prediction accuracy and effort required for data acquisition and preparation.
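    The Dice coefficient used above as the evaluation metric is simple to state; a minimal NumPy sketch (the function name is ours, not from the paper):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks:
    2 * |intersection| / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# two overlapping 5x5 squares on a 10x10 grid
truth = np.zeros((10, 10), dtype=bool); truth[2:7, 2:7] = True
pred = np.zeros((10, 10), dtype=bool); pred[3:8, 3:8] = True
print(round(dice(pred, truth), 3))  # -> 0.64 (overlap 16, sizes 25 + 25)
```

    A value of 1 means perfect overlap between predicted and final infarct masks; the paper's reported 0.45-0.53 range reflects the difficulty of voxel-wise outcome prediction.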

  18. Cloud GPU-based simulations for SQUAREMR.

    PubMed

    Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H

    2017-01-01

    Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms. The developed cloud-based cluster and optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database, thus bringing SQUAREMR's applicability within time frames that would likely be acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Charm: Cosmic history agnostic reconstruction method

    NASA Astrophysics Data System (ADS)

    Porqueres, Natalia; Ensslin, Torsten A.

    2017-03-01

    Charm (cosmic history agnostic reconstruction method) reconstructs the cosmic expansion history in the framework of Information Field Theory. The reconstruction is performed via the iterative Wiener filter from an agnostic or from an informative prior. The charm code allows one to test the compatibility of several different data sets with the LambdaCDM model in a non-parametric way.
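    For background (standard Information Field Theory notation, not taken from this abstract): for a linear Gaussian data model d = Rs + n with signal prior covariance S and noise covariance N, the Wiener filter that charm iterates is

```latex
m \;=\; D\, j, \qquad
D \;=\; \left(S^{-1} + R^{\dagger} N^{-1} R\right)^{-1}, \qquad
j \;=\; R^{\dagger} N^{-1} d ,
```

    where m is the reconstructed expansion history, D the posterior covariance, and j the information source.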

  20. Computations of Aerodynamic Performance Databases Using Output-Based Refinement

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2009-01-01

    Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on aerodynamic databases for parametric and optimization studies, which requires (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; and (3) automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.

  1. Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.

    ERIC Educational Resources Information Center

    Goetschel, Roy; Voxman, William

    1987-01-01

    Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
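    The first of the two partitioning methods can be sketched directly: running Kruskal's algorithm on descending edge weights builds a maximal weighted spanning forest, and stopping at the threshold value yields the induced partition. This is an illustrative sketch under that reading of the abstract, with hypothetical names:

```python
def threshold_clusters(n, edges, threshold):
    """Partition vertices 0..n-1 by keeping only maximal-spanning-forest
    edges with weight >= threshold. Kruskal on descending weights doubles
    as the forest construction, so we simply stop below the threshold."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    for w, u, v in sorted(edges, reverse=True):
        if w < threshold:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

edges = [(0.9, 0, 1), (0.8, 1, 2), (0.3, 2, 3), (0.7, 3, 4)]
print(threshold_clusters(5, edges, 0.5))  # -> [[0, 1, 2], [3, 4]]
```

    Raising the threshold refines the partition, which is why a single spanning forest encodes the whole hierarchy of threshold clusterings.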

  2. Spectral and energy characteristics of four-photon parametric scattering in sodium vapor

    NASA Astrophysics Data System (ADS)

    Vaicaitis, V.; Ignatavicius, M.; Kudriashov, V. A.; Pimenov, Iu. N.; Jakyte, R.

    1987-04-01

    Consideration is given to processes of four-photon interaction upon two-photon resonance excitation of the 3d level of sodium by two-frequency radiation from a monopulse picosecond YAG:Nd laser with frequency doubling and an optical parametric oscillator utilizing KDP crystals. The spatial and frequency spectra of the four-photon parametric scattering (FPS) are recorded and studied at different sodium vapor concentrations (10^15 to 10^17 per cu cm) and upon both collinear and noncollinear excitation. It is shown that the observed conical structure of the FPS radiation can be interpreted from an analysis of the realization of the frequency and spatial phase-matching conditions. The dependences of the FPS radiation intensity on the exciting radiation intensity, the sodium vapor concentration, and the detuning of the exciting radiation from the two-photon resonance are obtained.

  3. Advanced extravehicular protective systems study, volume 2

    NASA Technical Reports Server (NTRS)

    Sutton, J. G.; Heimlich, P. F.; Tepper, E. H.

    1972-01-01

    The results of the subsystem studies are presented. Initial identification and evaluation of candidate subsystem concepts in the area of thermal control, humidity control, CO2 control/O2 supply, contaminant control and power supply are discussed. The candidate concepts that were judged to be obviously noncompetitive were deleted from further consideration and the remaining candidate concepts were carried into the go/no go evaluation. A detailed parametric analysis of each of the thermal/humidity control and CO2 control/O2 supply subsystem concepts which passed the go/no go evaluation is described. Based upon the results of the parametric analyses, primary and secondary evaluations of the remaining candidate concepts were conducted. These results and the subsystem recommendations emanating from these results are discussed. In addition, the parametric analyses of the recommended subsystem concepts were updated to reflect the final AEPS specification requirements. A detailed discussion regarding the selection of the AEPS operating pressure level is presented.

  4. Comparison of radiation parametrizations within the HARMONIE-AROME NWP model

    NASA Astrophysics Data System (ADS)

    Rontu, Laura; Lindfors, Anders V.

    2018-05-01

    Downwelling shortwave radiation at the surface (SWDS, global solar radiation flux), given by three different parametrization schemes, was compared to observations in HARMONIE-AROME numerical weather prediction (NWP) model experiments over Finland in spring 2017. Simulated fluxes agreed well with each other and with the observations in the clear-sky cases. In cloudy-sky conditions, all schemes tended to underestimate SWDS at the daily level compared to the measurements. Large local and temporal differences between the model results and observations were seen, related to the variations and uncertainty of the predicted cloud properties. The results suggest that it is possible to benefit from the use of different radiative-transfer parametrizations in an NWP model to obtain perturbations for fine-resolution ensemble prediction systems. In addition, we recommend using global radiation observations for the standard validation of NWP models.

  5. Optimizing Nanoscale Quantitative Optical Imaging of Subfield Scattering Targets

    PubMed Central

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhou, Hui; Sohn, Martin; Silver, Richard M.

    2016-01-01

    The full 3-D scattered field above finite sets of features has been shown to contain a continuum of spatial frequency information, and with novel optical microscopy techniques and electromagnetic modeling, deep-subwavelength geometrical parameters can be determined. Similarly, by using simulations, scattering geometries and experimental conditions can be established to tailor scattered fields that yield lower parametric uncertainties while decreasing the number of measurements and the area of such finite sets of features. Such optimized conditions are reported through quantitative optical imaging in 193 nm scatterfield microscopy using feature sets up to four times smaller in area than state-of-the-art critical dimension targets. PMID:27805660

  6. Empirical Prediction of Aircraft Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Guo, Yue-Ping

    2005-01-01

    This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.

  7. Bim Automation: Advanced Modeling Generative Process for Complex Structures

    NASA Astrophysics Data System (ADS)

    Banfi, F.; Fai, S.; Brumana, R.

    2017-08-01

    The complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational B-Splines (NURBS) at multiple levels of detail (Mixed and Reverse LoD), based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure, the courtyard of the West Block on Parliament Hill in Ottawa (Ontario), and the BIM of Castel Masegra in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

  8. Parametric estimates for the receiver operating characteristic curve generalization for non-monotone relationships.

    PubMed

    Martínez-Camblor, Pablo; Pardo-Fernández, Juan C

    2017-01-01

    Diagnostic procedures are based on establishing certain conditions and then checking whether those conditions are satisfied by a given individual. When the diagnostic procedure is based on a continuous marker, this is equivalent to fixing a region, or classification subset, and then checking whether the observed value of the marker belongs to that region. The receiver operating characteristic (ROC) curve is a valuable and popular tool for studying and comparing the diagnostic ability of a given marker. Besides, the area under the ROC curve is frequently used as an index of the global discrimination ability. This paper revisits and widens the scope of the ROC curve definition by putting the classification subsets on which the final decision is based in the spotlight of the analysis. We revise the definition of the ROC curve in terms of particular classes of classification subsets and then focus on a generalization for situations in which both low and high values of the marker are associated with a higher probability of having the studied characteristic. Parametric and non-parametric estimators of this ROC curve generalization are investigated. Monte Carlo studies and real data examples illustrate their practical performance.
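    The non-monotone situation described above can be illustrated with a small sketch. This is not the paper's estimator: the empirical AUC below is standard, and centering on the healthy median is just one illustrative way to turn a two-sided marker into a one-sided one; all names are hypothetical:

```python
import numpy as np

def empirical_roc_auc(healthy, diseased):
    """Empirical AUC: probability that a diseased value exceeds a
    healthy one, counting ties as 1/2 (standard one-sided ROC)."""
    h = np.asarray(healthy)[:, None]
    d = np.asarray(diseased)[None, :]
    return np.mean((d > h) + 0.5 * (d == h))

def two_sided_auc(healthy, diseased):
    """AUC for a generalized ROC where both tails indicate disease:
    rank observations by distance from the healthy median, so the
    classification subsets are complements of central intervals."""
    c = np.median(healthy)
    return empirical_roc_auc(np.abs(np.asarray(healthy) - c),
                             np.abs(np.asarray(diseased) - c))

rng = np.random.default_rng(1)
healthy = rng.normal(0, 1, 300)
diseased = np.concatenate([rng.normal(-3, 1, 150), rng.normal(3, 1, 150)])
# the one-sided AUC is near 0.5 here, while the two-sided AUC is near 1:
# a monotone ROC analysis would wrongly call this marker useless
```
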

  9. Software for Managing Parametric Studies

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian

    2003-01-01

    The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control-language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.

  10. Ionospheric modifications in high frequency heating experiments

    NASA Astrophysics Data System (ADS)

    Kuo, Spencer P.

    2015-01-01

    Featured observations in high-frequency (HF) heating experiments conducted at Arecibo, EISCAT, and the High Frequency Active Auroral Research Program (HAARP) are discussed. These phenomena, appearing in the F region of the ionosphere, include HF-heater-enhanced plasma lines, airglow enhancement, energetic electron flux, artificial ionization layers, artificial spread-F, ionization enhancement, artificial cusp, wideband absorption, short-scale (meters) density irregularities, and stimulated electromagnetic emissions, which were observed when O-mode HF heater waves with frequencies below foF2 were applied. The implication and associated physical mechanism of each observation are discussed and explained. It is shown that these phenomena caused by the HF heating are all ascribed, directly or indirectly, to the excitation of parametric instabilities which instigate anomalous heating. Formulation and analysis of parametric instabilities are presented. The results show that the oscillating two-stream instability and the parametric decay instability can be excited by the O-mode HF heater waves, transmitted from all three heating facilities, in the regions near the HF reflection height and near the upper-hybrid resonance layer. The excited Langmuir waves, upper-hybrid waves, ion acoustic waves, lower-hybrid waves, and field-aligned density irregularities set off subsequent wave-wave and wave-electron interactions, giving rise to the observed phenomena.

  11. Grain-scale modeling and splash parametrization for aeolian sand transport.

    PubMed

    Lämmel, Marc; Dzikowski, Kamil; Kroy, Klaus; Oger, Luc; Valance, Alexandre

    2017-02-01

    The collision of a spherical grain with a granular bed is commonly parametrized by the splash function, which provides the velocity of the rebounding grain and the velocity distribution and number of ejected grains. Starting from elementary geometric considerations and physical principles, like momentum conservation and energy dissipation in inelastic pair collisions, we derive a rebound parametrization for the collision of a spherical grain with a granular bed. Combined with a recently proposed energy-splitting model [Ho et al., Phys. Rev. E 85, 052301 (2012)] that predicts how the impact energy is distributed among the bed grains, this yields a coarse-grained but complete characterization of the splash as a function of the impact velocity and the impactor-bed grain-size ratio. The predicted mean values of the rebound angle, total and vertical restitution, ejection speed, and number of ejected grains are in excellent agreement with experimental literature data and with our own discrete-element computer simulations. We extract a set of analytical asymptotic relations for shallow impact geometries, which can readily be used in coarse-grained analytical modeling or computer simulations of geophysical particle-laden flows.

  12. Convergent linkage evidence from two Latin-American population isolates supports the presence of a susceptibility locus for bipolar disorder in 5q31-34.

    PubMed

    Herzberg, Ibi; Jasinska, Anna; García, Jenny; Jawaheer, Damini; Service, Susan; Kremeyer, Barbara; Duque, Constanza; Parra, María V; Vega, Jorge; Ortiz, Daniel; Carvajal, Luis; Polanco, Guadalupe; Restrepo, Gabriel J; López, Carlos; Palacio, Carlos; Levinson, Matthew; Aldana, Ileana; Mathews, Carol; Davanzo, Pablo; Molina, Julio; Fournier, Eduardo; Bejarano, Julio; Ramírez, Magui; Ortiz, Carmen Araya; Araya, Xinia; Sabatti, Chiara; Reus, Victor; Macaya, Gabriel; Bedoya, Gabriel; Ospina, Jorge; Freimer, Nelson; Ruiz-Linares, Andrés

    2006-11-01

    We performed a whole genome microsatellite marker scan in six multiplex families with bipolar (BP) mood disorder ascertained in Antioquia, a historically isolated population from North West Colombia. These families were characterized clinically using the approach employed in independent ongoing studies of BP in the closely related population of the Central Valley of Costa Rica. The most consistent linkage results from parametric and non-parametric analyses of the Colombian scan involved markers on 5q31-33, a region implicated by the previous studies of BP in Costa Rica. Because of these concordant results, a follow-up study with additional markers was undertaken in an expanded set of Colombian and Costa Rican families; this provided genome-wide significant evidence of linkage of BPI to a candidate region of approximately 10 cM in 5q31-33 (maximum non-parametric linkage score=4.395, P<0.00004). Interestingly, this region has been implicated in several previous genetic studies of schizophrenia and psychosis, including disease association with variants of the enthoprotin and gamma-aminobutyric acid receptor genes.

  13. Predicting astronaut radiation doses from major solar particle events using artificial intelligence

    NASA Astrophysics Data System (ADS)

    Tehrani, Nazila H.

    1998-06-01

    Space radiation is an important issue for manned space flight. For long missions outside of the Earth's magnetosphere, there are two major sources of exposure: large Solar Particle Events (SPEs), consisting of numerous energetic protons and other heavy ions emitted by the Sun, and Galactic Cosmic Rays (GCRs), which constitute an isotropic radiation field of low flux and high energy. In deep-space missions both SPEs and GCRs can be hazardous to the space crew. SPEs can deliver an acute dose, which is a large dose over a short period of time. The acute dose from a large SPE that could be received by an astronaut with shielding as thick as a spacesuit may be as large as 500 cGy. GCRs will not deliver acute doses, but may increase the lifetime risk of cancer from prolonged exposures in a range of 40-50 cSv/yr. In this research, we use artificial intelligence to model the dose-time profiles during a major solar particle event. Artificial neural networks are reliable approximators for nonlinear functions. In this study we design a dynamic network that can update its dose predictions as new input dose data are received while the event is occurring. To accomplish this temporal behavior we use an innovative Sliding Time-Delay Neural Network (STDNN), which can predict doses received from large SPEs while the event is happening. The parametric fits and actual calculated doses for the skin, eye, and bone marrow are used. The parametric data set, obtained by fitting Weibull functional forms to the calculated dose points, has been divided into two subsets. The STDNN has been trained using some of these parametric events; the other subset of parametric data and the actual doses are used for testing with the resulting weights and biases of the first set, to show that the network can generalize. Results of this testing indicate that the STDNN is capable of predicting doses from events that it has not seen before.
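
    The Weibull dose-time fit mentioned above can be sketched as follows, with synthetic dose points and illustrative parameter names (the STDNN itself is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_dose(t, d_inf, tau, gamma):
    """Cumulative dose-time profile with a Weibull functional form:
    D(t) = D_inf * (1 - exp(-(t/tau)^gamma))."""
    return d_inf * (1.0 - np.exp(-(t / tau) ** gamma))

# Synthetic "event" data (hypothetical, for illustration only).
t = np.linspace(0.5, 48.0, 40)                 # hours after event onset
true_params = (120.0, 10.0, 1.8)               # d_inf (cGy), tau (h), gamma
rng = np.random.default_rng(0)
d_obs = weibull_dose(t, *true_params) + rng.normal(0.0, 1.0, t.size)

# Fit the profile; re-fitting as new points arrive is what an on-line
# predictor would update during the event.
popt, _ = curve_fit(weibull_dose, t, d_obs, p0=(100.0, 5.0, 1.0),
                    bounds=(0.0, [1e4, 1e3, 10.0]))
d_inf_hat, tau_hat, gamma_hat = popt
```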

  14. Data linkage of inpatient hospitalization and workers' claims data sets to characterize occupational falls.

    PubMed

    Bunn, Terry L; Slavova, Svetla; Bathke, Arne

    2007-07-01

    The identification of industry, occupation, and associated injury costs for worker falls in Kentucky has not been fully examined. The purpose of this study was to determine the associations between industry and occupation and 1) hospitalization length of stay; 2) hospitalization charges; and 3) workers' claims costs in workers suffering falls, using linked inpatient hospitalization discharge and workers' claims data sets. Hospitalization cases were selected with ICD-9-CM external cause of injury codes for falls and payer code of workers' claims for years 2000-2004. Selection criteria for workers' claims cases were International Association of Industrial Accident Boards and Commissions Electronic Data Interchange Nature (IAIABCEDIN) injuries coded as falls and/or slips. Common data variables between the two data sets such as date of birth, gender, date of injury, and hospital admission date were used to perform probabilistic data linkage using LinkSolv software. Statistical analysis was performed with non-parametric tests. Construction falls were the most prevalent for male workers and incurred the highest hospitalization and workers' compensation costs, whereas most female worker falls occurred in the services industry. The largest percentage of male worker falls was from one level to another, while the largest percentage of females experienced a fall, slip, or trip (not otherwise classified). When male construction worker falls were further analyzed, laborers and helpers had longer hospital stays as well as higher total charges when the worker fell from one level to another. Data linkage of hospitalization and workers' claims falls data provides additional information on industry, occupation, and costs that are not available when examining either data set alone.
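
    Probabilistic linkage of the kind performed by LinkSolv is commonly formalized with Fellegi-Sunter match weights. A schematic sketch, using the shared fields named above with hypothetical m/u probabilities (not values from the study):

```python
import math

# Hypothetical per-field probabilities: m = P(field agrees | true match),
# u = P(field agrees | non-match).  Values are illustrative only.
FIELDS = {
    "date_of_birth": (0.98, 0.01),
    "gender":        (0.99, 0.50),
    "injury_date":   (0.95, 0.02),
    "admit_date":    (0.90, 0.05),
}

def match_weight(agreements):
    """Fellegi-Sunter log2 weight for a candidate record pair.

    agreements: dict field -> bool (True if the field values agree).
    Agreement on a field contributes log2(m/u); disagreement contributes
    log2((1-m)/(1-u)).  Pairs scoring above a chosen threshold are linked.
    """
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if agreements[field]:
            w += math.log2(m / u)
        else:
            w += math.log2((1.0 - m) / (1.0 - u))
    return w
```

    In practice the thresholds are tuned so that the expected rates of false links and false non-links are acceptable for the analysis.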

  15. Cognitive control over learning: Creating, clustering and generalizing task-set structure

    PubMed Central

    Collins, Anne G.E.; Frank, Michael J.

    2013-01-01

    Executive functions and learning share common neural substrates essential for their expression, notably in prefrontal cortex and basal ganglia. Understanding how they interact requires studying how cognitive control facilitates learning, but also how learning provides the (potentially hidden) structure, such as abstract rules or task-sets, needed for cognitive control. We investigate this question from three complementary angles. First, we develop a new computational “C-TS” (context-task-set) model inspired by non-parametric Bayesian methods, specifying how the learner might infer hidden structure and decide whether to re-use that structure in new situations, or to create new structure. Second, we develop a neurobiologically explicit model to assess potential mechanisms of such interactive structured learning in multiple circuits linking frontal cortex and basal ganglia. We systematically explore the link between these levels of modeling across multiple task demands. We find that the network provides an approximate implementation of high-level C-TS computations, where manipulations of specific neural mechanisms are well captured by variations in distinct C-TS parameters. Third, this synergism across models yields strong predictions about the nature of human optimal and suboptimal choices and response times during learning. In particular, the models suggest that participants spontaneously build task-set structure into a learning problem when not cued to do so, which predicts positive and negative transfer in subsequent generalization tests. We provide evidence for these predictions in two experiments and show that the C-TS model provides a good quantitative fit to human sequences of choices in this task. These findings implicate a strong tendency to interactively engage cognitive control and learning, resulting in structured abstract representations that afford generalization opportunities, and thus potentially long-term rather than short-term optimality. 
PMID:23356780
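
    In non-parametric Bayesian clustering of the kind that inspired C-TS, the "re-use or create" decision over task-sets is typically governed by a Chinese-restaurant-process prior. A minimal sketch (the concentration parameter alpha is illustrative, not a fitted C-TS value):

```python
def crp_probabilities(counts, alpha=1.0):
    """Chinese-restaurant-process prior over task-sets.

    counts: number of contexts already assigned to each existing task-set.
    Returns the probability of re-using each existing task-set, plus (as
    the last entry) the probability of creating a new one.  alpha is the
    concentration parameter: larger alpha favors creating new structure.
    """
    total = sum(counts) + alpha
    return [n / total for n in counts] + [alpha / total]
```

    For example, with two task-sets used by 3 and 1 contexts and alpha = 1, the prior favors re-using the more popular task-set (probability 0.6) over the rarer one or a new one (0.2 each).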

  16. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    PubMed

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
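
    The core MRPP computation can be sketched in a few lines of Python (a simplified version, not the SPSS macros; the choice of group weights and distance measure varies across MRPP variants):

```python
import itertools
import numpy as np

def mrpp(x, labels, n_perm=2000, seed=0):
    """Simplified multi-response permutation procedure.

    delta is the group-size-weighted mean of within-group pairwise
    Euclidean distances; the p-value is the fraction of random label
    permutations whose delta is at least as small as the observed one.
    """
    x = np.asarray(x, float)
    labels = np.asarray(labels)

    def delta(lab):
        d, n = 0.0, len(lab)
        for g in np.unique(lab):
            pts = x[lab == g]
            pairs = itertools.combinations(range(len(pts)), 2)
            within = np.mean([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs])
            d += (len(pts) / n) * within
        return d

    obs = delta(labels)
    rng = np.random.default_rng(seed)
    count = sum(delta(rng.permutation(labels)) <= obs for _ in range(n_perm))
    return obs, (count + 1) / (n_perm + 1)
```

    Small, well-separated groups yield a small observed delta and correspondingly a small p-value, without any distributional assumption on the data.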

  17. Novel non-parametric models to estimate evolutionary rates and divergence times from heterochronous sequence data.

    PubMed

    Fourment, Mathieu; Holmes, Edward C

    2014-07-24

    Early methods for estimating divergence times from gene sequence data relied on the assumption of a molecular clock. More sophisticated methods were created to model rate variation and used auto-correlation of rates, local clocks, or the so-called "uncorrelated relaxed clock" where substitution rates are assumed to be drawn from a parametric distribution. In the case of Bayesian inference methods, the impact of the prior on branching times is not clearly understood, and if the amount of data is limited the posterior could be strongly influenced by the prior. We develop a maximum likelihood method--Physher--that uses local or discrete clocks to estimate evolutionary rates and divergence times from heterochronous sequence data. Using two empirical data sets we show that our discrete clock estimates are similar to those obtained by other methods, and that Physher outperformed some methods in the estimation of the root age of an influenza virus data set. A simulation analysis suggests that Physher can outperform a Bayesian method when the real topology contains two long branches below the root node, even when evolution is strongly clock-like. These results suggest it is advisable to use a variety of methods to estimate evolutionary rates and divergence times from heterochronous sequence data. Physher and the associated data sets used here are available online at http://code.google.com/p/physher/.

  18. Any Two Learning Algorithms Are (Almost) Exactly Identical

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2000-01-01

    This paper shows that if one is provided with a loss function, it can be used in a natural way to specify a distance measure quantifying the similarity of any two supervised learning algorithms, even non-parametric algorithms. Intuitively, this measure gives the fraction of targets and training sets for which the expected performance of the two algorithms differs significantly. Bounds on the value of this distance are calculated for the case of binary outputs and 0-1 loss, indicating that any two learning algorithms are almost exactly identical for such scenarios. As an example, for any two algorithms A and B, even for small input spaces and training sets, for less than 2e(-50) of all targets will the difference between A's and B's generalization performance exceed 1%. In particular, this is true if B is bagging applied to A, or boosting applied to A. These bounds can be viewed alternatively as telling us, for example, that the simple English phrase 'I expect that algorithm A will generalize from the training set with an accuracy of at least 75% on the rest of the target' conveys 20,000 bytes of information concerning the target. The paper ends by discussing some of the subtleties of extending the distance measure to give a full (non-parametric) differential geometry of the manifold of learning algorithms.

  19. Bim and Gis: when Parametric Modeling Meets Geospatial Data

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Banfi, F.

    2017-12-01

    Geospatial data have a crucial role in several projects related to infrastructures and land management. GIS software are able to perform advanced geospatial analyses, but they lack several instruments and tools for parametric modelling typically available in BIM. At the same time, BIM software designed for buildings have limited tools to handle geospatial data. As things stand at the moment, BIM and GIS could appear as complementary solutions, although research work is currently under way to ensure a better level of interoperability, especially at the scale of the building. On the other hand, the transition from the local (building) scale to the infrastructure (where geospatial data cannot be neglected) has already demonstrated that parametric modelling integrated with geoinformation is a powerful tool to simplify and speed up some phases of the design workflow. This paper reviews such mixed approaches with both simulated and real examples, demonstrating that integration is already a reality at specific scales, which are not dominated by "pure" GIS or BIM. The paper will also demonstrate that some traditional operations carried out with GIS software are also available in parametric modelling software for BIM, such as transformation between reference systems, DEM generation, feature extraction, and geospatial queries. A real case study is illustrated and discussed to show the advantage of a combined use of both technologies. BIM and GIS integration can generate greater usage of geospatial data in the AECOO (Architecture, Engineering, Construction, Owner and Operator) industry, as well as new solutions for parametric modelling with additional geoinformation.

  20. Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.

    2000-01-01

    A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework, commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected. It required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantify the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to have an influence on rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
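
    The response-surface step described above can be sketched generically: fit a model with intercept, linear, bilinear, and curvilinear terms by least squares. The example below uses two synthetic factors and made-up coefficients, not the six-variable NPARC data:

```python
import numpy as np

# Synthetic two-factor example with 36 "runs" (mirroring the study's run
# count; the data and coefficients here are invented for illustration).
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 36), rng.uniform(-1, 1, 36)
y = (2.0 + 0.5 * x1 - 0.3 * x2        # linear effects
     + 0.8 * x1 * x2                  # bilinear (interaction) effect
     + 0.4 * x1**2                    # curvilinear effect
     + rng.normal(0.0, 0.01, 36))     # small "numerical" noise

# Design matrix: intercept, linear, bilinear, and curvilinear columns.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    With a D-optimal selection of runs, the same fit is obtained from far fewer simulations than the full factorial would require, which is the point of the design-of-experiments framing.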

  1. Assessment of Self-Efficacy and its Relationship with Frailty in the Elderly

    PubMed Central

    Doba, Nobutaka; Tokuda, Yasuharu; Saiki, Keiichirou; Kushiro, Toshio; Hirano, Masumi; Matsubara, Yoshihiro; Hinohara, Shigeaki

    2016-01-01

    Objective It has been increasingly recognized in various clinical areas that self-efficacy promotes the level of competence in patients. The validity, applicability and potential usefulness of a new, simple model for assessing self-efficacy in the elderly with special reference to frailty were investigated for improving elderly patients' accomplishments. Methods The subjects of the present study comprised 257 elderly people who were members of the New Elder Citizen Movement in Japan and their mean age was 82.3±3.8 years. Interview materials including self-efficacy questionnaires were sent to all participants in advance and all other physical examinations were performed at the Life Planning Center Clinic. Results The internal consistency and close relation among a set of items used as a measure of self-efficacy were evaluated by Cronbach's alpha index, which was 0.79. Although no age-dependent difference was identified in either sex, gender-related differences in some factors were noted. Regarding several parametric measures, Beck's inventory alone revealed a significant relationship to self-efficacy in both sexes. Additionally, non-parametric items such as stamina, power and memory were strongly correlated with self-efficacy in both sexes. Frailty showed a significant independent relationship with self-efficacy in a multiple linear regression model analysis and using Beck's inventory, stamina, power and memory were identified to be independent factors for self-efficacy. Conclusion The simple assessment of self-efficacy described in this study may be a useful tool for successful aging of elderly people. PMID:27725537
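
    The internal-consistency index used above, Cronbach's alpha, is straightforward to compute. A minimal sketch with synthetic questionnaire data (the 0.79 reported in the study came from the actual interview items, not this formula applied to these data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Synthetic questionnaire: 4 items driven by one latent trait, 200 subjects.
rng = np.random.default_rng(5)
trait = rng.normal(0.0, 1.0, 200)
consistent_items = trait[:, None] + rng.normal(0.0, 0.5, (200, 4))
alpha = cronbach_alpha(consistent_items)                       # high alpha
alpha_noise = cronbach_alpha(rng.normal(0.0, 1.0, (200, 4)))   # near zero
```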

  3. Physical, clinical, and psychosocial parameters of adolescents with different degrees of excess weight☆

    PubMed Central

    Antonini, Vanessa Drieli Seron; da Silva, Danilo Fernandes; Bianchini, Josiane Aparecida Alves; Lopera, Carlos Andres; Moreira, Amanda Caroline Teles; Locateli, João Carlos; Nardo, Nelson

    2014-01-01

    OBJECTIVE: To compare body composition, hemodynamic parameters, health-related physical fitness, and health-related quality of life of adolescents with anthropometric diagnosis of overweight, obesity, and severe obesity. METHODS: 220 adolescents with excess body weight were enrolled. They were beginners in an intervention program that included patients based on age, availability, presence of excess body weight, place of residence, and agreement to participate in the study. This study collected anthropometric and hemodynamic variables, health-related physical fitness, and health-related quality of life of the adolescents. To compare the three groups according to nutritional status, parametric and non-parametric tests were applied. Significance level was set at p<0.05. RESULTS: There was no significant difference in resting heart rate, health-related physical fitness, relative body fat, absolute and relative lean mass, and health-related quality of life between overweight, obese, and severely obese adolescents (p>0.05). Body weight, body mass index, waist and hip circumference, and systolic blood pressure increased as degree of excess weight increased (p<0.05). Diastolic blood pressure of the severe obesity group was higher than the other groups (p<0.05). There was an association between the degree of excess weight and the prevalence of altered blood pressure (overweight: 12.1%; obesity: 28.1%; severe obesity: 45.5%; p<0.001). The results were similar when genders were analyzed separately. CONCLUSION: Results suggest that overweight adolescents presented similar results compared to obese and severely obese adolescents in most of the parameters analyzed. PMID:25510998

  4. NOEC and LOEC as merely concessive expedients: two unambiguous alternatives and some criteria to maximize the efficiency of dose-response experimental designs.

    PubMed

    Murado, M A; Prieto, M A

    2013-09-01

    NOEC and LOEC (no and lowest observed effect concentrations, respectively) are toxicological concepts derived from analysis of variance (ANOVA), a not very sensitive method that produces ambiguous results and does not provide confidence intervals (CI) of its estimates. For a long time, despite the abundant criticism that such concepts have raised, the field of ecotoxicology has been reticent to abandon them (two possible reasons will be discussed), adducing the difficulty of clear alternatives. However, this work proves that a debugged dose-response (DR) modeling, through explicit algebraic equations, enables two simple options to accurately calculate the CI of substantially lower doses than NOEC. Both ANOVA and DR analyses are affected by the experimental error, response profile, number of observations and experimental design. The study of these effects--analytically complex and experimentally unfeasible--was carried out using systematic simulations with realistic data, including different error levels. Results revealed the weakness of the NOEC and LOEC notions, confirmed the feasibility of the proposed alternatives and made it possible to discuss the--often violated--conditions that minimize the CI of the parametric estimates from DR assays. In addition, a table was developed providing the experimental design that minimizes the parametric CI for a given set of working conditions. This makes it possible to reduce the experimental effort and to avoid the inconclusive results that are frequently obtained from intuitive experimental plans. Copyright © 2013 Elsevier B.V. All rights reserved.
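
    The kind of explicit dose-response fit advocated above, with a confidence interval on the fitted parameters, can be sketched generically. The model, data, and parameter names below are illustrative, not the paper's equations:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, ec50, b):
    """Two-parameter log-logistic dose-response; effect rises from 0 to 1."""
    return 1.0 / (1.0 + (dose / ec50) ** (-b))

# Hypothetical assay: 8 doses with small measurement error.
dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
rng = np.random.default_rng(2)
effect = log_logistic(dose, 8.0, 2.0) + rng.normal(0.0, 0.02, dose.size)

# Explicit parametric fit: point estimates plus an approximate 95% CI for
# EC50 from the parameter covariance -- the kind of interval an
# ANOVA-derived NOEC/LOEC cannot provide.
popt, pcov = curve_fit(log_logistic, dose, effect, p0=(5.0, 1.0),
                       bounds=(1e-6, np.inf))
ec50_hat, b_hat = popt
se_ec50 = np.sqrt(pcov[0, 0])
ci_low, ci_high = ec50_hat - 1.96 * se_ec50, ec50_hat + 1.96 * se_ec50
```

    The same machinery yields CIs for low-effect doses (e.g. an EC10) by propagating the parameter covariance through the inverted model, which is the spirit of the alternatives the paper proposes.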

  5. Monitoring of metallic contaminants in energy drinks using ICP-MS.

    PubMed

    Kilic, Serpil; Cengiz, Mehmet Fatih; Kilic, Murat

    2018-03-09

    In this study, an improved method was validated for the determination of some metallic contaminants (arsenic (As), chromium (Cr), cadmium (Cd), lead (Pb), iron (Fe), nickel (Ni), copper (Cu), manganese (Mn), and antimony (Sb)) in energy drinks using inductively coupled plasma mass spectrometry (ICP-MS). The validation procedure was applied for the evaluation of linearity, repeatability, recovery, and the limits of detection and quantification. In addition, to verify the trueness of the method, the laboratory participated in an interlaboratory proficiency test for heavy metals in soft drinks organized by the LGC (Laboratory of the Government Chemist) Standard. The validated method was then used to determine metallic contaminants in commercial energy drink samples. Concentrations of As, Cr, Cd, Pb, Fe, Ni, Cu, Mn, and Sb in the samples were found in the ranges of 0.76-6.73, 13.25-100.96, 0.16-2.11, 9.33-28.96, 334.77-937.12, 35.98-303.97, 23.67-60.48, 5.45-489.93, and 0.01-0.42 μg L-1, respectively. The results were compared with the provisional guideline or parametric values of the elements for drinking waters set by the WHO (World Health Organization) and EC (European Commission). As, Cd, Cu, and Sb did not exceed the WHO and EC provisional guideline or parametric values. However, the other elements (Cr, Pb, Fe, Ni, and Mn) were found to be higher than their relevant limits at various levels.
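
    A screening step of this kind reduces to comparing measured concentrations against tabulated limit values. In the sketch below, both the limits and the sample concentrations are placeholders for illustration; consult the current WHO guidelines and EC directive for the authoritative parametric values:

```python
# Illustrative limit values in μg/L (placeholders, not official figures).
LIMITS_UG_L = {"As": 10.0, "Cd": 3.0, "Cr": 50.0, "Pb": 10.0, "Ni": 70.0}

def exceedances(sample):
    """Return the elements whose measured concentration exceeds its limit."""
    return {el: c for el, c in sample.items()
            if el in LIMITS_UG_L and c > LIMITS_UG_L[el]}

# Hypothetical sample concentrations in μg/L.
sample = {"As": 6.7, "Cd": 2.1, "Cr": 101.0, "Pb": 29.0, "Ni": 304.0}
over = exceedances(sample)
```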

  6. Weight and the Future of Space Flight Hardware Cost Modeling

    NASA Technical Reports Server (NTRS)

    Prince, Frank A.

    2003-01-01

    Weight has been used as the primary input variable for cost estimating almost as long as there have been parametric cost models. While there are good reasons for using weight, serious limitations exist. These limitations have been addressed by multi-variable equations and trend analysis in models such as NAFCOM, PRICE, and SEER; however, these models have not been able to address the significant time lags that can occur between the development of similar space flight hardware systems. These time lags make the cost analyst's job difficult because insufficient data exists to perform trend analysis, and the current set of parametric models are not well suited to accommodating process improvements in space flight hardware design, development, build and test. As a result, people of good faith can have serious disagreement over the cost for new systems. To address these shortcomings, new cost modeling approaches are needed. The most promising approach is process based (sometimes called activity) costing. Developing process based models will require a detailed understanding of the functions required to produce space flight hardware combined with innovative approaches to estimating the necessary resources. Particularly challenging will be the lack of data at the process level. One method for developing a model is to combine notional algorithms with a discrete event simulation and model changes to the total cost as perturbations to the program are introduced. Despite these challenges, the potential benefits are such that efforts should be focused on developing process based cost models.

  7. Historic Bim: a New Repository for Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Banfi, F.; Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2017-05-01

    Recent developments in Building Information Modelling (BIM) technologies are facilitating the management of historic complex structures using new applications. This paper proposes a generative method combining the morphological and typological aspects of the historic buildings (H-BIM) with a set of monitoring information. This combination of 3D digital survey, parametric modelling and monitoring datasets allows for the development of a system for archiving and visualizing structural health monitoring (SHM) data (Fig. 1). The availability of a BIM database allows one to integrate different kinds of data stored in different ways (e.g. reports, tables, graphs, etc.) with a representation directly connected to the 3D model of the structure with appropriate levels of detail (LoD). Data can be interactively accessed by selecting specific objects of the BIM, i.e. connecting the 3D position of the sensors installed with additional digital documentation. Such innovative BIM objects, which form a new BIM family for SHM, can then be reused in other projects, facilitating data archiving and exploitation of the data acquired and processed. The application of advanced modeling techniques allows for the reduction of the time and costs of the generation process, and supports cooperation between different disciplines using a central workspace. However, it also reveals new challenges for parametric software and exchange formats. The case study presented is the medieval bridge Azzone Visconti in Lecco (Italy), in which multi-temporal vertical movements during load testing were integrated into H-BIM.

  8. Significance of parametric spectral ratio methods in detection and recognition of whispered speech

    NASA Astrophysics Data System (ADS)

    Mathur, Arpit; Reddy, Shankar M.; Hegde, Rajesh M.

    2012-12-01

    In this article the significance of a new parametric spectral ratio method that can be used to detect whispered speech segments within normally phonated speech is described. Adaptation methods based on the maximum likelihood linear regression (MLLR) are then used to realize a mismatched train-test style speech recognition system. This proposed parametric spectral ratio method computes a ratio spectrum of the linear prediction (LP) and the minimum variance distortionless response (MVDR) methods. The smoothed ratio spectrum is then used to detect whispered segments of speech within neutral speech segments effectively. The proposed LP-MVDR ratio method exhibits robustness at different SNRs as indicated by the whisper diarization experiments conducted on the CHAINS and the cell phone whispered speech corpus. The proposed method also performs reasonably better than the conventional methods for whisper detection. In order to integrate the proposed whisper detection method into a conventional speech recognition engine with minimal changes, adaptation methods based on the MLLR are used herein. The hidden Markov models corresponding to neutral mode speech are adapted to the whispered mode speech data in the whispered regions as detected by the proposed ratio method. The performance of this method is first evaluated on whispered speech data from the CHAINS corpus. The second set of experiments is conducted on the cell phone corpus of whispered speech. This corpus is collected using a set up that is used commercially for handling public transactions. The proposed whisper speech recognition system exhibits reasonably better performance when compared to several conventional methods. The results shown indicate the possibility of a whispered speech recognition system for cell phone based transactions.

  9. Characterizing Heterogeneity within Head and Neck Lesions Using Cluster Analysis of Multi-Parametric MRI Data

    PubMed Central

    Borri, Marco; Schmidt, Maria A.; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M.; Partridge, Mike; Bhide, Shreerang A.; Nutting, Christopher M.; Harrington, Kevin J.; Newbold, Katie L.; Leach, Martin O.

    2015-01-01

    Purpose To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. Material and Methods The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted imaging MRI data from a cohort of squamous cell carcinoma of the head and neck patients. Cumulative distributions of voxels, containing pre and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. Results The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. Conclusion The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes. PMID:26398888
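
    The generic workflow of partitioning voxelwise multi-parametric features and picking the number of clusters by validation can be sketched with k-means and the silhouette criterion. The features below are synthetic stand-ins (e.g. a DCE-MRI parameter and an ADC value per voxel); the study used its own data and validation procedure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic per-voxel feature vectors: four well-separated sub-regions.
rng = np.random.default_rng(3)
voxels = np.vstack([rng.normal(c, 0.1, (150, 2)) for c in
                    [(0.2, 0.2), (0.2, 0.8), (0.8, 0.2), (0.8, 0.8)]])

# Cluster validation: choose k by the silhouette criterion.
scores = {}
for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels)
    scores[k] = silhouette_score(voxels, labels)
best_k = max(scores, key=scores.get)
```

    With an optimal k selected independently of any ground truth, the resulting sub-regions can then be tracked across time points, as the study does for pre- and post-treatment scans.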

  10. Clustering of longitudinal data by using an extended baseline: A new method for treatment efficacy clustering in longitudinal data.

    PubMed

    Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine

    2018-01-01

    Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify the treatment responders and the non-responders. In the context of longitudinal cluster analyses, sample size and variability of the times of measurements are the main issues with the current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline. The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the clustering of longitudinal data by using an extended baseline method with the latent-class mixed model. The clustering of longitudinal data by using an extended baseline method with the two model-based algorithms was the most robust option. The clustering of longitudinal data by using an extended baseline method with all the non-parametric algorithms failed when there were unequal variances of treatment effect between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when the between-patients slope variability was high. Two real data sets on neurodegenerative disease and on obesity illustrate the clustering of longitudinal data by using an extended baseline method and show how clustering may help to identify the marker(s) of the treatment response. The application of the clustering of longitudinal data by using an extended baseline method in exploratory analysis as the first stage before setting up stratified designs can provide a better estimation of treatment effect in future clinical trials.
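The two-step idea (estimate each subject's treatment-period trend, then cluster those estimates) can be sketched as follows. The piecewise linear mixed model of the paper is replaced here by a simple per-subject least-squares slope, which is only a stand-in for the random-effect predictions being clustered:

```python
def ols_slope(ts, ys):
    """Least-squares slope of ys against ts (per-subject treatment trend)."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

def two_means_1d(xs, iters=50):
    """k = 2 clustering of scalars, initialised at the min and max values."""
    c = [min(xs), max(xs)]
    for _ in range(iters):
        labels = [0 if abs(x - c[0]) <= abs(x - c[1]) else 1 for x in xs]
        for k in (0, 1):
            members = [x for x, l in zip(xs, labels) if l == k]
            if members:
                c[k] = sum(members) / len(members)
    return labels

# Step 1: per-subject summary (slope of the outcome under treatment).
times = [0, 1, 2, 3]
subjects = [[10, 9, 8, 7], [10, 9.5, 8.8, 8.1],        # responders: decline
            [10, 10.1, 9.9, 10.2], [10, 10, 10.1, 9.9]]  # non-responders: flat
slopes = [ols_slope(times, ys) for ys in subjects]
# Step 2: cluster the slopes into responders / non-responders.
labels = two_means_1d(slopes)
```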

  11. Non-parametric transient classification using adaptive wavelets

    NASA Astrophysics Data System (ADS)

    Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.

    2015-11-01

    Classifying transients based on multiband light curves is a challenging but crucial problem in the era of Gaia and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement, while a major advantage of the BAGIDIS wavelets is that they are translation invariant. Hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric, so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier against the Supernova Photometric Classification Challenge to correctly classify supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our `model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.
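BAGIDIS uses an unbalanced Haar basis and a ranked probability classifier; the toy sketch below substitutes the ordinary Haar transform and a 1-nearest-neighbour rule, purely to illustrate the general idea of classifying curves through hierarchical wavelet coefficients (not the actual BAGIDIS method):

```python
def haar(xs):
    """Full Haar decomposition: returns the coarse mean followed by detail
    coefficients from coarsest to finest (len(xs) must be a power of two)."""
    coeffs = []
    while len(xs) > 1:
        approx = [(a + b) / 2 for a, b in zip(xs[::2], xs[1::2])]
        detail = [(a - b) / 2 for a, b in zip(xs[::2], xs[1::2])]
        coeffs = detail + coeffs
        xs = approx
    return xs + coeffs

def classify_1nn(query, train):
    """Nearest neighbour in wavelet-coefficient space."""
    feats = [(haar(curve), label) for curve, label in train]
    q = haar(query)
    dists = [(sum((a - b) ** 2 for a, b in zip(q, f)), label) for f, label in feats]
    return min(dists)[1]

# Two hypothetical training light curves (fluxes on a regular grid):
train = [([0, 1, 4, 9, 16, 9, 4, 1], "Ia"),       # peaked light curve
         ([5, 5, 5, 6, 5, 5, 6, 5], "non-Ia")]    # flat light curve
label = classify_1nn([0, 1, 5, 10, 15, 10, 4, 1], train)
```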

  12. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

    Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis on the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
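The standard (linear) Patlak model referenced above states that, once the tracer reaches quasi-equilibrium, C_T(t)/C_p(t) = Ki · (∫₀ᵗ C_p(τ)dτ)/C_p(t) + V, so the influx rate Ki is the slope of a straight-line fit in the transformed coordinates. A minimal sketch of that fit (the generalized model of the paper adds a reversible-loss term, which is omitted here); the constant-input data are synthetic:

```python
def cumtrapz(ts, ys):
    """Running trapezoidal integral of ys over ts."""
    out, acc = [0.0], 0.0
    for i in range(1, len(ts)):
        acc += 0.5 * (ys[i] + ys[i - 1]) * (ts[i] - ts[i - 1])
        out.append(acc)
    return out

def patlak_ki(ts, cp, ct):
    """Slope of C_T/C_p against (integral of C_p)/C_p: the influx rate Ki."""
    xs = [I / c for I, c in zip(cumtrapz(ts, cp), cp)]
    ys = [a / c for a, c in zip(ct, cp)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Constant plasma input; tissue curve generated with Ki = 0.05, V = 0.3:
ts = [0, 5, 10, 15, 20, 25, 30]
cp = [1.0] * len(ts)
ct = [0.05 * t + 0.3 for t in ts]
ki = patlak_ki(ts, cp, ct)
```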

  13. Fundamental cavity impedance and longitudinal coupled-bunch instabilities at the High Luminosity Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Baudrenghien, P.; Mastoridis, T.

    2017-01-01

    The interaction between beam dynamics and the radio frequency (rf) station in circular colliders is complex and can lead to longitudinal coupled-bunch instabilities at high beam currents. The excitation of the cavity higher order modes is traditionally damped using passive devices. But the wakefield developed at the cavity fundamental frequency falls in the frequency range of the rf power system and can, in theory, be compensated by modulating the generator drive. Such a regulation is the responsibility of the low-level rf (llrf) system that measures the cavity field (or beam current) and generates the rf power drive. The Large Hadron Collider (LHC) rf was designed for the nominal LHC parameter of 0.55 A DC beam current. At 7 TeV the synchrotron radiation damping time is 13 hours. Damping of the instability growth rates due to the cavity fundamental (400.789 MHz) can only come from the synchrotron tune spread (Landau damping) and will be very small (time constant on the order of 0.1 s). In this work, the ability of the present llrf compensation to prevent coupled-bunch instabilities with the planned high luminosity LHC (HiLumi LHC) doubling of the beam current to 1.1 A DC is investigated. The paper conclusions are based on the measured performance of the present llrf system. Models of the rf and llrf systems were developed at the LHC start-up. Following comparisons with measurements, the system was parametrized using these models. The parametric model then provides a more realistic estimation of the instability growth rates than an ideal model of the rf blocks. With this modeling approach, the key rf settings can be varied around their set value allowing for a sensitivity analysis (growth rate sensitivity to rf and llrf parameters). Finally, preliminary measurements from the LHC at 0.44 A DC are presented to support the conclusions of this work.

  14. Witnessing entanglement without entanglement witness operators

    PubMed Central

    Pezzè, Luca; Li, Yan; Li, Weidong; Smerzi, Augusto

    2016-01-01

    Quantum mechanics predicts the existence of correlations between composite systems that, although puzzling to our physical intuition, enable technologies not accessible in a classical world. Notwithstanding, there is still no efficient general method to theoretically quantify and experimentally detect entanglement of many qubits. Here we propose to detect entanglement by measuring the statistical response of a quantum system to an arbitrary nonlocal parametric evolution. We witness entanglement without relying on the tomographic reconstruction of the quantum state, or the realization of witness operators. The protocol requires two collective settings for any number of parties and is robust against noise and decoherence occurring after the implementation of the parametric transformation. To illustrate its user friendliness we demonstrate multipartite entanglement in different experiments with ions and photons by analyzing published data on fidelity visibilities and variances of collective observables. PMID:27681625

  15. Guidance and navigation requirements for unmanned flyby and swingby missions to the outer planets. Volume 4: High thrust mission, part 2, phase C

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The guidance and navigation requirements are determined for a set of impulsive-thrust missions involving one or more outer planets or comets. Specific missions considered include two Jupiter entry missions of 800 and 1200 day duration, two multiple swingby missions with the sequences Jupiter-Uranus-Neptune and Jupiter-Saturn-Pluto, and two comet rendezvous missions involving the short period comets P/Tempel 2 and P/Tuttle-Giacobini-Kresak. Results show the relative utility of onboard and Earth-based DSN navigation. The effects of parametric variations in navigation accuracy, measurement rate, and miscellaneous constraints are determined. The utility of a TV type onboard navigation sensor - sighting on planetary satellites and comets - is examined. Velocity corrections required for the nominal and parametrically varied cases are tabulated.

  16. Sorting of Streptomyces Cell Pellets Using a Complex Object Parametric Analyzer and Sorter

    PubMed Central

    Petrus, Marloes L. C.; van Veluw, G. Jerre; Wösten, Han A. B.; Claessen, Dennis

    2014-01-01

    Streptomycetes are filamentous soil bacteria that are used in industry for the production of enzymes and antibiotics. When grown in bioreactors, these organisms form networks of interconnected hyphae, known as pellets, which are heterogeneous in size. Here we describe a method to analyze and sort mycelial pellets using a Complex Object Parametric Analyzer and Sorter (COPAS). Detailed instructions are given for the use of the instrument and the basic statistical analysis of the data. We furthermore describe how pellets can be sorted according to user-defined settings, which enables downstream processing such as the analysis of the RNA or protein content. Using this methodology the mechanism underlying heterogeneous growth can be tackled. This will be instrumental for improving streptomycetes as a cell factory, considering the fact that productivity correlates with pellet size. PMID:24561666

  17. Boltzmann sampling for an XY model using a non-degenerate optical parametric oscillator network

    NASA Astrophysics Data System (ADS)

    Takeda, Y.; Tamate, S.; Yamamoto, Y.; Takesue, H.; Inagaki, T.; Utsunomiya, S.

    2018-01-01

    We present an experimental scheme of implementing multiple spins in a classical XY model using a non-degenerate optical parametric oscillator (NOPO) network. We built an NOPO network to simulate a one-dimensional XY Hamiltonian with 5000 spins and externally controllable effective temperatures. The XY spin variables in our scheme are mapped onto the phases of multiple NOPO pulses in a single ring cavity and interactions between XY spins are implemented by mutual injections between NOPOs. We show that the steady-state distribution of optical phases of such NOPO pulses is equivalent to the Boltzmann distribution of the corresponding XY model. Estimated effective temperatures converged to the set values, and the estimated temperatures and the mean energy exhibited good agreement with the numerical simulations of the Langevin dynamics of NOPO phases.
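For reference, the classical 1D XY energy being sampled is E = −J Σ cos(φ_{i+1} − φ_i). The NOPO network realizes Langevin dynamics of the phases; the sketch below instead uses a simple Metropolis sampler, which targets the same Boltzmann distribution and only illustrates the statistical mechanics, not the optical implementation:

```python
import math, random

def xy_energy(phis, J=1.0):
    """Open 1D XY chain: E = -J * sum cos(phi_{i+1} - phi_i)."""
    return -J * sum(math.cos(b - a) for a, b in zip(phis, phis[1:]))

def metropolis(n, beta, steps, seed=1):
    """Single-site Metropolis updates targeting exp(-beta * E)."""
    rng = random.Random(seed)
    phis = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    e = xy_energy(phis)
    for _ in range(steps):
        i = rng.randrange(n)
        old = phis[i]
        phis[i] = rng.uniform(0, 2 * math.pi)
        de = xy_energy(phis) - e
        if de <= 0 or rng.random() < math.exp(-beta * de):
            e += de        # accept the move
        else:
            phis[i] = old  # reject: restore the old phase
    return phis, e

# At low effective temperature (large beta) the chain should nearly align,
# so the energy approaches the ground-state value -(n - 1) * J.
phis, e = metropolis(n=20, beta=20.0, steps=20000, seed=1)
```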

  18. Shape-Driven 3D Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2013-01-01

    This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of the brain caudate nucleus, a structure of interest in the study of schizophrenia. Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details. PMID:17354875

  19. Geometric calibration of a coordinate measuring machine using a laser tracking system

    NASA Astrophysics Data System (ADS)

    Umetsu, Kenta; Furutnani, Ryosyu; Osawa, Sonko; Takatsuji, Toshiyuki; Kurosawa, Tomizo

    2005-12-01

    This paper proposes a calibration method for a coordinate measuring machine (CMM) using a laser tracking system. The laser tracking system can measure three-dimensional coordinates based on the principle of trilateration with high accuracy and is easy to set up. The accuracy of length measurement of a single laser tracking interferometer (laser tracker) is about 0.3 µm over a length of 600 mm. In this study, we first measured 3D coordinates using the laser tracking system. Secondly, 21 geometric errors, namely, parametric errors of the CMM, were estimated by the comparison of the coordinates obtained by the laser tracking system and those obtained by the CMM. As a result, the estimated parametric errors agreed with those estimated by a ball plate measurement, which demonstrates the validity of the proposed calibration system.
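The trilateration principle mentioned above can be sketched directly: subtracting the first range equation |x − p₁|² = d₁² from each of the others linearises the system, so the target point follows from a small linear solve. The four tracker positions and the test point below are hypothetical:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def trilaterate(stations, dists):
    """Subtracting the first range equation from the rest gives the linear
    system 2(p_i - p_1).x = |p_i|^2 - |p_1|^2 - d_i^2 + d_1^2."""
    p1, d1 = stations[0], dists[0]
    A, b = [], []
    for pi, di in zip(stations[1:], dists[1:]):
        A.append([2 * (a - c) for a, c in zip(pi, p1)])
        b.append(sum(a * a for a in pi) - sum(c * c for c in p1) - di ** 2 + d1 ** 2)
    return solve3(A, b)

# Hypothetical tracker positions (mm) and exact ranges to a known point:
stations = [(0, 0, 0), (1000, 0, 0), (0, 1000, 0), (0, 0, 1000)]
truth = (300.0, 200.0, 120.0)
dists = [sum((a - b) ** 2 for a, b in zip(truth, s)) ** 0.5 for s in stations]
est = trilaterate(stations, dists)
```

In practice the ranges carry measurement noise, so more than four stations and a least-squares solve are used.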

  20. The terrain signatures of administrative units: a tool for environmental assessment.

    PubMed

    Miliaresis, George Ch

    2009-03-01

    The quantification of knowledge related to the terrain and the landuse/landcover of administrative units in Southern Greece (Peloponnesus) is performed from the CGIAR-CSI SRTM digital elevation model and the CORINE landuse/landcover database. Each administrative unit is parametrically represented by a set of attributes related to its relief. Administrative units are classified on the basis of K-means cluster analysis to examine how they are organized into groups, and cluster-derived geometric signatures are defined. Finally, each cluster is parametrically represented on the basis of the occurrence of the CORINE landuse/landcover classes included, and thus landcover signatures are derived. The geometric and the landuse/landcover signatures revealed a terrain-dependent landuse/landcover organization that was used in the assessment of the impact of forest fires at moderate resolution scale.

  1. Convection- and SASI-driven flows in parametrized models of core-collapse supernova explosions

    DOE PAGES

    Endeve, E.; Cardall, C. Y.; Budiardja, R. D.; ...

    2016-01-21

    We present initial results from three-dimensional simulations of parametrized core-collapse supernova (CCSN) explosions obtained with our astrophysical simulation code General Astrophysical Simulation System (GenASIS). We are interested in nonlinear flows resulting from neutrino-driven convection and the standing accretion shock instability (SASI) in the CCSN environment prior to and during the explosion. By varying parameters in our model that control neutrino heating and shock dissociation, our simulations result in convection-dominated and SASI-dominated evolution. We describe this initial set of simulation results in some detail. To characterize the turbulent flows in the simulations, we compute and compare velocity power spectra from convection-dominated and SASI-dominated (both non-exploding and exploding) models. When compared to SASI-dominated models, convection-dominated models exhibit significantly more power on small spatial scales.
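The velocity power spectra used above to characterize the turbulence are, in essence, |V_k|² from a Fourier transform of the velocity field (in 3D, averaged over shells of constant wavenumber). A 1D stand-in computed with a direct DFT, fine for the small N of this sketch:

```python
import math, cmath

def power_spectrum(v):
    """One-sided power spectrum |V_k|^2 / N^2 of a 1D signal via a direct DFT."""
    n = len(v)
    spec = []
    for k in range(n // 2 + 1):
        vk = sum(v[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        spec.append(abs(vk) ** 2 / n ** 2)
    return spec

# A single-mode "flow": all power should land in wavenumber k = 3,
# with amplitude (1/2)^2 = 0.25 under this normalization.
n = 64
v = [math.sin(2 * math.pi * 3 * j / n) for j in range(n)]
spec = power_spectrum(v)
```

Comparing where such spectra carry their power (large vs small wavenumbers) is what distinguishes the convection-dominated from the SASI-dominated models.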

  2. Robust point matching via vector field consensus.

    PubMed

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which by also estimating the variance of the prior model (initialized to a large value) is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint) we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
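The EM machinery described above can be illustrated on its simplest ingredient: a two-component mixture on match residuals, with a zero-mean Gaussian for inliers and a uniform density for outliers, where the E-step computes inlier responsibilities and the M-step updates the mixing weight and variance. This toy omits the vector-field interpolation and RKHS prior entirely; the residuals are fabricated:

```python
import math

def em_inliers(residuals, area=10.0, iters=100):
    """EM for a Gaussian(0, sigma^2) inlier / uniform(1/area) outlier mixture.
    Returns the inlier responsibility of each residual."""
    mix, sigma2 = 0.5, max(r * r for r in residuals)  # start with a broad Gaussian
    for _ in range(iters):
        # E-step: responsibility of the inlier component for each residual.
        g = []
        for r in residuals:
            p_in = mix * math.exp(-r * r / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
            p_out = (1 - mix) / area
            g.append(p_in / (p_in + p_out))
        # M-step: update the mixing weight and the inlier variance.
        mix = sum(g) / len(g)
        sigma2 = max(sum(gi * r * r for gi, r in zip(g, residuals)) / sum(g), 1e-12)
    return g

residuals = [0.05, -0.02, 0.04, 0.01, -0.03, 3.2, -2.7]  # last two: outliers
resp = em_inliers(residuals)
```

Initializing the variance to a large value, as in the paper, lets the Gaussian shrink gradually onto the inliers instead of locking onto a local minimum.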

  3. The Dundee Ready Education Environment Measure (DREEM): a review of its adoption and use.

    PubMed

    Miles, Susan; Swift, Louise; Leinster, Sam J

    2012-01-01

    The Dundee Ready Education Environment Measure (DREEM) was published in 1997 as a tool to evaluate educational environments of medical schools and other health training settings and a recent review concluded that it was the most suitable such instrument. This study aimed to review the settings and purposes to which the DREEM has been applied and the approaches used to analyse and report it, with a view to guiding future users towards appropriate methodology. A systematic literature review was conducted using the Web of Knowledge databases of all articles reporting DREEM data between 1997 and 4 January 2011. The review found 40 publications, using data from 20 countries. DREEM is used in evaluation for diagnostic purposes, comparison between different groups and comparison with ideal/expected scores. A variety of non-parametric and parametric statistical methods have been applied, but their use is inconsistent. DREEM has been used internationally for different purposes and is regarded as a useful tool by users. However, reporting and analysis differs between publications. This lack of uniformity makes comparison between institutions difficult. Most users of DREEM are not statisticians and there is a need for informed guidelines on its reporting and statistical analysis.

  4. Optimization of Empirical Force Fields by Parameter Space Mapping: A Single-Step Perturbation Approach.

    PubMed

    Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E

    2017-12-12

    A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space as opposed to local values or gradients leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables a range of target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane as well as the density and heat of vaporization of the liquid at atmospheric pressure for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
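Parameter space mapping amounts to evaluating a combined error surface over a grid of parameter combinations and locating its low-error region. A toy sketch with two fabricated "observables" standing in for density and heat of vaporization (the real workflow evaluates them via single-step perturbation, not closed-form functions):

```python
def error_surface(param_grid, observables, targets):
    """Sum over target properties of the squared relative deviation,
    evaluated at every parameter combination: the surface to be mapped."""
    surface = {}
    for params in param_grid:
        surface[params] = sum(
            ((f(params) - t) / t) ** 2 for f, t in zip(observables, targets))
    return surface

# Toy "force field": two observables depending on (epsilon, sigma).
def density(p):  return 10.0 * p[0] * p[1] ** 3
def heat_vap(p): return 5.0 * p[0] + 2.0 * p[1]

grid = [(e / 10, s / 10) for e in range(1, 11) for s in range(1, 11)]
targets = [density((0.5, 0.3)), heat_vap((0.5, 0.3))]  # planted truth
surface = error_surface(grid, [density, heat_vap], targets)
best = min(surface, key=surface.get)
```

Inspecting the whole surface, rather than just the minimum, is what reveals whether the target properties constrain the parameters consistently.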

  5. To Invest or Not to Invest, That Is the Question: Analysis of Firm Behavior under Anticipated Shocks

    PubMed Central

    Kovac, Dejan; Vukovic, Vuk; Kleut, Nikola; Podobnik, Boris

    2016-01-01

    When companies are faced with an upcoming and expected economic shock some of them tend to react better than others. They adapt by initiating investments thus successfully weathering the storm, while others, even though they possess the same information set, fail to adopt the same business strategy and eventually succumb to the crisis. We use a unique setting of the recent financial crisis in Croatia as an exogenous shock that hit the country with a time lag, allowing the domestic firms to adapt. We perform a survival analysis on the entire population of 144,000 firms in Croatia during the period from 2003 to 2015, and test whether investment prior to the anticipated shock makes firms more likely to survive the recession. We find that small and micro firms, which decided to invest, had between 60 and 70% higher survival rates than similar firms that chose not to invest. This claim is supported by both non-parametric and parametric tests in the survival analysis. From a normative perspective this finding could be important in mitigating the negative effects on aggregate demand during strong recessionary periods. PMID:27508896
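The non-parametric side of such a survival analysis is typically the Kaplan-Meier product-limit estimator, which handles firms still alive at the end of the study as censored observations. A minimal sketch with fabricated exit times:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate; events[i] = 1 for failure, 0 censored.
    Returns a list of (time, S(time)) at each distinct failure time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            s *= 1 - deaths / at_risk
            curve.append((t, s))
        n_at_t = sum(1 for tt, _ in data if tt == t)
        at_risk -= n_at_t
        i += n_at_t
    return curve

# Hypothetical firm lifetimes (years until exit; 0 = still alive, censored):
curve = kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1])
```

Comparing such curves between investing and non-investing firms (e.g. with a log-rank test) is the standard non-parametric counterpart to the parametric survival models mentioned in the abstract.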

  6. Combining Search Engines for Comparative Proteomics

    PubMed Central

    Tabb, David

    2012-01-01

    Many proteomics laboratories have found spectral counting to be an ideal way to recognize biomarkers that differentiate cohorts of samples. This approach assumes that proteins that differ in quantity between samples will generate different numbers of identifiable tandem mass spectra. Increasingly, researchers are employing multiple search engines to maximize the identifications generated from data collections. This talk evaluates four strategies to combine information from multiple search engines in comparative proteomics. The “Count Sum” model pools the spectra across search engines. The “Vote Counting” model combines the judgments from each search engine by protein. Two other models employ parametric and non-parametric analyses of protein-specific p-values from different search engines. We evaluated the four strategies in two different data sets. The ABRF iPRG 2009 study generated five LC-MS/MS analyses of “red” E. coli and five analyses of “yellow” E. coli. NCI CPTAC Study 6 generated five concentrations of Sigma UPS1 spiked into a yeast background. All data were identified with X!Tandem, Sequest, MyriMatch, and TagRecon. For both sample types, “Vote Counting” appeared to manage the diverse identification sets most effectively, yielding heightened discrimination as more search engines were added.
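The "Vote Counting" strategy described above reduces to keeping each protein identified by at least a minimum number of search engines. A minimal sketch (the engine names follow the abstract; the protein accessions are fabricated):

```python
def vote_count(engine_results, min_votes=2):
    """Keep proteins identified by at least `min_votes` search engines."""
    votes = {}
    for proteins in engine_results.values():
        for p in proteins:
            votes[p] = votes.get(p, 0) + 1
    return {p for p, v in votes.items() if v >= min_votes}

results = {
    "XTandem":   {"P1", "P2", "P3"},
    "Sequest":   {"P1", "P2"},
    "MyriMatch": {"P1", "P4"},
    "TagRecon":  {"P2", "P4"},
}
accepted = vote_count(results, min_votes=2)
```

The "Count Sum" alternative would instead pool the spectral counts themselves across engines before the statistical comparison.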

  7. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  8. Comparison of mode estimation methods and application in molecular clock analysis

    NASA Technical Reports Server (NTRS)

    Hedges, S. Blair; Shah, Prachi

    2003-01-01

    BACKGROUND: Distributions of time estimates in molecular clock studies are sometimes skewed or contain outliers. In those cases, the mode is a better estimator of the overall time of divergence than the mean or median. However, different methods are available for estimating the mode. We compared these methods in simulations to determine their strengths and weaknesses and further assessed their performance when applied to real data sets from a molecular clock study. RESULTS: We found that the half-range mode and robust parametric mode methods have a lower bias than other mode methods under a diversity of conditions. However, the half-range mode suffers from a relatively high variance and the robust parametric mode is more susceptible to bias by outliers. We determined that bootstrapping reduces the variance of both mode estimators. Application of the different methods to real data sets yielded results that were concordant with the simulations. CONCLUSION: Because the half-range mode is a simple and fast method, and produced less bias overall in our simulations, we recommend the bootstrapped version of it as a general-purpose mode estimator and suggest a bootstrap method for obtaining the standard error and 95% confidence interval of the mode.
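The half-range mode recommended above can be sketched directly: repeatedly keep the half-width window containing the most points, then average the few survivors. This follows the Bickel-style formulation; the stopping rule below is a simplification:

```python
def half_range_mode(xs):
    """Half-range mode: repeatedly keep the half-width window containing
    the most points, then average the remaining values."""
    xs = sorted(xs)
    while len(xs) > 2:
        w = (xs[-1] - xs[0]) / 2
        best = xs
        for i, lo in enumerate(xs):
            window = [x for x in xs[i:] if x <= lo + w]
            if len(window) > len(best) or best is xs:
                best = window
        if len(best) == len(xs):   # no shrinkage: stop to avoid looping
            break
        xs = best
    return sum(xs) / len(xs)

# Right-skewed divergence-time estimates (arbitrary units): the mode
# should sit near the cluster at ~2 and ignore the outlier at 10.
mode = half_range_mode([1.0, 2.0, 2.1, 2.2, 10.0])
```

Bootstrapping, as the abstract recommends, would wrap this estimator in resampling to obtain its standard error and confidence interval.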

  9. PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Shiyuan; Huang, Jianhua Z.; Long, James

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
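The dense-grid search over period can be sketched in its simplest form: at each trial period, fit a sinusoid by least squares and keep the period with the smallest residual (essentially a least-squares periodogram). This toy drops the Gaussian process component entirely and uses a fabricated, irregularly sampled light curve:

```python
import math

def sine_rss(ts, ys, period):
    """Residual sum of squares of the least-squares fit
    ys ~ b*sin(w t) + c*cos(w t) at a trial period (2x2 normal equations)."""
    w = 2 * math.pi / period
    s = [math.sin(w * t) for t in ts]
    c = [math.cos(w * t) for t in ts]
    ss = sum(x * x for x in s)
    cc = sum(x * x for x in c)
    sc = sum(a * b for a, b in zip(s, c))
    sy = sum(a * y for a, y in zip(s, ys))
    cy = sum(a * y for a, y in zip(c, ys))
    det = ss * cc - sc * sc
    b = (sy * cc - cy * sc) / det
    d = (cy * ss - sy * sc) / det
    return sum((y - b * si - d * ci) ** 2 for y, si, ci in zip(ys, s, c))

# Sparse, irregular sampling (days) of a P = 330 d sinusoid (Mira-like):
ts = [3, 41, 97, 152, 211, 289, 340, 402, 463, 555, 608, 701]
ys = [math.sin(2 * math.pi * t / 330) for t in ts]
grid = list(range(100, 501, 5))
best_p = min(grid, key=lambda p: sine_rss(ts, ys, p))
```

In the actual method the residual model is a Gaussian process rather than white noise, which is what makes the likelihood surface multimodal and the grid search necessary.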

  10. To Invest or Not to Invest, That Is the Question: Analysis of Firm Behavior under Anticipated Shocks.

    PubMed

    Kovac, Dejan; Vukovic, Vuk; Kleut, Nikola; Podobnik, Boris

    2016-01-01

    When companies are faced with an upcoming and expected economic shock some of them tend to react better than others. They adapt by initiating investments thus successfully weathering the storm, while others, even though they possess the same information set, fail to adopt the same business strategy and eventually succumb to the crisis. We use a unique setting of the recent financial crisis in Croatia as an exogenous shock that hit the country with a time lag, allowing the domestic firms to adapt. We perform a survival analysis on the entire population of 144,000 firms in Croatia during the period from 2003 to 2015, and test whether investment prior to the anticipated shock makes firms more likely to survive the recession. We find that small and micro firms, which decided to invest, had between 60 and 70% higher survival rates than similar firms that chose not to invest. This claim is supported by both non-parametric and parametric tests in the survival analysis. From a normative perspective this finding could be important in mitigating the negative effects on aggregate demand during strong recessionary periods.

  11. Out-of-time-order correlators in finite open systems

    NASA Astrophysics Data System (ADS)

    Syzranov, S. V.; Gorshkov, A. V.; Galitski, V.

    2018-04-01

    We study out-of-time-order correlators (OTOCs) of the form F(t) = ⟨W†(t)V†W(t)V⟩ for a quantum system weakly coupled to a dissipative environment. Such an open system may serve as a model of, e.g., a small region in a disordered interacting medium coupled to the rest of this medium considered as an environment. We demonstrate that for a system with discrete energy levels the OTOC saturates exponentially, ∝ ∑_i a_i e^(−t/τ_i) + const, to a constant value at t → ∞, in contrast with quantum-chaotic systems, which exhibit exponential growth of OTOCs. Focusing on the case of a two-level system, we calculate microscopically the decay times τ_i and the value of the saturation constant. Because some OTOCs are immune to dephasing processes and some are not, such correlators may decay on two sets of parametrically different time scales related to inelastic transitions between the system levels and to pure dephasing processes, respectively. In the case of a classical environment, the evolution of the OTOC can be mapped onto the evolution of the density matrix of two systems coupled to the same dissipative environment.

  12. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies.

    PubMed

    Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross

    2016-06-01

    To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations, as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby the effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms and statistically by tumor region-of-interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with noise properties very similar to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models, it may not be suitable for studies investigating these phenomena.
dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.

  13. Linkage analysis of high myopia susceptibility locus in 26 families.

    PubMed

    Paget, Sandrine; Julia, Sophie; Vitezica, Zulma G; Soler, Vincent; Malecaze, François; Calvas, Patrick

    2008-01-01

    We conducted a linkage analysis in high myopia families to replicate suggestive results from chromosome 7q36 using a model of autosomal dominant inheritance and genetic heterogeneity. We also performed a genome-wide scan to identify novel loci. Twenty-six families, each with at least two highly myopic subjects (i.e., refractive value in the less affected eye of −5 diopters), were included. Phenotypic examination included standard autorefractometry, ultrasonographic eye length measurement, and clinical confirmation of the non-syndromic character of the refractive disorder. Nine families were collected de novo, including 136 available members, of whom 34 were highly myopic. Twenty new subjects were added in 5 of the 17 remaining families. A total of 233 subjects were submitted to a genome scan using the ABI linkage mapping set LMSv2-MD-10; additional markers were used in all regions where preliminary LOD scores were greater than 1.5. Multipoint parametric and non-parametric analyses were conducted with the software packages Genehunter 2.0 and Merlin 1.0.1. Two autosomal recessive, two autosomal dominant, and four autosomal additive models were used in the parametric linkage analyses. No linkage was found using the subset of nine newly collected families. Study of the entire population of 26 families with a parametric model did not yield a significant LOD score (>3), even for the previously suggestive locus on 7q36. A non-parametric model demonstrated significant linkage to chromosome 7p15 in the entire population (Z-NPL = 4.07, p = 0.00002). The interval is 7.81 centiMorgans (cM), between markers D7S2458 and D7S2515. The significant interval reported here needs confirmation in other cohorts. Among possible susceptibility genes in the interval, certain candidates are likely to be involved in eye growth and development.
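    The parametric LOD score maximized in such scans can be sketched in its simplest two-point form. The phase-known, fully informative setting and the meiosis counts below are illustrative assumptions, not the study's data:

```python
import math

# Hypothetical sketch: a two-point parametric LOD score for phase-known,
# fully informative meioses (r recombinants out of n). Counts are invented.

def lod(n, r, theta):
    """log10 likelihood ratio of recombination fraction theta vs. 0.5."""
    like_theta = (theta ** r) * ((1 - theta) ** (n - r))
    like_null = 0.5 ** n
    return math.log10(like_theta / like_null)

# Scan theta to find the maximum LOD, as linkage software does numerically.
thetas = [i / 100 for i in range(1, 50)]
best = max(thetas, key=lambda t: lod(20, 2, t))
print(round(best, 2), round(lod(20, 2, best), 2))
```

    A maximum LOD above 3 is the conventional significance threshold referenced in the abstract; here the toy counts (2 recombinants in 20 meioses) peak at theta = r/n.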

  14. Streamflow hindcasting in European river basins via multi-parametric ensemble of the mesoscale hydrologic model (mHM)

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    There have been tremendous improvements in distributed hydrologic modeling (DHM), which have made process-based simulation with a high spatiotemporal resolution applicable on large spatial scales. Despite increasing information on the heterogeneous properties of a catchment, DHM is still subject to uncertainties inherent in model structure, parameters, and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, however, parametric uncertainty is often ignored, mainly due to practical limitations of methodology for specifying modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate the parametric uncertainty of DHM into DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated into DA. Lagged particle filtering is utilized to account for the response times and non-Gaussian characteristics of internal hydrologic processes. 
Hindcasting experiments are implemented to evaluate the impacts of the proposed DA method on streamflow predictions in multiple European river basins with different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach remains stable with limited ensemble members and is viable for practical use.
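    The core filtering idea can be sketched with a toy bootstrap particle filter in which each particle carries its own model parameter, so parametric uncertainty is assimilated jointly with the state. The linear-reservoir model, noise levels, and all numbers below are invented for illustration (the study itself uses mHM with lagged particle filtering):

```python
import math
import random

# Toy sketch: bootstrap particle filter over (state, parameter) pairs.
# Each particle carries its own recession parameter k, so the ensemble
# spans parametric as well as state uncertainty. All numbers invented.
random.seed(1)

def step(storage, k, rain):
    # linear-reservoir water balance: inflow minus recession outflow
    return storage + rain - k * storage

rains = [1.0, 0.0, 2.0, 0.5, 0.0, 0.0, 1.5, 0.0]

# synthetic "truth" with k = 0.3 and noisy discharge observations
s_true, obs = 10.0, []
for r in rains:
    s_true = step(s_true, 0.3, r)
    obs.append(0.3 * s_true + random.gauss(0, 0.05))

n = 500
particles = [(10.0, random.uniform(0.1, 0.6)) for _ in range(n)]  # (state, k)
for rain, y in zip(rains, obs):
    moved = [(step(s, k, rain), k) for s, k in particles]
    # Gaussian likelihood of observed discharge y given simulated k * state
    weights = [math.exp(-((k * s - y) ** 2) / (2 * 0.05 ** 2)) for s, k in moved]
    if sum(weights) == 0:
        particles = moved  # all weights underflowed; skip resampling (toy safeguard)
        continue
    particles = random.choices(moved, weights=weights, k=n)  # resample

k_est = sum(k for _, k in particles) / n
print(abs(k_est - 0.3) < 0.1)  # ensemble concentrates near the true parameter
```

    The study's lagged variant additionally weights particles against observations over a trailing window, to respect catchment response times; that refinement is omitted here.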

  15. A robust multi-kernel change detection framework for detecting leaf beetle defoliation using Landsat 7 ETM+ data

    NASA Astrophysics Data System (ADS)

    Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim

    2016-12-01

    A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used framework is to compute change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g., the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers that assume parametric models, e.g., a Gaussian function, for the class-conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e., its non-parametric and probabilistic nature, to model the class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically, instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with Support Vector Machine (SVM) and NB classifiers for detecting defoliation caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations in two test areas in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. It was observed that, owing to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions. Its performance was comparable to SVM, with the added advantages of being probabilistic and of handling multi-class problems naturally in its original formulation.

  16. Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech

    PubMed Central

    Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor

    2015-01-01

    In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach improves on a bi-level multi-classifier system known as stacked generalization by integrating an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset of the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using specific standard base classifiers and a total of 123 spectral, quality, and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and to the standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of acoustic parameters (the extended version of the Geneva Minimalistic Acoustic Parameter Set, eGeMAPS) and standard classifiers, employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking, and CSS stacking systems using the same parametrization in the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757

  17. Enhanced nonlinear interactions in quantum optomechanics via mechanical amplification

    PubMed Central

    Lemonde, Marc-Antoine; Didier, Nicolas; Clerk, Aashish A.

    2016-01-01

    The quantum nonlinear regime of optomechanics is reached when nonlinear effects of the radiation pressure interaction are observed at the single-photon level. This requires couplings larger than the mechanical frequency and cavity-damping rate, and is difficult to achieve experimentally. Here we show how to exponentially enhance the single-photon optomechanical coupling strength using only additional linear resources. Our method is based on using a large-amplitude, strongly detuned mechanical parametric drive to amplify mechanical zero-point fluctuations and hence enhance the radiation pressure interaction. It has the further benefit of allowing time-dependent control, enabling pulsed schemes. For a two-cavity optomechanical set-up, we show that our scheme generates photon blockade for experimentally accessible parameters, and even makes the production of photonic states with negative Wigner functions possible. We discuss how our method is an example of a more general strategy for enhancing boson-mediated two-particle interactions and nonlinearities. PMID:27108814

  18. A geometric modeler based on a dual-geometry representation polyhedra and rational b-splines

    NASA Technical Reports Server (NTRS)

    Klosterman, A. L.

    1984-01-01

    For speed and database reasons, solid geometric modeling of large, complex practical systems is usually approximated by a polyhedral representation. Precise parametric surface and implicit algebraic modelers are available, but it is not yet practical to model the same level of system complexity with these precise modelers. In response to this contrast, the GEOMOD geometric modeling system was built so that a polyhedral abstraction of the geometry would be available for interactive modeling without losing the precise definition of the geometry. Part of the reason polyhedral modelers are effective is that all bounded surfaces can be represented in a single canonical format (i.e., sets of planar polygons), which permits a very simple and compact data structure. Nonuniform rational B-splines are currently the best representation for describing a very large class of geometry precisely with one canonical format. The specific capabilities of the modeler are described.

  19. Bayesian Local Contamination Models for Multivariate Outliers

    PubMed Central

    Page, Garritt L.; Dunson, David B.

    2013-01-01

    In studies where data are generated from multiple locations or sources, it is common for there to exist observations that are quite unlike the majority. Motivated by the application of establishing a reference value in an inter-laboratory setting when outlying labs are present, we propose a local contamination model that is able to accommodate unusual multivariate realizations in a flexible way. The proposed method models the process level of a hierarchical model using a mixture with a parametric component and a possibly nonparametric contamination. Much of the flexibility in the methodology is achieved by allowing varying random subsets of the elements in the lab-specific mean vectors to be allocated to the contamination component. Computational methods are developed, and the methodology is compared to three other possible approaches in a simulation study. We apply the proposed method to a NIST/NOAA-sponsored inter-laboratory study which motivated the methodological development. PMID:24363465

  20. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images.

    PubMed

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L; Levin, Michael; Miller, Eric L

    2015-11-01

    Phase Contrast Microscopy (PCM) is an important tool for the long-term study of living cells. Unlike fluorescence methods, which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by natural variations in the optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images, where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both synthetic and real images validate and demonstrate the advantages of our approach.
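    In a parametric level set formulation of this kind, the level set function can be approximated as a weighted sum of radial basis functions, and the segmented region is wherever the sum is positive. A minimal sketch, with invented centers, weights, and offset:

```python
import math

# Hedged sketch of a parametric level set: phi is a weighted sum of
# radial basis functions minus an offset, and the zero level set of phi
# bounds the segmented region. All parameters below are invented.

def rbf(x, y, cx, cy, width=1.0):
    return math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / width ** 2)

# phi(x, y) = sum_j w_j * psi(|x - c_j|) - offset
centers = [(2.0, 2.0), (3.0, 2.5)]
weights = [1.0, 0.8]
offset = 0.5

def phi(x, y):
    return sum(w * rbf(x, y, cx, cy) for w, (cx, cy) in zip(weights, centers)) - offset

inside = phi(2.2, 2.1) > 0   # near a basis center -> inside the region
outside = phi(5.0, 5.0) > 0  # far away -> outside
print(inside, outside)
```

    Segmentation then amounts to optimizing the finitely many weights and centers against the image data, rather than evolving a level set function defined on the whole grid.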

  1. Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5

    DOE PAGES

    Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...

    2015-04-10

    We investigate the sensitivity of precipitation characteristics (mean, extreme, and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that, of the 22 parameters perturbed in the cloud ensemble, the six having the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter changes. The influence of individual parameters does not depend on the sampling approach or on the concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
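    Latin hypercube sampling, the first of the two designs mentioned, can be sketched as follows; the sample size and dimension count are illustrative, not the ensemble sizes above:

```python
import random

# Hypothetical sketch of Latin hypercube sampling: each of the d
# parameters is split into n equal-probability strata, and each stratum
# is used exactly once per parameter, with an independent random
# permutation per dimension.

def latin_hypercube(n, d, seed=0):
    rng = random.Random(seed)
    columns = []
    for _ in range(d):
        strata = list(range(n))
        rng.shuffle(strata)                        # permute strata per dim
        columns.append([(s + rng.random()) / n for s in strata])
    return [[columns[j][i] for j in range(d)] for i in range(n)]

# e.g., 10 samples over 3 normalized parameters in [0, 1)
design = latin_hypercube(10, 3)
# every dimension hits each of the 10 strata exactly once
for j in range(3):
    strata_hit = sorted(int(row[j] * 10) for row in design)
    print(strata_hit == list(range(10)))
```

    Compared with plain Monte Carlo, this guarantees that every one-dimensional marginal is covered evenly, which is why it needs far fewer runs to map a 22-parameter space.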

  2. Extracting the QCD ΛMS¯ parameter in Drell-Yan process using Collins-Soper-Sterman approach

    NASA Astrophysics Data System (ADS)

    Taghavi, R.; Mirjalili, A.

    2017-03-01

    In this work, we directly fit the QCD dimensional transmutation parameter, ΛMS¯, to experimental data for Drell-Yan (DY) observables. For this purpose, we first obtain the evolution of transverse momentum dependent parton distribution functions (TMDPDFs) up to the next-to-next-to-leading logarithm (NNLL) approximation based on the Collins-Soper-Sterman (CSS) formalism. As expected, the TMDPDFs extend to larger values of transverse momentum as the energy scale and the order of approximation increase. We then calculate the cross-section related to the TMDPDFs in the DY process. From a global fit to five sets of experimental data at different low center-of-mass energies and one set at high center-of-mass energy, using the CTEQ06 parametrization as our boundary condition, we obtain ΛMS¯ = 221 ± 7(stat) ± 54(theory) MeV, corresponding to the renormalized coupling constant αs(Mz2) = 0.117 ± 0.001(stat) ± 0.004(theory), which is within the acceptable range for this quantity. A goodness of fit of χ2/d.o.f = 1.34 shows that the results for the DY cross-section are in good agreement with the different experimental sets, comprising E288, E605, and R209 at low center-of-mass energies and D0 and CDF data at high center-of-mass energy. Repeating the calculations using HERAPDF parametrizations yields numerical values for the fitted parameters very close to those obtained with the CTEQ06 PDF set. This indicates that the results are sufficiently stable under variations of the boundary condition.

  3. 40 CFR 264.97 - General ground-water monitoring requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... paragraph (i) of this section. (1) A parametric analysis of variance (ANOVA) followed by multiple... mean levels for each constituent. (2) An analysis of variance (ANOVA) based on ranks followed by...

  4. Investigation on phase noise of the signal from a singly resonant optical parametric oscillator

    NASA Astrophysics Data System (ADS)

    Jinxia, Feng; Yuanji, Li; Kuanshou, Zhang

    2018-04-01

    The phase noise of the signal from a singly resonant optical parametric oscillator (SRO) is investigated theoretically and experimentally. An SRO based on periodically poled lithium niobate is built up that generates the signal with a maximum power of 5.2 W at 1.5 µm. The intensity noise of the signal reaches the shot noise level for frequencies above 5 MHz. The phase noise of the signal oscillates depending on the analysis frequency, and there are phase noise peaks above the shot noise level at the peak frequencies. To explain the phase noise feature of the signal, a semi-classical theoretical model of SROs including the guided acoustic wave Brillouin scattering effect within the nonlinear crystal is developed. The theoretical predictions are in good agreement with the experimental results.

  5. Multi-parametric analysis of phagocyte antimicrobial responses using imaging flow cytometry.

    PubMed

    Havixbeck, Jeffrey J; Wong, Michael E; More Bayona, Juan A; Barreda, Daniel R

    2015-08-01

    We feature a multi-parametric approach based on an imaging flow cytometry platform for examining phagocyte antimicrobial responses against the gram-negative bacterium Aeromonas veronii. This pathogen is known to induce strong inflammatory responses across a broad range of animal species, including humans. We examined the contribution of A. veronii to the induction of early phagocyte inflammatory processes in RAW 264.7 murine macrophages in vitro. We found that A. veronii, both in live or heat-killed forms, induced similar levels of macrophage activation based on NF-κB translocation. Although these macrophages maintained high levels of viability following heat-killed or live challenges with A. veronii, we identified inhibition of macrophage proliferation as early as 1h post in vitro challenge. The characterization of phagocytic responses showed a time-dependent increase in phagocytosis upon A. veronii challenge, which was paired with a robust induction of intracellular respiratory burst responses. Interestingly, despite the overall increase in the production of reactive oxygen species (ROS) among RAW 264.7 macrophages, we found a significant reduction in the production of ROS among the macrophage subset that had bound A. veronii. Phagocytic uptake of the pathogen further decreased ROS production levels, even beyond those of unstimulated controls. Overall, this multi-parametric imaging flow cytometry-based approach allowed for segregation of unique phagocyte sub-populations and examination of their downstream antimicrobial responses, and should contribute to improved understanding of phagocyte responses against Aeromonas and other pathogens. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Parametrizations of three-body hadronic B - and D -decay amplitudes in terms of analytic and unitary meson-meson form factors

    NASA Astrophysics Data System (ADS)

    Boito, D.; Dedonder, J.-P.; El-Bennich, B.; Escribano, R.; Kamiński, R.; Leśniak, L.; Loiseau, B.

    2017-12-01

    We introduce parametrizations of hadronic three-body B and D weak decay amplitudes that can be readily implemented in experimental analyses and are a sound alternative to the simplistic and widely used sum of Breit-Wigner type amplitudes, also known as the isobar model. These parametrizations can be particularly useful in the interpretation of CP asymmetries in Dalitz plots. They are derived from previous calculations based on a quasi-two-body factorization approach in which two-body hadronic final-state interactions are fully taken into account in terms of unitary S- and P-wave ππ, πK, and KK̄ form factors. These form factors can be determined rigorously, fulfilling fundamental properties of quantum field-theory amplitudes such as analyticity and unitarity, and are in agreement with the low-energy behavior predicted by effective theories of QCD. They are derived from sets of coupled-channel equations using T-matrix elements constrained by experimental meson-meson phase shifts and inelasticities, chiral symmetry, and asymptotic QCD. We provide explicit amplitude expressions for the decays B±→π+π-π±, B→Kπ+π-, B±→K+K-K±, D+→π-π+π+, D+→K-π+π+, and D0→KS0π+π-, for which we have shown in previous studies that this approach is phenomenologically successful; in addition, we provide expressions for the D0→KS0K+K- decay. Other three-body hadronic channels can be parametrized likewise.

  7. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate the performance of normality tests in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best-performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
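    The simulation logic can be sketched with a standard-library-only stand-in: the Shapiro-Wilk and D'Agostino-Pearson tests used in the study are not in Python's standard library, so a simple skewness/kurtosis (Jarque-Bera-type) statistic is substituted here to show how rejection rates are estimated for Gaussian versus lognormal samples at n = 30. Thresholds and trial counts are illustrative:

```python
import math
import random
import statistics

# Hedged stand-in for the study's normality tests: a Jarque-Bera-type
# skewness/kurtosis statistic, approximately chi2(2) under normality.

def jarque_bera(sample):
    n = len(sample)
    m = statistics.fmean(sample)
    s2 = sum((x - m) ** 2 for x in sample) / n
    skew = sum((x - m) ** 3 for x in sample) / n / s2 ** 1.5
    kurt = sum((x - m) ** 4 for x in sample) / n / s2 ** 2 - 3
    return n / 6 * (skew ** 2 + kurt ** 2 / 4)

rng = random.Random(42)
CRIT = 5.99  # chi2(2) critical value at the 5% level

def reject_rate(draw, n, trials=200):
    rejections = sum(jarque_bera([draw() for _ in range(n)]) > CRIT
                     for _ in range(trials))
    return rejections / trials

gauss_rate = reject_rate(lambda: rng.gauss(0, 1), 30)              # false alarms
lognorm_rate = reject_rate(lambda: math.exp(rng.gauss(0, 1)), 30)  # power
print(gauss_rate < 0.2, lognorm_rate > 0.5)
```

    The asymmetry between the two rates at n = 30 is the study's point: at small sample sizes, failing to reject normality is weak evidence that the parent population is actually Gaussian.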

  8. Lyapounov Functions of Closed Cone Fields: From Conley Theory to Time Functions

    NASA Astrophysics Data System (ADS)

    Bernard, Patrick; Suhr, Stefan

    2018-03-01

    We propose a theory "à la Conley" for cone fields, using a notion of relaxed orbits based on cone enlargements, in the spirit of space-time geometry. We work in the setting of closed (or, equivalently, semi-continuous) cone fields with singularities. This setting contains (for questions which are parametrization-independent, such as the existence of Lyapounov functions) the case of continuous vector fields on manifolds, of differential inclusions, of Lorentzian metrics, and of continuous cone fields. We generalize to this setting the equivalence between stable causality and the existence of temporal functions. We also generalize the equivalence between global hyperbolicity and the existence of a steep temporal function.

  9. NR/EPDM elastomeric rubber blend miscibility evaluation by two-level fractional factorial design of experiment

    NASA Astrophysics Data System (ADS)

    Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham

    2014-09-01

    A fractional 2⁵ two-level factorial design of experiment (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. A process model of rubber blend preparation was developed that correlates the mixer process input parameters with the output response of blend compatibility. Model analysis of variance (ANOVA) and model fitting through curve evaluation yielded an R² of 99.60%, with a proposed parametric combination of A = 30/70 NR/EPDM blend ratio, B = 70°C mixing temperature, C = 70 rpm rotor speed, D = 5 minutes mixing period, and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with an overall desirability of 0.966. Model validation with a small deviation of +2.09% confirmed the repeatability of the mixing strategy, with a valid maximum tensile strength output representing the blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides a brief insight into the use of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.
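    The construction of such a two-level fractional factorial design can be sketched as follows. The half-fraction generator E = ABCD is a common textbook choice and an assumption here, not necessarily the study's exact design:

```python
from itertools import product

# Hypothetical sketch: a full 2^4 design in factors A-D, with the fifth
# factor set by the generator E = ABCD, giving a 2^(5-1) half fraction
# (16 runs instead of 32). Factor names follow the abstract; levels are
# coded -1/+1.

runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d                # defining relation I = ABCDE
    runs.append({"A": a, "B": b, "C": c, "D": d, "E": e})

print(len(runs))                     # 16 runs
# every main effect is balanced: each factor sits at +1 in half the runs
print(all(sum(r[f] for r in runs) == 0 for f in "ABCDE"))
```

    The half fraction trades run count for aliasing: with I = ABCDE, each main effect is confounded only with a four-factor interaction, which is usually negligible in screening studies of this kind.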

  10. Deep learning for studies of galaxy morphology

    NASA Astrophysics Data System (ADS)

    Tuccillo, D.; Huertas-Company, M.; Decencière, E.; Velasco-Forero, S.

    2017-06-01

    Establishing accurate morphological measurements of galaxies in a reasonable amount of time for future big-data surveys such as EUCLID, the Large Synoptic Survey Telescope, or the Wide Field Infrared Survey Telescope is a challenge. Because of its high level of abstraction with little human intervention, deep learning appears to be a promising approach. Deep learning is a rapidly growing discipline that models high-level patterns in data as complex multilayered networks. In this work we test the ability of deep convolutional networks to provide parametric properties of Hubble Space Telescope-like galaxies (half-light radii, Sérsic indices, total flux, etc.). We simulate a set of galaxies, including the point spread function and realistic noise from the CANDELS survey, and try to recover the main galaxy parameters using deep learning. We compare the results with those obtained with the commonly used profile-fitting software GALFIT, showing that our method obtains results at least as good as those of GALFIT but, once trained, roughly a factor of five hundred faster.

  11. Paleoclassical transport explains electron transport barriers in RTP and TEXTOR

    NASA Astrophysics Data System (ADS)

    Hogeweij, G. M. D.; Callen, J. D.; RTP Team; TEXTOR Team

    2008-06-01

    The recently developed paleoclassical transport model sets the minimum level of electron thermal transport in a tokamak. This transport level has proven to be in good agreement with experimental observations in many cases when fluctuation-induced anomalous transport is small, i.e. in (near-)ohmic plasmas in small to medium size tokamaks, inside internal transport barriers (ITBs) or edge transport barriers (H-mode pedestal). In this paper predictions of the paleoclassical transport model are compared in detail with data from such kinds of discharges: ohmic discharges from the RTP tokamak, EC heated RTP discharges featuring both dynamic and shot-to-shot scans of the ECH power deposition radius and off-axis EC heated discharges from the TEXTOR tokamak. For ohmically heated RTP discharges the Te profiles predicted by the paleoclassical model are in reasonable agreement with the experimental observations, and various parametric dependences are captured satisfactorily. The electron thermal ITBs observed in steady state EC heated RTP discharges and transiently after switch-off of off-axis ECH in TEXTOR are predicted very well by the paleoclassical model.

  12. Parametrization of free ion levels of four isoelectronic 4f2 systems: Insights into configuration interaction parameters

    NASA Astrophysics Data System (ADS)

    Yeung, Yau Yuen; Tanner, Peter A.

    2013-12-01

    The experimental free-ion 4f² energy level data sets, comprising 12 or 13 J-multiplets of La⁺, Ce²⁺, Pr³⁺ and Nd⁴⁺, have been fitted by a semiempirical atomic Hamiltonian comprising 8, 10, or 12 freely-varying parameters. The root mean square errors were 16.1, 1.3, 0.3 and 0.3 cm⁻¹, respectively, for fits with 10 parameters. The fitted inter-electronic repulsion and magnetic parameters vary linearly with ionic charge, i, but better linear fits are obtained with (4−i)², although the reason is unclear at present. The two-body configuration interaction parameters α and β exhibit a linear relation with [ΔE(bc)]⁻¹, where ΔE(bc) is the energy difference between the 4f² barycentre and that of the interacting configuration, namely 4f6p for La⁺, Ce²⁺, and Pr³⁺, and 5p⁵4f³ for Nd⁴⁺. The linear fit provides the rationale for the negative value of α for the case of La⁺, where the interacting configuration is located below 4f².

  13. Superpixel Cut for Figure-Ground Image Segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Michael Ying; Rosenhahn, Bodo

    2016-06-01

    Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as a min-cut with a novel cost function that simultaneously minimizes the inter-class similarity and maximizes the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, requiring no high-level knowledge such as shape priors or scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.
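
    The inter-class vs. intra-class trade-off can be made concrete on a toy affinity graph. The matrix, the lambda weighting, and the brute-force search below are all invented for illustration; the paper optimizes an analogous objective with parametric programming over superpixel graphs:

```python
from itertools import combinations

# Invented symmetric affinity matrix between five "superpixels":
# nodes 0-2 are strongly similar to each other, as are nodes 3-4.
W = [[0, 9, 8, 1, 1],
     [9, 0, 7, 1, 1],
     [8, 7, 0, 1, 2],
     [1, 1, 1, 0, 9],
     [1, 1, 2, 9, 0]]
n = len(W)

def cost(fg, lam=0.5):
    """Inter-class similarity (cut weight) minus lam times the
    intra-class similarity of the foreground subset."""
    bg = [i for i in range(n) if i not in fg]
    inter = sum(W[i][j] for i in fg for j in bg)
    intra = sum(W[i][j] for i in fg for j in fg if i < j)
    return inter - lam * intra

# Brute force over all proper non-empty foreground subsets; the tightly
# connected group {0, 1, 2} minimizes the cost.
subsets = (frozenset(c) for k in range(1, n) for c in combinations(range(n), k))
best = min(subsets, key=cost)
```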

  14. Acoustic Characteristics of a Model Isolated Tiltrotor in DNW

    NASA Technical Reports Server (NTRS)

    Booth, Earl R., Jr.; McCluer, Megan; Tadghighi, Hormoz

    1999-01-01

    An aeroacoustic wind tunnel test was conducted using a scaled isolated tiltrotor model. Acoustic data were acquired using an in-flow microphone wing traversed beneath the model to map the directivity of the near-field acoustic radiation of the rotor for a parametric variation of rotor angle-of-attack, tunnel speed, and rotor thrust. Acoustic metric data were examined to show trends of impulsive noise for the parametric variations. BVISPL maximum noise levels were found to increase with alpha for constant mu and C(sub T), although the maximum BVI levels were found at much higher alpha than for a typical helicopter. BVISPL levels were found to increase with mu for constant alpha and C(sub T). BVISPL was found to decrease with increasing C(sub T) for constant alpha and mu, although BVISPL increased with thrust for a constant wake geometry. Metric data were also scaled for M(sub up) to evaluate how well simple power law scaling could be used to correct metric data for M(sub up) effects.

  15. A statistical approach to bioclimatic trend detection in the airborne pollen records of Catalonia (NE Spain)

    NASA Astrophysics Data System (ADS)

    Fernández-Llamazares, Álvaro; Belmonte, Jordina; Delgado, Rosario; De Linares, Concepción

    2014-04-01

    Airborne pollen records are a suitable indicator for the study of climate change. The present work focuses on the role of annual pollen indices in the detection of bioclimatic trends through the analysis of the aerobiological spectra of 11 taxa of great biogeographical relevance in Catalonia over an 18-year period (1994-2011), by means of different parametric and non-parametric statistical methods. Among others, two non-parametric rank-based statistical tests were performed for detecting monotonic trends in time series data of the selected airborne pollen types, and we have observed that they have similar power in detecting trends. Except for those cases in which the pollen data can be well modeled by a normal distribution, it is better to apply non-parametric statistical methods to aerobiological studies. Our results provide a reliable representation of the pollen trends in the region and suggest that greater pollen quantities are being released into the atmosphere in recent years, especially by Mediterranean taxa such as Pinus, Total Quercus and Evergreen Quercus, although the trends may differ geographically. Longer aerobiological monitoring periods are required to corroborate these results and to survey the increasing levels of certain pollen types that could have an impact on public health.
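
    The abstract does not name the two rank-based tests; the Mann-Kendall test is the standard non-parametric choice for monotonic trends in annual indices, and a minimal sketch (no-ties variance formula, normal approximation) might look like:

```python
import math
from itertools import combinations

def mann_kendall(x):
    """Mann-Kendall trend test (no-ties variance, normal approximation).
    Returns the S statistic, the Z score, and a two-sided p-value."""
    n = len(x)
    # S counts concordant minus discordant pairs in time order.
    s = sum((xj > xi) - (xj < xi) for xi, xj in combinations(x, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return s, z, p

# A strictly increasing 18-value "annual pollen index" shows a clear trend.
s, z, p = mann_kendall(list(range(1994, 2012)))
```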

  16. Parametric and non-parametric modeling of short-term synaptic plasticity. Part I: computational study

    PubMed Central

    Marmarelis, Vasilis Z.; Berger, Theodore W.

    2009-01-01

    Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform presynaptic signals into postsynaptic signals. In order to use the two approaches synergistically, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of the STP. PMID:18506609
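
    On the non-parametric side, a discrete second-order Volterra model is linear in its kernel values, so the kernels can be recovered from broadband input-output data by ordinary least squares. The system, memory length, and signals below are synthetic stand-ins, not the paper's synapse models:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 3  # kernel memory: lags 0..M-1

# Invented "true" second-order Volterra system driven by white noise.
k1_true = np.array([1.0, -0.5, 0.25])
k2_true = np.array([[0.2, 0.1, 0.00],
                    [0.1, -0.3, 0.05],
                    [0.0, 0.05, 0.10]])

x = rng.standard_normal(2000)

def volterra_output(x, k1, k2):
    """y(t) = sum k1(tau) x(t-tau) + sum k2(tau1,tau2) x(t-tau1) x(t-tau2)."""
    n, m = len(x), len(k1)
    y = np.zeros(n)
    for t in range(m - 1, n):
        u = x[t - m + 1:t + 1][::-1]   # u[tau] = x[t - tau]
        y[t] = k1 @ u + u @ k2 @ u
    return y

y = volterra_output(x, k1_true, k2_true)

# The output is linear in the kernel values, so estimate them by OLS
# from lagged inputs and their unique pairwise products.
iu = np.triu_indices(M)
rows = []
for t in range(M - 1, len(x)):
    u = x[t - M + 1:t + 1][::-1]
    rows.append(np.concatenate([u, np.outer(u, u)[iu]]))
theta, *_ = np.linalg.lstsq(np.asarray(rows), y[M - 1:], rcond=None)
k1_est = theta[:M]   # first-order kernel; theta[M:] holds 2nd-order terms
```

    With noiseless data the regression is exactly identifiable, so the first-order kernel is recovered to machine precision.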

  17. The chi-square test of independence.

    PubMed

    McHugh, Mary L

    2013-01-01

    The Chi-square statistic is a non-parametric (distribution free) tool designed to analyze group differences when the dependent variable is measured at a nominal level. Like all non-parametric statistics, the Chi-square is robust with respect to the distribution of the data. Specifically, it does not require equality of variances among the study groups or homoscedasticity in the data. It permits evaluation of both dichotomous independent variables, and of multiple group studies. Unlike many other non-parametric and some parametric statistics, the calculations needed to compute the Chi-square provide considerable information about how each of the groups performed in the study. This richness of detail allows the researcher to understand the results and thus to derive more detailed information from this statistic than from many others. The Chi-square is a significance statistic, and should be followed with a strength statistic. The Cramer's V is the most common strength test used to test the data when a significant Chi-square result has been obtained. Advantages of the Chi-square include its robustness with respect to distribution of the data, its ease of computation, the detailed information that can be derived from the test, its use in studies for which parametric assumptions cannot be met, and its flexibility in handling data from both two group and multiple group studies. Limitations include its sample size requirements, difficulty of interpretation when there are large numbers of categories (20 or more) in the independent or dependent variables, and the tendency of the Cramer's V to produce relatively low correlation measures, even for highly significant results.
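
    A minimal sketch of the statistic and its Cramer's V follow-up (pure Python; the contingency table is invented):

```python
import math

def chi_square_and_cramers_v(table):
    """Pearson chi-square statistic and Cramer's V for an r x c
    contingency table given as a list of rows of observed counts."""
    r, c = len(table), len(table[0])
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(r)) for j in range(c)]
    n = sum(row_tot)
    chi2 = 0.0
    for i in range(r):
        for j in range(c):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    # Cramer's V normalizes chi-square to [0, 1] as a strength measure.
    v = math.sqrt(chi2 / (n * (min(r, c) - 1)))
    return chi2, v

# Invented 2x2 example: group membership vs. a nominal outcome.
chi2, v = chi_square_and_cramers_v([[10, 20], [20, 10]])
```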

  18. Some advanced parametric methods for assessing waveform distortion in a smart grid with renewable generation

    NASA Astrophysics Data System (ADS)

    Alfieri, Luisa

    2015-12-01

    Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which lead to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods to be used. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
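
    As a sketch of the parametric family, the classical Prony method fits a linear-prediction model and reads frequencies off the roots of its characteristic polynomial. The test signal, model order, and sampling rate below are invented; ESPRIT and the hybrid methods compared in the paper are more elaborate:

```python
import numpy as np

def prony_frequencies(x, p, fs):
    """Classical Prony sketch: fit an order-p linear predictor by least
    squares, then take the root angles as sinusoid frequencies (Hz)."""
    N = len(x)
    # x[n] = c1*x[n-1] + ... + cp*x[n-p], solved in the least-squares sense.
    A = np.array([[x[n - k] for k in range(1, p + 1)] for n in range(p, N)])
    c, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    # Roots of z^p - c1*z^(p-1) - ... - cp give the signal poles.
    roots = np.roots(np.concatenate(([1.0], -c)))
    return np.angle(roots) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(200) / fs
x = np.cos(2 * np.pi * 50 * t)          # a clean 50 Hz tone
freqs = prony_frequencies(x, 2, fs)     # poles at +/- 50 Hz
```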

  19. A Multi-Wavelength IR Laser for Space Applications

    NASA Technical Reports Server (NTRS)

    Li, Steven X.; Yu, Anthony W.; Sun, Xiaoli; Fahey, Molly E.; Numata, Kenji; Krainak, Michael A.

    2017-01-01

    We present a laser technology development with space flight heritage to generate laser wavelengths in the near- to mid-infrared (NIR to MIR) for space lidar applications. Integrating an optical parametric crystal into the LOLA (Lunar Orbiter Laser Altimeter) laser transmitter design affords selective laser wavelengths from NIR to MIR that are not easily obtainable from traditional diode-pumped solid-state lasers. By replacing the output coupler of the LOLA laser with a properly designed parametric crystal, we successfully demonstrated a monolithic intra-cavity optical parametric oscillator (iOPO) laser based on all high technology readiness level (TRL) subsystems and components. Several desired wavelengths have been generated, including 2.1 microns, 2.7 microns and 3.4 microns. This laser can also be used in trace-gas remote sensing, as many molecules possess their unique vibrational transitions in the NIR to MIR wavelength region, as well as in time-of-flight mass spectrometry, where desorption of samples using MIR laser wavelengths has been successfully demonstrated.
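
    The signal-idler pairing in an OPO follows from photon-energy conservation, 1/lambda_pump = 1/lambda_signal + 1/lambda_idler. A quick numerical check, assuming a 1.064 micron pump as in Nd:YAG-based transmitters such as LOLA's:

```python
def idler_wavelength_um(pump_um, signal_um):
    """Idler wavelength from photon-energy conservation,
    1/lambda_p = 1/lambda_s + 1/lambda_i (wavelengths in micrometres)."""
    return 1.0 / (1.0 / pump_um - 1.0 / signal_um)

# At degeneracy, signal and idler coincide at twice the pump wavelength.
degenerate = idler_wavelength_um(1.064, 2.128)   # ~2.128 um
near = idler_wavelength_um(1.064, 2.1)           # ~2.157 um: near-degenerate pair
```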

  20. Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.

    2000-01-01

    The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.

  1. A multi-wavelength IR laser for space applications

    NASA Astrophysics Data System (ADS)

    Li, Steven X.; Yu, Anthony W.; Sun, Xiaoli; Fahey, Molly E.; Numata, Kenji; Krainak, Michael A.

    2017-05-01

    We present a laser technology development with space flight heritage to generate laser wavelengths in the near- to mid-infrared (NIR to MIR) for space lidar applications. Integrating an optical parametric crystal into the LOLA (Lunar Orbiter Laser Altimeter) laser transmitter design affords selective laser wavelengths from NIR to MIR that are not easily obtainable from traditional diode-pumped solid-state lasers. By replacing the output coupler of the LOLA laser with a properly designed parametric crystal, we successfully demonstrated a monolithic intra-cavity optical parametric oscillator (iOPO) laser based on all high technology readiness level (TRL) subsystems and components. Several desired wavelengths have been generated, including 2.1 μm, 2.7 μm and 3.4 μm. This laser can also be used in trace-gas remote sensing, as many molecules possess their unique vibrational transitions in the NIR to MIR wavelength region, as well as in time-of-flight mass spectrometry, where desorption of samples using MIR laser wavelengths has been successfully demonstrated.

  2. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.
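
    The scale- and rotation-invariance can be sketched by expressing a pattern point in the frame spanned by the two reference points; treating 2-D points as complex numbers makes this a one-liner. This is a generic construction under our own assumptions, not necessarily the inventor's exact parametrization:

```python
import cmath

def invariant_coords(p, r1, r2):
    """Coordinates of point p in the frame defined by reference points
    r1, r2 (points as complex numbers). Unchanged under any similarity
    transform: translation, rotation, and uniform scaling."""
    return (p - r1) / (r2 - r1)

def transform(z, scale=2.5, angle=0.7, shift=3 - 4j):
    """An arbitrary similarity transform of the plane."""
    return scale * cmath.exp(1j * angle) * z + shift

# The invariant coordinates survive the transform, so a parameter table
# built from them is pose-invariant.
p, r1, r2 = 1 + 2j, 0 + 0j, 4 + 1j
before = invariant_coords(p, r1, r2)
after = invariant_coords(transform(p), transform(r1), transform(r2))
```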

  3. The nonlinear instability in flap-lag of rotor blades in forward flight

    NASA Technical Reports Server (NTRS)

    Tong, P.

    1971-01-01

    The nonlinear flap-lag coupled oscillation of torsionally rigid rotor blades in forward flight is examined using a set of consistently derived equations by the asymptotic expansion procedure of multiple time scales. The regions of stability and limit cycle oscillation are presented. The roles of parametric excitation, nonlinear oscillation, and forced excitation played in the response of the blade are determined.

  4. 3D Facial Pattern Analysis for Autism

    DTIC Science & Technology

    2010-07-01

    each individual's data were scaled by the geometric mean of all possible linear distances between landmarks, following ... The first two principal ... over traditional template matching in that it can represent geometrical and non-geometrical changes of an object in the parametric template space ... set of vertex templates can be generated from the root template by geometric or non-geometric transformation. Let t1, ..., tM be M normalized vertex ...

  5. Stability and Existence Results for Quasimonotone Quasivariational Inequalities in Finite Dimensional Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellani, Marco; Giuli, Massimiliano, E-mail: massimiliano.giuli@univaq.it

    2016-02-15

    We study pseudomonotone and quasimonotone quasivariational inequalities in a finite dimensional space. In particular we focus our attention on the closedness of some solution maps associated to a parametric quasivariational inequality. From this study we derive two results on the existence of solutions of the quasivariational inequality. On the one hand, assuming the pseudomonotonicity of the operator, we get the nonemptiness of the set of the classical solutions. On the other hand, we show that the quasimonotonicity of the operator implies the nonemptiness of the set of nonzero solutions. An application to a traffic network is also considered.

  6. [Index Copernicus: The Central and Eastern European Journals Ranking System. Why indexing needed in the region?] .

    PubMed

    Graczynski, M R

    2000-09-10

    Index Copernicus is a ranking system set up by members of the medical community in the region. Five groups of parameters were created, covering scientific, editorial and technical quality, circulation, and frequency/market stability, which allow the ranking to be generated. The authors of the ranking system are aware of the deficiencies of parametric analysis of science; however, they believe the numbers at least set clear, objective and fair rules for all. Index Copernicus can thus be said to meet the primary objectives for which it was created.

  7. Quantum Hamiltonian identification from measurement time traces.

    PubMed

    Zhang, Jun; Sarovar, Mohan

    2014-08-22

    Precise identification of parameters governing quantum processes is a critical task for quantum information and communication technologies. In this Letter, we consider a setting where system evolution is determined by a parametrized Hamiltonian, and the task is to estimate these parameters from temporal records of a restricted set of system observables (time traces). Based on the notion of system realization from linear systems theory, we develop a constructive algorithm that provides estimates of the unknown parameters directly from these time traces. We illustrate the algorithm and its robustness to measurement noise by applying it to a one-dimensional spin chain model with variable couplings.

  8. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|.
The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
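
    The skill scores quoted, 1-NSE and |RB|, build on the standard Nash-Sutcliffe efficiency and relative bias; a sketch with invented data (CREST's exact conventions may differ):

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    err = sum((s - o) ** 2 for s, o in zip(sim, obs))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def relative_bias(sim, obs):
    """Relative volume bias; |RB| is its magnitude."""
    return (sum(sim) - sum(obs)) / sum(obs)

# Invented daily-streamflow snippets (same units for sim and obs).
obs = [1.0, 3.0, 2.0, 5.0, 4.0]
sim = [1.1, 2.8, 2.2, 4.9, 4.0]
```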

  9. High-power parametric amplification of 11.8-fs laser pulses with carrier-envelope phase control.

    PubMed

    Zinkstok, R Th; Witte, S; Hogervorst, W; Eikema, K S E

    2005-01-01

    Phase-stable parametric chirped-pulse amplification of ultrashort pulses from a carrier-envelope phase-stabilized mode-locked Ti:sapphire oscillator (11.0 fs) to 0.25 mJ/pulse at 1 kHz is demonstrated. Compression with a grating compressor and an LCD shaper yields near-Fourier-limited 11.8-fs pulses with an energy of 0.12 mJ. The amplifier is pumped by 532-nm pulses from a synchronized mode-locked laser-Nd:YAG amplifier system. This approach is shown to be promising for the next generation of ultrafast amplifiers aimed at producing terawatt-level phase-controlled few-cycle laser pulses.

  10. Estimating piecewise exponential frailty model with changing prior for baseline hazard function

    NASA Astrophysics Data System (ADS)

    Thamrin, Sri Astuti; Lawi, Armin

    2016-02-01

    Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates on survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of a parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, not all such variables are known or measurable, and this unaccounted-for variation becomes interesting to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity on patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice of two different priors.
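
    The tractability of the piecewise exponential baseline comes from the hazard integrating in closed form; a sketch of the survival function for a piecewise-constant hazard (cut points and rates invented):

```python
import math

def survival(t, cuts, hazards):
    """S(t) = exp(-H(t)) for a piecewise-constant hazard.
    cuts: interval boundaries [t1, t2, ...]; hazards: one rate per
    interval, so len(hazards) == len(cuts) + 1."""
    cum, prev = 0.0, 0.0
    for c, h in zip(cuts, hazards):
        if t <= c:
            return math.exp(-(cum + h * (t - prev)))
        cum += h * (c - prev)   # accumulate the integrated hazard
        prev = c
    return math.exp(-(cum + hazards[-1] * (t - prev)))
```

    With a single interval this reduces to the ordinary exponential model; adding cut points lets the baseline hazard take nearly any shape.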

  11. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
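
    The two estimators of the SE of the INB can be sketched side by side; the per-patient net benefits below are invented lognormal draws mimicking skewed cost data:

```python
import random
import statistics

random.seed(1)

# Invented per-patient net benefits for two trial arms; lognormal draws
# mimic the right-skewed costs typical of trial-based CEA data.
arm_a = [random.lognormvariate(0.0, 1.0) for _ in range(200)]
arm_b = [random.lognormvariate(0.2, 1.0) for _ in range(200)]

inb = statistics.mean(arm_b) - statistics.mean(arm_a)

# CLT: the SE of a difference of independent means adds in quadrature.
se_clt = (statistics.variance(arm_a) / len(arm_a)
          + statistics.variance(arm_b) / len(arm_b)) ** 0.5

# Non-parametric bootstrap: resample each arm with replacement.
boot = []
for _ in range(2000):
    ra = random.choices(arm_a, k=len(arm_a))
    rb = random.choices(arm_b, k=len(arm_b))
    boot.append(statistics.mean(rb) - statistics.mean(ra))
se_boot = statistics.stdev(boot)
```

    For moderate samples the two SEs agree closely, mirroring the paper's finding that the CLT is the simpler of two comparable options.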

  12. An Interactive Software for Conceptual Wing Flutter Analysis and Parametric Study

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1996-01-01

    An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well-defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed for Macintosh or IBM compatible personal computers, on MathCad application software with integrated documentation, graphics, data base and symbolic mathematics. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The parametric plots were compiled in a Vought Corporation report from a vast data base of past experiments and wind-tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended-Wing-Body concept, proposed by McDonnell Douglas Corp. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.

  13. Parametric study of the swimming performance of a fish robot propelled by a flexible caudal fin.

    PubMed

    Low, K H; Chong, C W

    2010-12-01

    In this paper, we aim to study the swimming performance of fish robots by using a statistical approach. A fish robot employing a carangiform swimming mode had been used as an experimental platform for the performance study. The experiments conducted aim to investigate the effect of various design parameters on the thrust capability of the fish robot with a flexible caudal fin. The controllable parameters associated with the fin include frequency, amplitude of oscillation, aspect ratio and the rigidity of the caudal fin. The significance of these parameters was determined in the first set of experiments by using a statistical approach. A more detailed parametric experimental study was then conducted with only those significant parameters. As a result, the parametric study could be completed with a reduced number of experiments and time spent. With the obtained experimental result, we were able to understand the relationship between various parameters and a possible adjustment of parameters to obtain a higher thrust. The proposed statistical method for experimentation provides an objective and thorough analysis of the effects of individual or combinations of parameters on the swimming performance. Such an efficient experimental design helps to optimize the process and determine factors that influence variability.

  14. [The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].

    PubMed

    Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel

    2017-01-01

    The statistical analysis can be divided in two main components: descriptive analysis and inferential analysis. An inference is to elaborate conclusions from the tests performed with the data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test in general poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.

  15. Demonstration of universal parametric entangling gates on a multi-qubit lattice

    PubMed Central

    Reagor, Matthew; Osborn, Christopher B.; Tezak, Nikolas; Staley, Alexa; Prawiroatmodjo, Guenevere; Scheer, Michael; Alidoust, Nasser; Sete, Eyob A.; Didier, Nicolas; da Silva, Marcus P.; Acala, Ezer; Angeles, Joel; Bestwick, Andrew; Block, Maxwell; Bloom, Benjamin; Bradley, Adam; Bui, Catvu; Caldwell, Shane; Capelluto, Lauren; Chilcott, Rick; Cordova, Jeff; Crossman, Genya; Curtis, Michael; Deshpande, Saniya; El Bouayadi, Tristan; Girshovich, Daniel; Hong, Sabrina; Hudson, Alex; Karalekas, Peter; Kuang, Kat; Lenihan, Michael; Manenti, Riccardo; Manning, Thomas; Marshall, Jayss; Mohan, Yuvraj; O’Brien, William; Otterbach, Johannes; Papageorge, Alexander; Paquette, Jean-Philip; Pelstring, Michael; Polloreno, Anthony; Rawat, Vijay; Ryan, Colm A.; Renzas, Russ; Rubin, Nick; Russel, Damon; Rust, Michael; Scarabelli, Diego; Selvanayagam, Michael; Sinclair, Rodney; Smith, Robert; Suska, Mark; To, Ting-Wai; Vahidpour, Mehrnoosh; Vodrahalli, Nagesh; Whyland, Tyler; Yadav, Kamal; Zeng, William; Rigetti, Chad T.

    2018-01-01

    We show that parametric coupling techniques can be used to generate selective entangling interactions for multi-qubit processors. By inducing coherent population exchange between adjacent qubits under frequency modulation, we implement a universal gate set for a linear array of four superconducting qubits. An average process fidelity of ℱ = 93% is estimated for three two-qubit gates via quantum process tomography. We establish the suitability of these techniques for computation by preparing a four-qubit maximally entangled state and comparing the estimated state fidelity with the expected performance of the individual entangling gates. In addition, we prepare an eight-qubit register in all possible bitstring permutations and monitor the fidelity of a two-qubit gate across one pair of these qubits. Across all these permutations, an average fidelity of ℱ = 91.6 ± 2.6% is observed. These results thus offer a path to a scalable architecture with high selectivity and low cross-talk. PMID:29423443

  16. Application of relativistic mean field and effective field theory densities to scattering observables for Ca isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhuyan, M.; School of Physics, Sambalpur University, Jyotivihar, Burla 768 019; Panda, R. N.

    In the framework of relativistic mean field (RMF) theory, we have calculated the density distribution of protons and neutrons for {sup 40,42,44,48}Ca with NL3 and G2 parameter sets. The microscopic proton-nucleus optical potentials for p+{sup 40,42,44,48}Ca systems are evaluated from the Dirac nucleon-nucleon scattering amplitude and the density of the target nucleus using relativistic-Love-Franey and McNeil-Ray-Wallace parametrizations. We have estimated the scattering observables, such as the elastic differential scattering cross section, analyzing power and the spin observables with the relativistic impulse approximation (RIA). The results have been compared with the experimental data for a few selective cases and we find that the use of the density as well as the scattering-matrix parametrizations is crucial for the theoretical prediction.

  17. Conceptual design of sub-exa-watt system by using optical parametric chirped pulse amplification

    NASA Astrophysics Data System (ADS)

    Kawanaka, J.; Tsubakimoto, K.; Yoshida, H.; Fujioka, K.; Fujimoto, Y.; Tokita, S.; Jitsuno, T.; Miyanaga, N.; Gekko-EXA Design Team

    2016-03-01

    A 50 PW ultrahigh-peak-power laser has been conceptually designed, based on optical parametric chirped pulse amplification (OPCPA). A 250 J DPSSL and a flash-lamp-pumped kJ laser are adopted as new repeatable pump sources. The existing LFEX laser, delivering more than ten kilojoules, is used in the final amplifier stage, and an OPCPA with 2x2 tiled pump beams in random phase and an aperture of several tens of centimeters has been proposed. The pulse duration of the amplified pulses is set at less than 10 fs, so a broadband OPCPA with a gain spectral width of about 500 nm near 1 μm is required. A partially deuterated KDP (p-DKDP) crystal is one of the most promising nonlinear crystals, and our numerical calculation confirmed such an ultra-broad gain width. p-DKDP crystals with several deuteration ratios have been successfully grown.

  18. Test data from small solid propellant rocket motor plume measurements (FA-21)

    NASA Technical Reports Server (NTRS)

    Hair, L. M.; Somers, R. E.

    1976-01-01

    A program is described for obtaining a reliable, parametric set of measurements in the exhaust plumes of solid propellant rocket motors. Plume measurements included pressures, temperatures, forces, heat transfer rates, particle sampling, and high-speed movies. Approximately 210,000 digital data points and 15,000 movie frames were acquired. Measurements were made at points in the plumes via rake-mounted probes, and on the surface of a large plate impinged by the exhaust plume. Parametric variations were made in pressure altitude, propellant aluminum loading, impinged plate incidence angle and distance from nozzle exit to plate or rake. Reliability was incorporated by continual use of repeat runs. The test setup of the various hardware items is described along with an account of test procedures. Test results and data accuracy are discussed. Format of the data presentation is detailed. Complete data are included in the appendix.

  19. Parametrization in models of subcritical glass fracture: Activation offset and concerted activation

    NASA Astrophysics Data System (ADS)

    Rodrigues, Bruno Poletto; Hühn, Carolin; Erlebach, Andreas; Mey, Dorothea; Sierka, Marek; Wondraczek, Lothar

    2017-08-01

    There are two established but fundamentally different empirical approaches to parametrizing the rate of subcritical fracture in brittle materials. While both rely on a thermally activated reaction of bond rupture, the difference lies in how the externally applied stresses affect the local energy landscape. In the consideration of inorganic glasses, the strain energy is typically taken as an offset on the activation barrier. As an alternative interpretation, the system's volumetric strain energy is added to its thermal energy. Such an interpretation is consistent with the democratic fiber bundle model. Here, we test this approach of concerted activation against macroscopic data of bond cleavage activation energy, and also against ab initio quantum chemical simulation of the energy barrier for cracking in silica. The fact that both models are able to reproduce experimental observation to a remarkable degree highlights the importance of a holistic consideration towards a non-empirical understanding.

  20. The Advantages of Parametric Modeling for the Reconstruction of Historic Buildings: The Example of the War-Destroyed Church of St. Catherine (Katharinenkirche) in Nuremberg

    NASA Astrophysics Data System (ADS)

    Ludwig, M.; Herbst, G.; Rieke-Zapp, D.; Rosenbauer, R.; Rutishauser, S.; Zellweger, A.

    2013-02-01

    Consecrated in 1297 as the monastery church of St. Catherine's monastery, founded four years earlier, the Gothic Church of St. Catherine was largely destroyed in a devastating bombing raid on January 2nd, 1945. To counteract the process of disintegration, the departments of geo-information and the lower monument protection authority of the City of Nuremberg decided to commission a three-dimensional building model of the Church of St. Catherine. A heterogeneous set of data was used for the preparation of a parametric architectural model. Indeed, the modeling of historic buildings can profit from the so-called BIM method (Building Information Modeling), as the necessary structuring of the basic data renders it into very sustainable information. The resulting model is perfectly suited to deliver a vivid impression of the interior and exterior of this former mendicant order's church to present-day observers.

  1. Model-based spectral estimation of Doppler signals using parallel genetic algorithms.

    PubMed

    Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F

    2000-05-01

    Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach which implements a parametric spectral estimation method in real time, using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity associated with GAs and their parallel characteristics. This will allow the implementation of higher-order filters, increasing the spectral resolution, and opening a greater scope for using more complex methods.
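    As an illustrative sketch of the idea (not the authors' implementation; the signal, filter order and GA settings below are invented), a genetic algorithm can fit the coefficients of a low-order autoregressive (AR) model by minimizing the one-step prediction error:

    ```python
    import random

    random.seed(0)

    # Synthetic AR(2) process: x[n] = a1*x[n-1] + a2*x[n-2] + noise
    a_true = (1.3, -0.6)
    x = [0.0, 0.0]
    for _ in range(500):
        x.append(a_true[0] * x[-1] + a_true[1] * x[-2] + random.gauss(0.0, 0.1))

    def prediction_error(a):
        # Sum of squared one-step prediction errors of the AR(2) model
        return sum((x[n] - a[0] * x[n - 1] - a[1] * x[n - 2]) ** 2
                   for n in range(2, len(x)))

    # Minimal generational GA: elitism, blend crossover, Gaussian mutation
    pop = [(random.uniform(-2, 2), random.uniform(-1, 1)) for _ in range(40)]
    for _ in range(60):
        pop.sort(key=prediction_error)
        elite = pop[:10]                      # keep the 10 fittest
        children = []
        while len(children) < 30:
            p, q = random.sample(elite, 2)    # parents drawn from the elite
            w = random.random()
            children.append(tuple(w * pi + (1 - w) * qi + random.gauss(0.0, 0.05)
                                  for pi, qi in zip(p, q)))
        pop = elite + children

    best = min(pop, key=prediction_error)
    ```

    The fitted coefficients then define the parametric spectrum, P(w) proportional to 1/|1 - a1*exp(-jw) - a2*exp(-2jw)|^2; the paper's contribution lies in parallelizing this search, which the sketch does not attempt.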

  2. Compact, flexible, frequency agile parametric wavelength converter

    DOEpatents

    Velsko, Stephan P.; Yang, Steven T.

    2002-01-01

    This improved Frequency Agile Optical Parametric Oscillator provides near on-axis pumping of a single QPMC with a tilted periodically poled grating to overcome the necessity to find a particular crystal that will permit collinear birefringence in order to obtain a desired tuning range. A tilted grating design and the elongation of the transverse profile of the pump beam in the angle tuning plane of the FA-OPO reduces the rate of change of the overlap between the pumped volume in the crystal and the resonated and non-resonated wave mode volumes as the pump beam angle is changed. A folded mirror set relays the pivot point for beam steering from a beam deflector to the center of the FA-OPO crystal. This reduces the footprint of the device by as much as a factor of two over that obtained when using the refractive telescope design.

  3. A modified Leslie-Gower predator-prey interaction model and parameter identifiability

    NASA Astrophysics Data System (ADS)

    Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed

    2018-01-01

    In this work, bifurcation and a systematic approach for the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is established by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true, noise-free data. It is found that the system dynamics with the true set of parametric values is similar to that with the estimated values. Numerical simulations are presented to substantiate the analytical findings.
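    The qualitative setup can be sketched numerically. The toy integration below (all parameter values are invented for illustration and are not those identified in the paper) combines a Crowley-Martin functional response with a modified Leslie-Gower predator growth term and integrates the system with a classical RK4 scheme:

    ```python
    def crowley_martin(x, y, c=0.8, a=0.5, b=0.5):
        # Crowley-Martin functional response: feeding rate saturates in prey
        # density and is reduced by interference among predators
        return c * x / ((1.0 + a * x) * (1.0 + b * y))

    def rhs(state, r=1.0, K=10.0, s=0.4, k=2.0):
        x, y = state                                   # prey, predator
        dx = r * x * (1.0 - x / K) - crowley_martin(x, y) * y
        dy = s * y * (1.0 - y / (x + k))               # modified Leslie-Gower term
        return (dx, dy)

    def rk4_step(state, h):
        k1 = rhs(state)
        k2 = rhs(tuple(s + 0.5 * h * d for s, d in zip(state, k1)))
        k3 = rhs(tuple(s + 0.5 * h * d for s, d in zip(state, k2)))
        k4 = rhs(tuple(s + h * d for s, d in zip(state, k3)))
        return tuple(s + h / 6.0 * (d1 + 2 * d2 + 2 * d3 + d4)
                     for s, d1, d2, d3, d4 in zip(state, k1, k2, k3, k4))

    state = (5.0, 2.0)
    traj = [state]
    for _ in range(4000):
        state = rk4_step(state, 0.01)
        traj.append(state)
    ```

    With these illustrative values the trajectory stays positive and bounded and settles toward coexistence; varying s or the refuge parameter is what drives the Hopf bifurcation analyzed in the paper.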

  4. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
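    A minimal sketch of the feedback idea (a toy, not the authors' framework): a single segmentation parameter is tuned in closed loop until a measured image property matches an abstract ground truth, here a hypothetical target foreground fraction:

    ```python
    def segment(image, threshold):
        # Toy "segmentation": pixels brighter than the threshold are foreground
        return [[1 if px > threshold else 0 for px in row] for row in image]

    def foreground_fraction(mask):
        flat = [px for row in mask for px in row]
        return sum(flat) / len(flat)

    def adapt_threshold(image, target_fraction, steps=50, gain=50.0):
        # Feedback loop: compare the measured foreground fraction with the
        # expected fraction (the abstract ground truth) and nudge the
        # threshold to reduce the mismatch
        t = 128.0
        for _ in range(steps):
            err = foreground_fraction(segment(image, t)) - target_fraction
            t = min(max(t + gain * err, 0.0), 255.0)  # too much foreground -> raise t
        return t

    image = [[r * 16 + c for c in range(16)] for r in range(16)]  # gradient 0..255
    t = adapt_threshold(image, target_fraction=0.25)
    mask = segment(image, t)
    ```

    In a feedforward implementation the threshold would be fixed once; the feedback version re-measures after every segmentation, which is what buys robustness against noise and shading in the benchmark comparison described above.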

  5. Numerical Investigation of the Influence of the Configuration Parameters of a Supersonic Passenger Aircraft on the Intensity of Sonic Boom

    NASA Astrophysics Data System (ADS)

    Volkov, V. F.; Mazhul', I. I.

    2018-01-01

    Results of calculations of the sonic boom produced by a supersonic passenger aircraft in a cruising flight regime at Mach number M = 2.03 are presented. Consideration is given to the influence of the lateral dihedral of the wings and of their setting angle, as well as of different locations of the aircraft engine nacelles on the wing. An analysis of parametric calculations has shown that the intensities of the sonic boom generated by a configuration with a dihedral rear wing and by a configuration with set wings remain practically constant and correspond to the intensity level created by the optimum configuration. Comparative assessments of the sonic boom for tandem configurations with different locations of the engine nacelles on the wing surface have shown that the intensity of the sonic boom generated by the configuration with an engine nacelle on the windward side can be reduced by 14% compared to the configuration without engine nacelles. In the case of the configuration with engine nacelles on the leeward side of the wing, the profile of the sonic-boom wave degenerates into an N-wave, in which the intensity of the bow shock is significantly reduced.

  6. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213

  7. A review of parametric approaches specific to aerodynamic design process

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li

    2018-04-01

    Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches span a large design space with few variables. Commonly used parametric methods are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their abilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is more limited; the most popular is the Free-Form Deformation method. Methods extended from two-dimensional parametric approaches have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
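    Of the two-dimensional methods listed, the Class/Shape function Transformation is compact enough to sketch. The following minimal illustration (the weights are chosen arbitrarily, not taken from the survey) builds one airfoil surface:

    ```python
    from math import comb

    def cst_surface(weights, xs, n1=0.5, n2=1.0):
        """Class/Shape function Transformation (Kulfan CST) for one surface.

        The class function C(x) = x^N1 * (1 - x)^N2 fixes the round nose and
        sharp trailing edge (N1 = 0.5, N2 = 1.0); the shape function S(x) is
        a Bernstein polynomial whose coefficients are the design variables."""
        n = len(weights) - 1
        ys = []
        for x in xs:
            c = (x ** n1) * ((1.0 - x) ** n2)                 # class function
            s = sum(w * comb(n, i) * x ** i * (1.0 - x) ** (n - i)
                    for i, w in enumerate(weights))           # shape function
            ys.append(c * s)
        return ys

    xs = [i / 100.0 for i in range(101)]
    ys = cst_surface([0.17, 0.16, 0.15], xs)  # illustrative upper-surface weights
    ```

    Three weights already give a smooth, airfoil-like surface, which illustrates the point about large design spaces reachable with few variables; adding weights refines the shape without changing the class function.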

  8. SECIMTools: a suite of metabolomics data analysis tools.

    PubMed

    Kirpich, Alexander S; Ibarra, Miguel; Moskalenko, Oleksandr; Fear, Justin M; Gerken, Joseph; Mi, Xinlei; Ashrafi, Ali; Morse, Alison M; McIntyre, Lauren M

    2018-04-20

    Metabolomics has the promise to transform the area of personalized medicine with the rapid development of high-throughput technology for untargeted analysis of metabolites. Open-access, easy-to-use analytic tools that are broadly accessible to the biological community need to be developed. While the technology used in metabolomics varies, most metabolomics studies have a set of identified features. Galaxy is an open-access platform that enables scientists at all levels to interact with big data. Galaxy promotes reproducibility by saving histories and enabling the sharing of workflows among scientists. SECIMTools (SouthEast Center for Integrated Metabolomics) is a set of Python applications that are available both as standalone tools and wrapped for use in Galaxy. The suite includes a comprehensive set of quality control metrics (retention time window evaluation and various peak evaluation tools), visualization techniques (hierarchical cluster heatmap, principal component analysis, modulated modularity clustering), basic statistical analysis methods (partial least squares discriminant analysis, analysis of variance, t-test, Kruskal-Wallis non-parametric test), advanced classification methods (random forest, support vector machines), and advanced variable selection tools (least absolute shrinkage and selection operator (LASSO) and Elastic Net). SECIMTools leverages the Galaxy platform and enables integrated workflows for metabolomics data analysis made from building blocks designed for easy use and interpretability. Standard data formats and a set of utilities allow arbitrary linkages between tools to encourage novel workflow designs. The Galaxy framework enables future data integration for metabolomics studies with other omics data.

  9. Dysfunctional Effects of a Conflict in a Healthcare Organization.

    PubMed

    Raykova, Ekaterina L; Semerjieva, Mariya A; Yordanov, Georgi Y; Cherkezov, Todor D

    2015-01-01

    Conflicts in healthcare settings are quite common because of the continuous changes and transformations today's healthcare organizations are undergoing and the vigorous interaction between the medical professionals working in them. The aim was to survey the opinions of medical professionals about the possible destructive effects of workplace conflicts. We conducted a direct individual survey of 279 medical employees at four general hospitals, using a set of questions reflecting the negative effects and consequences of conflict on healthcare professionals as direct or indirect participants. All data were analysed using descriptive statistics and non-parametric analysis at a significance level for the null hypothesis of p < 0.05. Workplace conflicts contribute considerably to the stress, psychological tension and emotional exhaustion medical professionals are exposed to. The confrontation into which the conflict brings the participants acts as a catalyst of the conflict and enhances the manifestation of hostile actions. A conflict generates a situation which affects the behaviour of all participants involved in it, giving rise to emotional states such as anger, aggression and reproach. Its destructive consequences are seen in reduced work satisfaction and demotivation to perform the work activity. The contradictions that arise negatively affect team cooperation and obstruct collaborative problem-solving in the healthcare setting. A conflict in a healthcare setting exerts a considerable destructive effect on an employee; it therefore requires prompt identification and effective intervention to minimise its unfavourable outcomes.

  10. ASTM F1717 standard for the preclinical evaluation of posterior spinal fixators: can we improve it?

    PubMed

    La Barbera, Luigi; Galbusera, Fabio; Villa, Tomaso; Costa, Francesco; Wilke, Hans-Joachim

    2014-10-01

    Preclinical evaluation of spinal implants is a necessary step to ensure their reliability and safety before implantation. The American Society for Testing and Materials reapproved the F1717 standard for the assessment of the mechanical properties of posterior spinal fixators, which simulates a vertebrectomy model and recommends mimicking vertebral bodies using polyethylene blocks. This set-up should represent clinical use, but the available data in the literature are scarce. Anatomical parameters depending on the spinal level were compared to published data or to measurements from biplanar stereoradiography of 13 patients. Other mechanical variables describing implant design were considered, and all parameters were investigated using a numerical parametric finite element model. Stress values were calculated by considering either the combination of the average values for each parameter or their worst-case combination depending on the spinal level. The standard set-up represents quite well the anatomy of an instrumented average thoracolumbar segment. The stress on the pedicle screw is significantly influenced by the lever arm of the applied load, the unsupported screw length, the position of the centre of rotation of the functional spine unit and the pedicular inclination with respect to the sagittal plane. The worst-case combination of parameters demonstrates that devices implanted below T5 could potentially undergo higher stresses than those described in the standard suggestions (maximum increase of 22.2% at L1). We propose to revise F1717 in order to describe the anatomical worst-case condition we found at the L1 level: this will guarantee higher safety of the implant for a wider population of patients. © IMechE 2014.

  11. Structure of the alexithymic brain: A parametric coordinate-based meta-analysis.

    PubMed

    Xu, Pengfei; Opmeer, Esther M; van Tol, Marie-José; Goerlich, Katharina S; Aleman, André

    2018-04-01

    Alexithymia refers to deficiencies in identifying and expressing emotions. This might be related to changes in structural brain volumes, but its neuroanatomical basis remains uncertain as studies have shown heterogeneous findings. Therefore, we conducted a parametric coordinate-based meta-analysis. We identified seventeen structural neuroimaging studies (including a total of 2586 individuals with different levels of alexithymia) investigating the association between gray matter volume and alexithymia. Volumes of the left insula, left amygdala, orbital frontal cortex and striatum were consistently smaller in people with high levels of alexithymia. These areas are important for emotion perception and emotional experience. Smaller volumes in these areas might lead to deficiencies in appropriately identifying and expressing emotions. These findings provide the first quantitative integration of results pertaining to the structural neuroanatomical basis of alexithymia. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Observation of Squeezed Light in the 2 μm Region

    NASA Astrophysics Data System (ADS)

    Mansell, Georgia L.; McRae, Terry G.; Altin, Paul A.; Yap, Min Jet; Ward, Robert L.; Slagmolen, Bram J. J.; Shaddock, Daniel A.; McClelland, David E.

    2018-05-01

    We present the generation and detection of squeezed light in the 2 μm wavelength region. This experiment is a crucial step in realizing the quantum noise reduction techniques that will be required for future generations of gravitational-wave detectors. Squeezed vacuum is generated via degenerate optical parametric oscillation from a periodically poled potassium titanyl phosphate crystal in a dual-resonant cavity. The experiment uses a frequency-stabilized 1984 nm thulium fiber laser, and squeezing is detected using balanced homodyne detection with extended InGaAs photodiodes. We have measured 4.0 ± 0.1 dB of squeezing and 10.5 ± 0.5 dB of antisqueezing relative to the shot noise level in the audio frequency band, limited by photodiode quantum efficiency. The inferred squeezing level directly after the optical parametric oscillator, after accounting for known losses and phase noise, is 10.7 dB.
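    The relation between squeezing at the source and squeezing at the detector can be sketched with the standard beam-splitter loss model (phase noise, which the authors also account for, is neglected here, and the efficiency values are illustrative):

    ```python
    from math import log10

    def db_to_var(db):
        # Noise power relative to shot noise, from decibels
        return 10.0 ** (db / 10.0)

    def var_to_db(v):
        return 10.0 * log10(v)

    def detected_variance(v_in, efficiency):
        # Beam-splitter loss model: a fraction (1 - eta) of the squeezed field
        # is replaced by vacuum, whose variance is 1 in shot-noise units
        return efficiency * v_in + (1.0 - efficiency)

    # A source producing 10.7 dB of squeezing, observed through various losses
    v_sqz = db_to_var(-10.7)
    detected_db = {eta: var_to_db(detected_variance(v_sqz, eta))
                   for eta in (1.0, 0.9, 0.7, 0.5)}
    ```

    Under this simplified model, an overall efficiency near 0.7 brings the 10.7 dB source down to roughly the 4 dB range, which is broadly consistent with a detection chain limited by photodiode quantum efficiency.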

  13. The Relationship between Teaching Styles and Autonomy among Iranian Female EFL Teachers, Teaching at Advanced Levels

    ERIC Educational Resources Information Center

    Baradaran, Abdollah

    2016-01-01

    The current research aimed at inspecting the existence of a significant relationship between teachers' teaching styles and their Autonomy. For this reason, two questionnaires with regard to the main variables were given to 175 female English language teachers, teaching at advanced levels. Moreover, non-parametric Mann Whitney and Kruskal Wallis…

  14. A Distributional Difference-in-Difference Evaluation of the Response of School Expenditures to Reforms and Tax Limits

    ERIC Educational Resources Information Center

    McMillen, Daniel P.; Singell, Larry D., Jr.

    2010-01-01

    Prior work uses a parametric approach to study the distributional effects of school finance reform and finds evidence that reform yields greater equality of school expenditures by lowering spending in high-spending districts (leveling down) or increasing spending in low-spending districts (leveling up). We develop a kernel density…

  15. Influence of the Level Density Parametrization on the Effective GDR Width at High Spins

    NASA Astrophysics Data System (ADS)

    Mazurek, K.; Matejska, M.; Kmiecik, M.; Maj, A.; Dudek, J.

    Parameterizations of the nucleonic level densities are tested by computing the effective GDR strength-functions and GDR widths at high spins. Calculations are based on the thermal shape fluctuation method with the Lublin-Strasbourg Drop (LSD) model. Results for 106Sn, 147Eu, 176W, 194Hg are compared to the experimental data.

  16. Methodologies for Investigating Item- and Test-Level Measurement Equivalence in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; Olson, Brent F.; Ercikan, Kadriye; Zumbo, Bruno D.

    2012-01-01

    In this study, the Canadian English and French versions of the Problem-Solving Measure of the Programme for International Student Assessment 2003 were examined to investigate their degree of measurement comparability at the item- and test-levels. Three methods of differential item functioning (DIF) were compared: parametric and nonparametric item…

  17. A framework for multivariate data-based at-site flood frequency analysis: Essentiality of the conjugal application of parametric and nonparametric approaches

    NASA Astrophysics Data System (ADS)

    Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar

    2015-06-01

    In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and to provide guidelines for determining the design periods of flood control structures. Traditional FFA was extensively performed under a univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], the analysis has been further extended to the multivariate scenario, with some restrictive assumptions. To overcome the assumption of the same family of marginal density functions for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew formidable attention from the FFA research community, the basic limitation was that the analyses were performed using only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; however, a nonparametric distribution may not always be a good fit, nor capable of replacing well-implemented multivariate parametric and copula-based applications. Nevertheless, the potential of obtaining the best fit with nonparametric distributions should not be overlooked, because such distributions reproduce the sample's characteristics, resulting in more accurate estimations of the multivariate return period. Hence, the current study shows the importance of conjugating the multivariate nonparametric approach with multivariate parametric and copula-based approaches, resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, this approach can also be applied to regional FFA because regional estimations ideally include at-site estimations.
The framework is based on the following steps: (i) comprehensive trend analysis to assess nonstationarity in the observed data; (ii) selection of the best-fit univariate marginal distribution for the flood variables from a comprehensive set of parametric and nonparametric distributions; (iii) multivariate frequency analyses with parametric, copula-based and nonparametric approaches; and (iv) estimation of joint and various conditional return periods. The proposed framework is demonstrated using 110 years of observed data from the Allegheny River at Salamanca, New York, USA. The results show that for both the univariate and multivariate cases, the nonparametric Gaussian kernel provides the best estimate. Further, we perform FFA for twenty major rivers over the continental USA, which shows that for seven rivers all flood variables follow the nonparametric Gaussian kernel, whereas for the other rivers parametric distributions provide the best fit for one or two flood variables. In summary, the nonparametric method cannot substitute for the parametric and copula-based approaches, but should be considered during any at-site FFA to provide the broadest choice for the best estimation of flood return periods.
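    The nonparametric building block can be sketched in miniature. The toy example below (synthetic data, not the Allegheny record) estimates a flood-peak density with a Gaussian kernel using Silverman's rule-of-thumb bandwidth and converts an exceedance probability into a return period:

    ```python
    import random
    import statistics
    from math import exp, pi, sqrt

    def gaussian_kde(sample):
        """Nonparametric density estimate: one Gaussian kernel per data point,
        with Silverman's rule-of-thumb bandwidth."""
        n = len(sample)
        h = 1.06 * statistics.stdev(sample) * n ** (-0.2)
        def pdf(x):
            return sum(exp(-0.5 * ((x - xi) / h) ** 2)
                       for xi in sample) / (n * h * sqrt(2.0 * pi))
        return pdf

    def return_period(pdf, threshold, upper, dx=0.05):
        # T = 1 / P(X > threshold); the exceedance probability is obtained
        # by midpoint numerical integration of the estimated density
        steps = int((upper - threshold) / dx)
        p_exceed = sum(pdf(threshold + (i + 0.5) * dx) for i in range(steps)) * dx
        return 1.0 / max(p_exceed, 1e-12)

    random.seed(1)
    peaks = [random.gauss(100.0, 15.0) for _ in range(200)]  # synthetic annual peaks
    pdf = gaussian_kde(peaks)
    T = return_period(pdf, 130.0, max(peaks) + 10.0)
    ```

    Because the kernel estimate reproduces the sample's own shape, it adapts to skewness or multimodality that a single parametric family might miss, which is the property the study exploits.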

  18. Imaging of prostate cancer: a platform for 3D co-registration of in-vivo MRI, ex-vivo MRI and pathology

    NASA Astrophysics Data System (ADS)

    Orczyk, Clément; Mikheev, Artem; Rosenkrantz, Andrew; Melamed, Jonathan; Taneja, Samir S.; Rusinek, Henry

    2012-02-01

    Objectives: Multi-parametric MRI is emerging as a promising method for prostate cancer diagnosis, prognosis and treatment planning. However, the localization of in-vivo detected lesions and pathologic sites of cancer remains a significant challenge. To overcome this limitation we have developed and tested a system for co-registration of in-vivo MRI, ex-vivo MRI and histology. Materials and Methods: Three men diagnosed with localized prostate cancer (ages 54-72, PSA levels 5.1-7.7 ng/ml) were prospectively enrolled in this study. All patients underwent 3T multi-parametric MRI that included T2W, DCE-MRI and DWI prior to robotic-assisted prostatectomy. Ex-vivo multi-parametric MRI was performed on the fresh prostate specimen. Excised prostates were then sliced at regular intervals and photographed both before and after fixation. Slices were perpendicular to the main axis of the posterior capsule, i.e., along the direction of the rectal wall. Guided by the location of the urethra, 2D digital images were assembled into 3D models. Cancer foci, extra-capsular extensions and zonal margins were delineated by the pathologist and included in the 3D histology data. Locally developed software was applied to register the in-vivo, ex-vivo and histology data using an over-determined set of anatomical landmarks placed in the anterior fibro-muscular stroma and the central, transition and peripheral zones. The root-mean-square distance across corresponding control points was used to assess co-registration error. Results: Two specimens were pT3a and one pT2b (negative margin) at pathology. The software successfully fused in-vivo MRI, ex-vivo MRI of the fresh specimen and histology using appropriate (rigid and affine) transformation models with a mean square error of 1.59 mm. Co-registration accuracy was confirmed by multi-modality viewing using operator-guided variable transparency. 
Conclusion: The method enables successful co-registration of pre-operative MRI, ex-vivo MRI and pathology, and provides initial evidence of the feasibility of MRI-guided surgical planning.
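    The landmark-based criterion can be illustrated in miniature. The sketch below (hypothetical 2D landmarks; the study fits 3D rigid and affine models) recovers a rigid transform in closed form and reports the root-mean-square distance across corresponding control points:

    ```python
    from math import atan2, sin, cos, sqrt

    def rigid_register_2d(src, dst):
        """Closed-form least-squares 2D rigid (rotation + translation)
        alignment of corresponding landmark pairs."""
        n = len(src)
        csx = sum(p[0] for p in src) / n
        csy = sum(p[1] for p in src) / n
        cdx = sum(p[0] for p in dst) / n
        cdy = sum(p[1] for p in dst) / n
        num = den = 0.0
        for (x, y), (u, v) in zip(src, dst):
            x, y, u, v = x - csx, y - csy, u - cdx, v - cdy
            num += x * v - y * u            # cross products -> sin(theta)
            den += x * u + y * v            # dot products   -> cos(theta)
        th = atan2(num, den)
        tx = cdx - (cos(th) * csx - sin(th) * csy)
        ty = cdy - (sin(th) * csx + cos(th) * csy)
        return th, tx, ty

    def rms_error(src, dst, th, tx, ty):
        # Root-mean-square distance across corresponding control points
        err = 0.0
        for (x, y), (u, v) in zip(src, dst):
            xr = cos(th) * x - sin(th) * y + tx
            yr = sin(th) * x + cos(th) * y + ty
            err += (xr - u) ** 2 + (yr - v) ** 2
        return sqrt(err / len(src))

    # Landmarks mapped by a known rotation and translation
    src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0), (-1.0, 3.0)]
    th0, t0 = 0.5, (2.0, 3.0)
    dst = [(cos(th0) * x - sin(th0) * y + t0[0],
            sin(th0) * x + cos(th0) * y + t0[1]) for x, y in src]
    th, tx, ty = rigid_register_2d(src, dst)
    ```

    With more landmarks than degrees of freedom the system is over-determined, as in the study, and any residual RMS distance directly quantifies the co-registration error.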

  19. Rephasing invariant parametrization of flavor mixing

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Hun

    A new rephasing invariant parametrization for the 3 x 3 CKM matrix, called the (x, y) parametrization, is introduced, and its properties and applications are discussed. The overall phase condition leaves this parametrization with only six rephasing invariant parameters and two constraints. Its simplicity and regularity become apparent when it is applied to the one-loop renormalization group equations (RGE) for the Yukawa couplings. The implications of this parametrization for the unification of the Yukawa couplings are also explored.

  20. Water Residence Time estimation by 1D deconvolution in the form of a l2 -regularized inverse problem with smoothness, positivity and causality constraints

    NASA Astrophysics Data System (ADS)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
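    A scaled-down version of such a constrained deconvolution can be sketched as a projected-gradient scheme (synthetic data; the regularization weight, step size and series lengths below are invented for illustration):

    ```python
    import random

    def convolve(signal, kernel):
        # Causal discrete convolution, truncated to the length of the input
        out = []
        for n in range(len(signal)):
            out.append(sum(kernel[k] * signal[n - k]
                           for k in range(min(n + 1, len(kernel)))))
        return out

    def estimate_wrt(rain, levels, m, lam=0.01, lr=5e-3, iters=2000):
        """Projected-gradient estimate of an m-tap residence-time curve.

        Minimizes ||rain * h - levels||^2 + lam * sum_k (h[k+1] - h[k])^2
        (smoothness), projecting h onto h >= 0 after every step (positivity);
        causality is built into the one-sided convolution."""
        h = [0.0] * m
        for _ in range(iters):
            resid = [a - b for a, b in zip(convolve(rain, h), levels)]
            grad = []
            for k in range(m):
                g = sum(resid[n] * rain[n - k] for n in range(k, len(rain)))
                if k > 0:                       # smoothness penalty gradient
                    g += 2 * lam * (h[k] - h[k - 1])
                if k < m - 1:
                    g -= 2 * lam * (h[k + 1] - h[k])
                grad.append(g)
            h = [max(0.0, hk - lr * gk) for hk, gk in zip(h, grad)]
        return h

    random.seed(2)
    rain = [random.random() for _ in range(60)]      # synthetic input series
    h_true = [0.5, 0.3, 0.15, 0.05]                  # "true" residence-time curve
    levels = convolve(rain, h_true)                  # noise-free aquifer response
    h_est = estimate_wrt(rain, levels, len(h_true))
    ```

    Only the single weight lam balances smoothness against data fit, mirroring the one-regularization-parameter design choice the authors emphasize.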

  1. Covariate analysis of bivariate survival data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators, which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values, where the expected values are determined from a specified parametric distribution. The model estimation is based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models are compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  2. Transparency and Documentation in Simulations of Infectious Disease Outbreaks: Towards Evidence-Based Public Health Decisions and Communications

    NASA Astrophysics Data System (ADS)

    Ekberg, Joakim; Timpka, Toomas; Morin, Magnus; Jenvald, Johan; Nyce, James M.; Gursky, Elin A.; Eriksson, Henrik

Computer simulations have emerged as important tools in the preparation for outbreaks of infectious disease. To support collaborative planning and response to outbreaks, reports from simulations need to be transparent (accessible) with regard to the underlying parametric settings. This paper presents a design for the generation of simulation reports in which the background settings used in the simulation models are automatically visualized. We extended the ontology-management system Protégé to tag different settings into categories, and included these in report generation in parallel with the simulation outcomes. The report generator takes advantage of an XSLT specification and collects the documentation of the particular simulation settings into abridged XML documents that also include summarized results. We conclude that even though the inclusion of critical background settings in reports may not increase the accuracy of infectious disease simulations, it can prevent misunderstandings and less than optimal public health decisions.
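
As a rough illustration of the reporting idea (not the authors' Protégé/XSLT pipeline), the sketch below tags settings by category and serializes them next to summary results using Python's standard library; the XML schema, element names, and values are all invented for the example.

```python
import xml.etree.ElementTree as ET

def build_settings_report(settings, results):
    """Collect simulation settings (grouped by category) and summary
    results into one XML document, so that a report stays transparent
    about the parametric assumptions behind the run."""
    root = ET.Element("simulation_report")
    params = ET.SubElement(root, "settings")
    for category, pairs in settings.items():
        cat = ET.SubElement(params, "category", name=category)
        for key, value in pairs.items():
            ET.SubElement(cat, "setting", name=key).text = str(value)
    summary = ET.SubElement(root, "results")
    for key, value in results.items():
        ET.SubElement(summary, "outcome", name=key).text = str(value)
    return ET.tostring(root, encoding="unicode")

report = build_settings_report(
    {"epidemiology": {"R0": 1.8, "incubation_days": 3}},
    {"peak_day": 42},
)
```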

  3. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity-norm approach. The suite of tools developed enables us to determine whether the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
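
For a constraint that is linear in the uncertain parameters, the worst case over a hyper-rectangle can be evaluated in closed form, which conveys the flavour of the infinity-norm formulation. The sketch below is an illustrative special case, far simpler than the paper's general machinery; all names and numbers are hypothetical.

```python
import numpy as np

def worst_case_linear(a, b, p_nom, radius):
    """Worst-case value of g(p) = a.p + b over the hyper-rectangle
    |p_i - p_nom_i| <= radius_i. For a linear constraint the maximum
    is attained at a vertex: a.p_nom + sum(|a_i| * radius_i) + b."""
    a, p_nom, radius = map(np.asarray, (a, p_nom, radius))
    return float(a @ p_nom + np.abs(a) @ radius + b)

def hard_constraint_feasible(a, b, p_nom, radius):
    # The hard constraint g(p) <= 0 holds for ALL admissible parameter
    # realizations iff the worst case is non-positive.
    return worst_case_linear(a, b, p_nom, radius) <= 0.0

ok = hard_constraint_feasible([1.0, -2.0], -4.0, [1.0, 1.0], [0.5, 0.5])
```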

  4. S66: A Well-balanced Database of Benchmark Interaction Energies Relevant to Biomolecular Structures

    PubMed Central

    2011-01-01

    With numerous new quantum chemistry methods being developed in recent years and the promise of even more new methods to be developed in the near future, it is clearly critical that highly accurate, well-balanced, reference data for many different atomic and molecular properties be available for the parametrization and validation of these methods. One area of research that is of particular importance in many areas of chemistry, biology, and material science is the study of noncovalent interactions. Because these interactions are often strongly influenced by correlation effects, it is necessary to use computationally expensive high-order wave function methods to describe them accurately. Here, we present a large new database of interaction energies calculated using an accurate CCSD(T)/CBS scheme. Data are presented for 66 molecular complexes, at their reference equilibrium geometries and at 8 points systematically exploring their dissociation curves; in total, the database contains 594 points: 66 at equilibrium geometries, and 528 in dissociation curves. The data set is designed to cover the most common types of noncovalent interactions in biomolecules, while keeping a balanced representation of dispersion and electrostatic contributions. The data set is therefore well suited for testing and development of methods applicable to bioorganic systems. In addition to the benchmark CCSD(T) results, we also provide decompositions of the interaction energies by means of DFT-SAPT calculations. The data set was used to test several correlated QM methods, including those parametrized specifically for noncovalent interactions. Among these, the SCS-MI-CCSD method outperforms all other tested methods, with a root-mean-square error of 0.08 kcal/mol for the S66 data set. PMID:21836824
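
The figure of merit quoted above (RMSE of 0.08 kcal/mol for SCS-MI-CCSD) is a root-mean-square error against the benchmark energies, computed as follows; the energies in the example are made up, not actual S66 values.

```python
import math

def rmse(benchmark, predicted):
    """Root-mean-square error of a method's interaction energies
    against benchmark (e.g. CCSD(T)/CBS) values, in kcal/mol."""
    assert len(benchmark) == len(predicted)
    n = len(benchmark)
    return math.sqrt(sum((b - p) ** 2 for b, p in zip(benchmark, predicted)) / n)

# Illustrative numbers only, not actual S66 interaction energies
err = rmse([-4.92, -5.59, -6.91], [-4.85, -5.70, -6.80])
```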

  5. B97-3c: A revised low-cost variant of the B97-D density functional method

    NASA Astrophysics Data System (ADS)

    Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas; Grimme, Stefan

    2018-02-01

    A revised version of the well-established B97-D density functional approximation with general applicability for chemical properties of large systems is proposed. Like B97-D, it is based on Becke's power-series ansatz from 1997 and is explicitly parametrized by including the standard D3 semi-classical dispersion correction. The orbitals are expanded in a modified valence triple-zeta Gaussian basis set, which is available for all elements up to Rn. Remaining basis set errors are mostly absorbed in the modified B97 parametrization, while an established atom-pairwise short-range potential is applied to correct for the systematically too long bonds of main group elements which are typical for most semi-local density functionals. The new composite scheme (termed B97-3c) completes the hierarchy of "low-cost" electronic structure methods, which are all mainly free of basis set superposition error and account for most interactions in a physically sound and asymptotically correct manner. B97-3c yields excellent molecular and condensed phase geometries, similar to most hybrid functionals evaluated in a larger basis set expansion. Results on the comprehensive GMTKN55 energy database demonstrate its good performance for main group thermochemistry, kinetics, and non-covalent interactions, when compared to functionals of the same class. This also transfers to metal-organic reactions, which is a major area of applicability for semi-local functionals. B97-3c can be routinely applied to hundreds of atoms on a single processor and we suggest it as a robust computational tool, in particular, for more strongly correlated systems where our previously published "3c" schemes might be problematic.

  6. Systematic and Automated Development of Quantum Mechanically Derived Force Fields: The Challenging Case of Halogenated Hydrocarbons.

    PubMed

    Prampolini, Giacomo; Campetella, Marco; De Mitri, Nicola; Livotto, Paolo Roberto; Cacelli, Ivo

    2016-11-08

    A robust and automated protocol for the derivation of sound force field parameters, suitable for condensed-phase classical simulations, is here tested and validated on several halogenated hydrocarbons, a class of compounds for which standard force fields have often been reported to deliver rather inaccurate performances. The major strength of the proposed protocol is that all of the parameters are derived only from first principles because all of the information required is retrieved from quantum mechanical data, purposely computed for the investigated molecule. This a priori parametrization is carried out separately for the intra- and intermolecular contributions to the force fields, respectively exploiting the Joyce and Picky programs, previously developed in our group. To avoid high computational costs, all quantum mechanical calculations were performed exploiting the density functional theory. Because the choice of the functional is known to be crucial for the description of the intermolecular interactions, a specific procedure is proposed, which allows for a reliable benchmark of different functionals against higher-level data. The intramolecular and intermolecular contribution are eventually joined together, and the resulting quantum mechanically derived force field is thereafter employed in lengthy molecular dynamics simulations to compute several thermodynamic properties that characterize the resulting bulk phase. The accuracy of the proposed parametrization protocol is finally validated by comparing the computed macroscopic observables with the available experimental counterparts. It is found that, on average, the proposed approach is capable of yielding a consistent description of the investigated set, often outperforming the literature standard force fields, or at least delivering results of similar accuracy.

  7. Automated Training of ReaxFF Reactive Force Fields for Energetics of Enzymatic Reactions.

    PubMed

    Trnka, Tomáš; Tvaroška, Igor; Koča, Jaroslav

    2018-01-09

    Computational studies of the reaction mechanisms of various enzymes are nowadays based almost exclusively on hybrid QM/MM models. Unfortunately, the success of this approach strongly depends on the selection of the QM region, and computational cost is a crucial limiting factor. An interesting alternative is offered by empirical reactive molecular force fields, especially the ReaxFF potential developed by van Duin and co-workers. However, even though an initial parametrization of ReaxFF for biomolecules already exists, it does not provide the desired level of accuracy. We have conducted a thorough refitting of the ReaxFF force field to improve the description of reaction energetics. To minimize the human effort required, we propose a fully automated approach to generate an extensive training set comprised of thousands of different geometries and molecular fragments starting from a few model molecules. Electrostatic parameters were optimized with QM electrostatic potentials as the main target quantity, avoiding excessive dependence on the choice of reference atomic charges and improving robustness and transferability. The remaining force field parameters were optimized using the VD-CMA-ES variant of the CMA-ES optimization algorithm. This method is able to optimize hundreds of parameters simultaneously with unprecedented speed and reliability. The resulting force field was validated on a real enzymatic system, ppGalNAcT2 glycosyltransferase. The new force field offers excellent qualitative agreement with the reference QM/MM reaction energy profile, matches the relative energies of intermediate and product minima almost exactly, and reduces the overestimation of transition state energies by 27-48% compared with the previous parametrization.
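
VD-CMA-ES itself is a sophisticated covariance-adapting algorithm; as a stand-in, the toy (1+1) evolution strategy below illustrates the basic idea of derivative-free optimization of force-field parameters against a training-set loss. The step-size rule and the quadratic toy loss are illustrative assumptions, not the paper's method.

```python
import random

def one_plus_one_es(loss, x0, sigma=0.5, iters=300, seed=1):
    """Toy (1+1) evolution strategy with a 1/5th-success step-size
    rule -- a drastically simplified stand-in for VD-CMA-ES, shown
    only to illustrate derivative-free parameter fitting."""
    rng = random.Random(seed)
    x, fx = list(x0), loss(x0)
    for _ in range(iters):
        cand = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fc = loss(cand)
        if fc < fx:                  # accept improvements only
            x, fx = cand, fc
            sigma *= 1.1             # widen the search on success
        else:
            sigma *= 1.1 ** -0.25    # shrink slowly on failure
    return x, fx

# Toy "training-set loss": squared error of two parameters against
# hypothetical reference values (1.5, -0.7).
best, best_loss = one_plus_one_es(
    lambda p: (p[0] - 1.5) ** 2 + (p[1] + 0.7) ** 2, [0.0, 0.0])
```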

  8. The adaptive nature of eye movements in linguistic tasks: how payoff and architecture shape speed-accuracy trade-offs.

    PubMed

    Lewis, Richard L; Shvartsman, Michael; Singh, Satinder

    2013-07-01

We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to, and found to accord with, eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied. These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture. Copyright © 2013 Cognitive Science Society, Inc.

  9. Parametric modulation of neural activity by emotion in youth with bipolar disorder, youth with severe mood dysregulation, and healthy volunteers.

    PubMed

    Thomas, Laura A; Brotman, Melissa A; Muhrer, Eli J; Rosen, Brooke H; Bones, Brian L; Reynolds, Richard C; Deveney, Christen M; Pine, Daniel S; Leibenluft, Ellen

    2012-12-01

    CONTEXT Youth with bipolar disorder (BD) and those with severe, nonepisodic irritability (severe mood dysregulation [SMD]) exhibit amygdala dysfunction during facial emotion processing. However, studies have not compared such patients with each other and with comparison individuals in neural responsiveness to subtle changes in facial emotion; the ability to process such changes is important for social cognition. To evaluate this, we used a novel, parametrically designed faces paradigm. OBJECTIVE To compare activation in the amygdala and across the brain in BD patients, SMD patients, and healthy volunteers (HVs). DESIGN Case-control study. SETTING Government research institute. PARTICIPANTS Fifty-seven youths (19 BD, 15 SMD, and 23 HVs). MAIN OUTCOME MEASURE Blood oxygenation level-dependent data. Neutral faces were morphed with angry and happy faces in 25% intervals; static facial stimuli appeared for 3000 milliseconds. Participants performed hostility or nonemotional facial feature (ie, nose width) ratings. The slope of blood oxygenation level-dependent activity was calculated across neutral-to-angry and neutral-to-happy facial stimuli. RESULTS In HVs, but not BD or SMD participants, there was a positive association between left amygdala activity and anger on the face. In the neutral-to-happy whole-brain analysis, BD and SMD participants modulated parietal, temporal, and medial-frontal areas differently from each other and from that in HVs; with increasing facial happiness, SMD patients demonstrated increased, and BD patients decreased, activity in the parietal, temporal, and frontal regions. CONCLUSIONS Youth with BD or SMD differ from HVs in modulation of amygdala activity in response to small changes in facial anger displays. In contrast, individuals with BD or SMD show distinct perturbations in regions mediating attention and face processing in association with changes in the emotional intensity of facial happiness displays. 
These findings demonstrate similarities and differences in the neural correlates of facial emotion processing in BD and SMD, suggesting that these distinct clinical presentations may reflect differing dysfunctions along a mood disorders spectrum.
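
The per-voxel quantity that the paradigm above parametrizes, the slope of BOLD activity across morph levels, reduces to an ordinary least-squares slope. A minimal sketch with invented signal values:

```python
def bold_slope(morph_levels, bold):
    """Least-squares slope of BOLD signal regressed on facial morph
    level (e.g. 0, 25, 50, 75, 100% angry), the per-voxel statistic
    computed across the neutral-to-angry stimuli."""
    n = len(morph_levels)
    mx = sum(morph_levels) / n
    my = sum(bold) / n
    num = sum((x - mx) * (y - my) for x, y in zip(morph_levels, bold))
    den = sum((x - mx) ** 2 for x in morph_levels)
    return num / den

# Hypothetical signal values rising linearly with anger intensity
slope = bold_slope([0, 25, 50, 75, 100], [0.0, 0.1, 0.2, 0.3, 0.4])
```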

  10. The relationship between multilevel models and non-parametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity.

    PubMed

    Rights, Jason D; Sterba, Sonya K

    2016-11-01

    Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.
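
The discrete approximation of the intraclass correlation can be written down directly: the between-cluster variance implied by K latent classes is the weighted variance of the class means, and the ICC is its share of total variance. A minimal sketch with hypothetical class weights and means:

```python
def discrete_icc(probs, means, within_var):
    """Intraclass correlation implied by a non-parametric multilevel
    mixture: between-cluster variance is approximated by the variance
    of the K discrete class means (with class probabilities probs),
    and ICC = between / (between + within)."""
    grand = sum(p * m for p, m in zip(probs, means))
    between = sum(p * (m - grand) ** 2 for p, m in zip(probs, means))
    return between / (between + within_var)

# Two equally weighted latent classes, unit level-1 variance
icc = discrete_icc([0.5, 0.5], [-1.0, 1.0], 1.0)
```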

  11. Design Automation Using Script Languages. High-Level CAD Templates in Non-Parametric Programs

    NASA Astrophysics Data System (ADS)

    Moreno, R.; Bazán, A. M.

    2017-10-01

The main purpose of this work is to study the advantages offered by applying traditional technical-drawing techniques to design automation processes in non-parametric CAD programs equipped with scripting languages. Given that an example drawing can be solved with traditional step-by-step detailed procedures, it is possible to do the same with CAD applications and to generalize the solution later by incorporating references. Today's CAD applications show striking absences of solutions for building engineering: oblique projections (military and cavalier), 3D modelling of complex stairs, roofs, furniture, and so on. The use of geometric references (using variables in script languages) and their incorporation into high-level CAD templates allows the automation of processes. Instead of repeatedly creating similar designs or modifying their data, users should be able to use these templates to generate future variations of the same design. This paper presents the automation process for several complex drawing examples based on CAD script files aided with parametric geometry calculation tools. The proposed method allows us to solve complex geometry designs not currently incorporated in CAD applications and to subsequently create new derivatives without user intervention. Automation in the generation of complex designs not only saves time but also increases the quality of the presentations and reduces the possibility of human error.
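
As a toy example of the template idea (not taken from the paper), the function below expands three geometric references, total rise, total run and step count, into the vertex list of a straight-stair profile, the kind of construction a high-level CAD template could emit without user intervention.

```python
def stair_profile(total_rise, total_run, n_steps):
    """Generate 2D polyline vertices for a straight stair section from
    three parameters -- a hypothetical geometric reference that a
    high-level CAD template could expand into a drawing."""
    rise, run = total_rise / n_steps, total_run / n_steps
    pts = [(0.0, 0.0)]
    for _ in range(n_steps):
        x, y = pts[-1]
        pts.append((x, y + rise))        # riser
        pts.append((x + run, y + rise))  # tread
    return pts

pts = stair_profile(3.0, 4.0, 3)
```

Changing any of the three parameters regenerates the whole profile, which is exactly the derivative-design workflow the abstract describes.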

  12. Brain signal variability is parametrically modifiable.

    PubMed

    Garrett, Douglas D; McIntosh, Anthony R; Grady, Cheryl L

    2014-11-01

    Moment-to-moment brain signal variability is a ubiquitous neural characteristic, yet remains poorly understood. Evidence indicates that heightened signal variability can index and aid efficient neural function, but it is not known whether signal variability responds to precise levels of environmental demand, or instead whether variability is relatively static. Using multivariate modeling of functional magnetic resonance imaging-based parametric face processing data, we show here that within-person signal variability level responds to incremental adjustments in task difficulty, in a manner entirely distinct from results produced by examining mean brain signals. Using mixed modeling, we also linked parametric modulations in signal variability with modulations in task performance. We found that difficulty-related reductions in signal variability predicted reduced accuracy and longer reaction times within-person; mean signal changes were not predictive. We further probed the various differences between signal variance and signal means by examining all voxels, subjects, and conditions; this analysis of over 2 million data points failed to reveal any notable relations between voxel variances and means. Our results suggest that brain signal variability provides a systematic task-driven signal of interest from which we can understand the dynamic function of the human brain, and in a way that mean signals cannot capture. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
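
Parametric cost models of this family typically take the form effort = a * size^b times a product of cost drivers; the coefficients and drivers below are generic illustrative values in the COCOMO tradition, not the calibrated JPL/DSN model.

```python
def software_effort(ksloc, multipliers, a=2.8, b=1.2):
    """Generic parametric effort model: person-months estimated as
    a * size^b scaled by multiplicative cost drivers (environment,
    difficulty, technology). Coefficients here are illustrative."""
    effort = a * ksloc ** b
    for m in multipliers:
        effort *= m
    return effort

# 10-KSLOC task: one driver raising cost 15%, one lowering it 10%
pm = software_effort(10.0, [1.15, 0.90])
```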

  14. On Monotone Embedding in Information Geometry (Open Access)

    DTIC Science & Technology

    2015-06-25

the non-parametric (infinite-dimensional) setting, as well [4,6], with the α-connection structure cast in a more general way. Theorem 1 of [4] gives... the weighting function for taking the expectation of random variables in calculating the Riemannian metric (G = 1 reduces to F-geometry, with the ...is a trivial rewriting of the convex function f used by [2]. This paper will start in Section 1

  15. Waves and instabilities in plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L.

    1987-01-01

The contents of this book are: Plasma as a Dielectric Medium; Nyquist Technique; Absolute and Convective Instabilities; Landau Damping and Phase Mixing; Particle Trapping and Breakdown of Linear Theory; Solution of the Vlasov Equation via Guiding-Center Transformation; Kinetic Theory of Magnetohydrodynamic Waves; Geometric Optics; Wave-Kinetic Equation; Cutoff and Resonance; Resonant Absorption; Mode Conversion; Gyrokinetic Equation; Drift Waves; Quasi-Linear Theory; Ponderomotive Force; Parametric Instabilities; Problem Sets for Homework, Midterm and Final Examinations.

  16. A Scenario-Based Parametric Analysis of Stable Marriage Approaches to the Army Officer Assignment Problem

    DTIC Science & Technology

    2017-03-23

solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or... method flag: Designates the method by which the changed/new assignment problem instance is solved. methodFlag = 0: SMAWarmstart Returns a matching... of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified

  17. A Parametric Finite-Element Model for Evaluating Segmented Mirrors with Discrete, Edgewise Connectivity

    NASA Technical Reports Server (NTRS)

    Gersh-Range, Jessica A.; Arnold, William R.; Peck, Mason A.; Stahl, H. Philip

    2011-01-01

    Since future astrophysics missions require space telescopes with apertures of at least 10 meters, there is a need for on-orbit assembly methods that decouple the size of the primary mirror from the choice of launch vehicle. One option is to connect the segments edgewise using mechanisms analogous to damped springs. To evaluate the feasibility of this approach, a parametric ANSYS model that calculates the mode shapes, natural frequencies, and disturbance response of such a mirror, as well as of the equivalent monolithic mirror, has been developed. This model constructs a mirror using rings of hexagonal segments that are either connected continuously along the edges (to form a monolith) or at discrete locations corresponding to the mechanism locations (to form a segmented mirror). As an example, this paper presents the case of a mirror whose segments are connected edgewise by mechanisms analogous to a set of four collocated single-degree-of-freedom damped springs. The results of a set of parameter studies suggest that such mechanisms can be used to create a 15-m segmented mirror that behaves similarly to a monolith, although fully predicting the segmented mirror performance would require incorporating measured mechanism properties into the model. Keywords: segmented mirror, edgewise connectivity, space telescope
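
A drastically reduced analogue of the segmented-mirror model, two rigid segments joined by a single spring-like edge mechanism, already shows the split into a zero-frequency rigid-body mode and an out-of-phase mode at sqrt(2k/m); the stiffness and mass values below are hypothetical, not the paper's ANSYS parameters.

```python
import numpy as np

def coupled_segment_modes(m, k):
    """Natural frequencies (rad/s) and mode shapes of two identical
    free-floating segments of mass m joined by one spring of
    stiffness k. Free-free: one rigid-body mode (0 rad/s) and one
    out-of-phase mode at sqrt(2k/m)."""
    K = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    M = m * np.eye(2)
    # Generalized eigenproblem K v = w^2 M v; M is diagonal, so
    # inv(M) @ K stays symmetric and eigh applies.
    w2, modes = np.linalg.eigh(np.linalg.inv(M) @ K)
    return np.sqrt(np.clip(w2, 0.0, None)), modes

freqs, modes = coupled_segment_modes(m=50.0, k=2.0e4)
```

Sweeping k toward large values drives the out-of-phase frequency up, i.e. toward monolith-like behaviour, which is the qualitative trend the parameter studies in the paper explore.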

  18. Modeling of second order space charge driven coherent sum and difference instabilities

    NASA Astrophysics Data System (ADS)

    Yuan, Yao-Shuo; Boine-Frankenheim, Oliver; Hofmann, Ingo

    2017-10-01

Second order coherent oscillation modes in intense particle beams play an important role for beam stability in linear or circular accelerators. In addition to the well-known second order even envelope modes and their instability, coupled even envelope modes and odd (skew) modes have recently been shown in [Phys. Plasmas 23, 090705 (2016), 10.1063/1.4963851] to lead to parametric instabilities in periodic focusing lattices with sufficiently different tunes. While that work relied partly on the usual envelope equations and partly on particle-in-cell (PIC) simulation, we revisit these modes here and show that the complete set of second order even and odd mode phenomena can be obtained in a unifying approach by using a single set of linearized rms moment equations based on "Chernin's equations." This has the advantage that accurate information on growth rates can be obtained and gathered in a "tune diagram." In periodic focusing we retrieve the parametric sum instabilities of coupled even and of odd modes. The stop bands obtained from these equations are compared with results from PIC simulations for waterbag beams and found to show very good agreement. The "tilting instability" obtained in constant focusing confirms the equivalence of this method with the linearized Vlasov-Poisson system evaluated in second order.
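
The mechanism behind such parametric instabilities can be demonstrated on the simplest possible model, a single Mathieu oscillator in a periodically modulated focusing channel: integrate two independent solutions over one period to form the monodromy matrix and test |trace| <= 2 (Floquet criterion). This is an illustrative analogue, not the paper's coupled-moment system; the parameter values are hypothetical.

```python
import math

def mathieu_stable(a, q, steps=2000):
    """Floquet stability of x'' + (a + q*cos t) x = 0 over one period
    T = 2*pi: build the monodromy matrix from two RK4-integrated
    fundamental solutions and check |trace| <= 2."""
    T = 2.0 * math.pi
    h = T / steps

    def deriv(t, y):
        x, v = y
        return (v, -(a + q * math.cos(t)) * x)

    def integrate(y):
        t = 0.0
        for _ in range(steps):
            k1 = deriv(t, y)
            k2 = deriv(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
            k3 = deriv(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
            k4 = deriv(t + h, [y[i] + h * k3[i] for i in range(2)])
            y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2)]
            t += h
        return y

    col1 = integrate([1.0, 0.0])   # solution with x(0)=1, x'(0)=0
    col2 = integrate([0.0, 1.0])   # solution with x(0)=0, x'(0)=1
    trace = col1[0] + col2[1]
    return abs(trace) <= 2.0

inside_tongue = mathieu_stable(a=0.25, q=0.2)  # centre of the primary resonance tongue
away = mathieu_stable(a=0.6, q=0.2)            # between tongues
```

Scanning (a, q) with this test traces out the stop bands, the single-oscillator analogue of the "tune diagram" described above.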

  19. Perfusion CT in acute stroke: effectiveness of automatically-generated colour maps.

    PubMed

    Ukmar, Maja; Degrassi, Ferruccio; Pozzi Mucelli, Roberta Antea; Neri, Francesca; Mucelli, Fabio Pozzi; Cova, Maria Assunta

    2017-04-01

To evaluate the accuracy of perfusion CT (pCT) in the definition of the infarcted core and the penumbra, comparing the data obtained from the evaluation of parametric maps [cerebral blood volume (CBV), cerebral blood flow (CBF) and mean transit time (MTT)] with software-generated colour maps. A retrospective analysis was performed to identify patients with suspected acute ischaemic strokes who had undergone unenhanced CT and pCT carried out within 4.5 h from the onset of symptoms. A qualitative evaluation of the CBV, CBF and MTT maps was performed, followed by an analysis of the colour maps automatically generated by the software. 26 patients were identified, but a direct CT follow-up was performed only on 19 patients after 24-48 h. In the qualitative analysis, 14 patients showed perfusion abnormalities. Specifically, 29 perfusion deficit areas were detected, of which 15 areas suggested the penumbra and the remaining 14 areas suggested the infarct. As for the automatically generated software maps, 12 patients showed perfusion abnormalities. 25 perfusion deficit areas were identified, 15 areas of which suggested the penumbra and the other 10 areas the infarct. McNemar's test showed no statistically significant difference between the two methods of evaluation in highlighting infarcted areas proved later at CT follow-up. We demonstrated how pCT provides good diagnostic accuracy in the identification of acute ischaemic lesions. The limits of identification of the lesions mainly lie at the pons level and in the basal ganglia area. Qualitative analysis has proven to be more efficient in the identification of perfusion lesions in comparison with software-generated maps. However, software-generated maps have proven to be very useful in the emergency setting.
Advances in knowledge: CT perfusion is requested for an increasing number of patients in order to optimize treatment, aided by the technological evolution of CT, which now allows whole-brain studies. The need to perform CT perfusion studies in the emergency setting can be a problem for physicians who are not used to interpreting the parametric maps (CBV, MTT etc.). Software-generated maps could be of value in these settings, helping less experienced physicians differentiate between the areas.
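
McNemar's test, used above to compare the two paired readings, needs only the discordant counts of the paired classifications; the counts below are hypothetical, not the study's data.

```python
def mcnemar_statistic(b, c):
    """McNemar chi-square (without continuity correction) from the
    discordant counts of a paired comparison: b = method A positive /
    method B negative, c = A negative / B positive. Under H0 of equal
    marginal proportions the statistic is (b - c)^2 / (b + c),
    referred to a chi-square distribution with 1 degree of freedom."""
    return (b - c) ** 2 / (b + c)

# Hypothetical discordant counts: qualitative reading vs software maps
chi2 = mcnemar_statistic(6, 2)
```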

  20. Documenting the location of systematic transrectal ultrasound-guided prostate biopsies: correlation with multi-parametric MRI.

    PubMed

    Turkbey, Baris; Xu, Sheng; Kruecker, Jochen; Locklin, Julia; Pang, Yuxi; Shah, Vijay; Bernardo, Marcelino; Baccala, Angelo; Rastinehad, Ardeshir; Benjamin, Compton; Merino, Maria J; Wood, Bradford J; Choyke, Peter L; Pinto, Peter A

    2011-03-29

During transrectal ultrasound (TRUS)-guided prostate biopsies, the actual location of the biopsy site is rarely documented. Here, we demonstrate the capability of TRUS-magnetic resonance imaging (MRI) image fusion to document the biopsy site and correlate biopsy results with multi-parametric MRI findings. Fifty consecutive patients (median age 61 years) with a median prostate-specific antigen (PSA) level of 5.8 ng/ml underwent 12-core TRUS-guided biopsy of the prostate. Pre-procedural T2-weighted magnetic resonance images were fused to TRUS. A disposable needle guide with miniature tracking sensors was attached to the TRUS probe to enable fusion with MRI. Real-time TRUS images during biopsy and the corresponding tracking information were recorded. Each biopsy site was superimposed onto the MRI. Each biopsy site was classified as positive or negative for cancer based on the results of each MRI sequence. Sensitivity, specificity, and receiver operating characteristic (ROC) area under the curve (AUC) values were calculated for multi-parametric MRI. Gleason scores for each multi-parametric MRI pattern were also evaluated. Six hundred and five systematic biopsy cores were analyzed in 50 patients, of whom 20 patients had 56 positive cores. MRI identified 34 of 56 positive cores. Overall, sensitivity, specificity, and ROC area values for multi-parametric MRI were 0.607, 0.727, and 0.667, respectively. TRUS-MRI fusion after biopsy can be used to document the location of each biopsy site, which can then be correlated with MRI findings. Based on correlation with tracked biopsies, T2-weighted MRI and apparent diffusion coefficient maps derived from diffusion-weighted MRI are the most sensitive sequences, whereas the addition of delayed contrast enhancement MRI and three-dimensional magnetic resonance spectroscopy demonstrated higher specificity consistent with results obtained using radical prostatectomy specimens.
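
The per-core sensitivity and specificity quoted above follow from a standard 2x2 tally. Sensitivity (34/56) is fixed by the abstract; the true-negative/false-positive split below is an illustrative choice that reproduces the reported specificity of 0.727 over the remaining 549 cores.

```python
def diagnostic_summary(tp, fp, tn, fn):
    """Per-core sensitivity and specificity of MRI against the
    biopsy reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec

# tp and fn follow from the abstract (34 of 56 positive cores found);
# the tn/fp split of the 549 negative cores is illustrative.
sens, spec = diagnostic_summary(tp=34, fp=150, tn=399, fn=22)
```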
