Sample records for parametric yield prediction

  1. Comparison Between Linear and Non-parametric Regression Models for Genome-Enabled Prediction in Wheat

    PubMed Central

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-01-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
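As a rough illustration of the linear class of models compared above, the sketch below estimates marker effects with plain ridge regression, a non-Bayesian stand-in for Bayesian ridge regression. The marker coding and trait values are invented toy data, not the CIMMYT wheat data.

```python
def ridge_effects(X, y, lam):
    """Ridge-regression marker effects: solve (X'X + lam*I) b = X'y.
    A deterministic stand-in for Bayesian ridge regression."""
    p = len(X[0])
    # Build the regularized normal equations.
    A = [[sum(X[i][j] * X[i][k] for i in range(len(X))) + (lam if j == k else 0.0)
          for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(len(X))) for j in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

def predict(X, beta):
    return [sum(x[j] * beta[j] for j in range(len(beta))) for x in X]

# Toy data: 6 lines x 3 markers coded -1/0/1, purely additive trait.
X = [[1, 0, -1], [0, 1, 1], [-1, 1, 0], [1, 1, 1], [0, -1, 0], [-1, 0, 1]]
true_b = [2.0, -1.0, 0.5]
y = [sum(xi * bi for xi, bi in zip(x, true_b)) for x in X]
beta = ridge_effects(X, y, lam=0.01)
yhat = predict(X, beta)
```

With a small penalty and noise-free additive data, the recovered effects are close to the true ones; the non-linear models in the abstract (RKHS, neural networks) relax exactly this additivity assumption.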

  2. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    PubMed

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  3. Influence of Yield Stress Determination in Anisotropic Hardening Model on Springback Prediction in Dual-Phase Steel

    NASA Astrophysics Data System (ADS)

    Lee, J.; Bong, H. J.; Ha, J.; Choi, J.; Barlat, F.; Lee, M.-G.

    2018-05-01

    In this study, a numerical sensitivity analysis of the springback prediction was performed using advanced strain hardening models. In particular, the springback in U-draw bending for dual-phase 780 steel sheets was investigated while focusing on the effect of the initial yield stress determined from the cyclic loading tests. The anisotropic hardening models could reproduce the flow stress behavior under the non-proportional loading condition for the considered parametric cases. However, various identification schemes for determining the yield stress of the anisotropic hardening models significantly influenced the springback prediction. The deviations from the measured springback varied from 4% to 13.5% depending on the identification method.

  4. A global goodness-of-fit test for receiver operating characteristic curve analysis via the bootstrap method.

    PubMed

    Zou, Kelly H; Resnic, Frederic S; Talos, Ion-Florin; Goldberg-Zimring, Daniel; Bhagwat, Jui G; Haker, Steven J; Kikinis, Ron; Jolesz, Ferenc A; Ohno-Machado, Lucila

    2005-10-01

    Medical classification accuracy studies often yield continuous data based on predictive models for treatment outcomes. A popular method for evaluating the performance of diagnostic tests is receiver operating characteristic (ROC) curve analysis. The main objective was to develop a global statistical hypothesis test for assessing the goodness-of-fit (GOF) of parametric ROC curves via the bootstrap. A simple log (or logit) transformation and a more flexible Box-Cox normality transformation were applied to data from two clinical studies: prediction of complications following percutaneous coronary interventions (PCIs), and image-guided neurosurgical resection results predicted by tumor volume. We compared a non-parametric with a parametric binormal estimate of the underlying ROC curve. To construct such a GOF test, we used the non-parametric and parametric areas under the curve (AUCs) as the metrics, with a resulting p value reported. In the interventional cardiology example, logit and Box-Cox transformations of the predictive probabilities led to satisfactory AUCs (AUC=0.888; p=0.78, and AUC=0.888; p=0.73, respectively), while in the brain tumor resection example, log and Box-Cox transformations of the tumor size also led to satisfactory AUCs (AUC=0.898; p=0.61, and AUC=0.899; p=0.42, respectively). In contrast, significant departures from GOF were observed without applying any transformation prior to assuming a binormal model (AUC=0.766; p=0.004, and AUC=0.831; p=0.03, respectively). In both studies the p values suggested that transformations should be considered before applying any binormal model to estimate the AUC. Our analyses also demonstrated and confirmed the predictive value of the different classifiers for determining interventional complications following PCIs and resection outcomes in image-guided neurosurgery.
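The GOF idea can be sketched as follows: compare a non-parametric (Mann-Whitney) AUC against a parametric binormal AUC and bootstrap their difference. This is a simplified reconstruction, not the authors' exact procedure, and the scores below are simulated, not the clinical data.

```python
import math
import random
from statistics import fmean

def auc_nonparametric(neg, pos):
    # Mann-Whitney estimate of the AUC, with a tie correction.
    wins = sum((x < y) + 0.5 * (x == y) for x in neg for y in pos)
    return wins / (len(neg) * len(pos))

def auc_binormal(neg, pos):
    # Binormal AUC: Phi((mu1 - mu0) / sqrt(s0^2 + s1^2)).
    mu0, mu1 = fmean(neg), fmean(pos)
    v0 = sum((x - mu0) ** 2 for x in neg) / (len(neg) - 1)
    v1 = sum((x - mu1) ** 2 for x in pos) / (len(pos) - 1)
    z = (mu1 - mu0) / math.sqrt(v0 + v1)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gof_pvalue(neg, pos, n_boot=400, seed=1):
    # Bootstrap the nonparametric-minus-parametric AUC difference;
    # a small p value flags departure from the binormal assumption.
    rng = random.Random(seed)
    d_obs = auc_nonparametric(neg, pos) - auc_binormal(neg, pos)
    diffs = []
    for _ in range(n_boot):
        nb = [rng.choice(neg) for _ in neg]
        pb = [rng.choice(pos) for _ in pos]
        diffs.append(auc_nonparametric(nb, pb) - auc_binormal(nb, pb))
    center = fmean(diffs)
    return sum(abs(d - center) >= abs(d_obs) for d in diffs) / n_boot

rng = random.Random(0)
neg = [rng.gauss(0.0, 1.0) for _ in range(60)]   # simulated non-diseased scores
pos = [rng.gauss(1.0, 1.0) for _ in range(60)]   # simulated diseased scores
p = gof_pvalue(neg, pos)
```

Because the simulated data are genuinely binormal here, the two AUC estimates should agree; on skewed data the test would motivate a log or Box-Cox transformation first, as in the abstract.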

  5. The light-yield response of a NE-213 liquid-scintillator detector measured using 2-6 MeV tagged neutrons

    NASA Astrophysics Data System (ADS)

    Scherzinger, J.; Al Jebali, R.; Annand, J. R. M.; Fissum, K. G.; Hall-Wilton, R.; Kanaki, K.; Lundin, M.; Nilsson, B.; Perrey, H.; Rosborg, A.; Svensson, H.

    2016-12-01

    The response of a NE-213 liquid-scintillator detector has been measured using tagged neutrons from 2 to 6 MeV originating from an Am/Be neutron source. The neutron energies were determined using the time-of-flight technique. Pulse-shape discrimination was employed to discern between gamma-rays and neutrons. The behavior of both the fast (35 ns) and the combined fast and slow (475 ns) components of the neutron scintillation-light pulses was studied. Three different prescriptions were used to relate the neutron maximum energy-transfer edges to the corresponding recoil-proton scintillation-light yields, and the results were compared to simulations. The overall normalizations of parametrizations which predict the fast or total light yield of the scintillation pulses were also tested. Our results agree with both existing data and existing parametrizations. We observe a clear sensitivity to the portion and length of the neutron scintillation-light pulse considered.

  6. Control law parameterization for an aeroelastic wind-tunnel model equipped with an active roll control system and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III; Dunn, H. J.; Sandford, Maynard C.

    1988-01-01

    Nominal roll control laws were designed, implemented, and tested on an aeroelastically-scaled free-to-roll wind-tunnel model of an advanced fighter configuration. The tests were performed in the NASA Langley Transonic Dynamics Tunnel. A parametric study of the nominal roll control system was conducted. This parametric study determined possible control system gain variations which yielded identical closed-loop stability (roll mode pole location) and identical roll response but different maximum control-surface deflections. Comparison of analytical predictions with wind-tunnel results was generally very good.

  7. Parametric Optimization Of Gas Metal Arc Welding Process By Using Grey Based Taguchi Method On Aisi 409 Ferritic Stainless Steel

    NASA Astrophysics Data System (ADS)

    Ghosh, Nabendu; Kumar, Pradip; Nandi, Goutam

    2016-10-01

    Welding input process parameters play a very significant role in determining the quality of a welded joint. Only by properly controlling every element of the process can product quality be controlled. For better quality in MIG welding of ferritic stainless steel AISI 409, precise control of the process parameters, parametric optimization, and prediction and control of the desired responses (quality indices) require continued and elaborate experiments, analysis, and modeling. A knowledge base may thus be generated that practicing engineers and technicians can use to produce good-quality welds more precisely, reliably, and predictably. In the present work, X-ray radiographic testing was conducted to detect surface and sub-surface defects in weld specimens made of ferritic stainless steel. Weld quality was evaluated in terms of the yield strength, ultimate tensile strength, and percentage elongation of the welded specimens. The observed data were interpreted, discussed, and analyzed by considering ultimate tensile strength, yield strength, and percentage elongation combined, using the Grey-Taguchi methodology.
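The ranking step of Grey-Taguchi analysis can be illustrated with a minimal grey relational grade computation. The response values below (tensile strength, yield strength, elongation) are invented, all treated as larger-the-better, and ζ = 0.5 is the usual distinguishing coefficient.

```python
def grey_relational_grades(trials, zeta=0.5):
    """Grey relational grade per trial; each response is larger-the-better
    and must vary across trials."""
    n_resp = len(trials[0])
    # Larger-the-better normalization of each response to [0, 1].
    norm = []
    for j in range(n_resp):
        col = [t[j] for t in trials]
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) for v in col])
    grades = []
    for i in range(len(trials)):
        coeffs = []
        for j in range(n_resp):
            delta = 1.0 - norm[j][i]      # deviation from the ideal sequence
            # With delta_min = 0 and delta_max = 1, the grey relational
            # coefficient reduces to zeta / (delta + zeta).
            coeffs.append(zeta / (delta + zeta))
        grades.append(sum(coeffs) / n_resp)
    return grades

# Hypothetical UTS (MPa), yield strength (MPa), elongation (%) for 3 runs.
trials = [[400.0, 300.0, 20.0],
          [420.0, 320.0, 25.0],
          [410.0, 340.0, 22.0]]
grades = grey_relational_grades(trials)
```

The run with the highest grade is the best compromise across all three quality indices, which is how a single optimal parametric combination is picked from a Taguchi array.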

  8. Cure modeling in real-time prediction: How much does it help?

    PubMed

    Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F

    2017-08-01

    Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intended to cover situations where a non-negligible fraction of subjects appears to be cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial, when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
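The cure-mixture idea can be sketched directly: a fraction π of subjects never experiences the event, and the rest follow a Weibull event-time distribution. The parameter values below are invented for illustration, not estimates from RTOG 0129.

```python
import math
import random

def cure_mixture_survival(t, pi_cure, shape, scale):
    """S(t) for a Weibull cure-mixture model: a fraction pi_cure never fails,
    the remainder follow a Weibull(shape, scale) event-time distribution."""
    return pi_cure + (1.0 - pi_cure) * math.exp(-((t / scale) ** shape))

def expected_events(n, t, pi_cure, shape, scale):
    """Expected number of events among n subjects by time t."""
    return n * (1.0 - cure_mixture_survival(t, pi_cure, shape, scale))

def simulate_event_times(n, pi_cure, shape, scale, seed=0):
    """Draw event times; cured subjects get an infinite time (never fail)."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        if rng.random() < pi_cure:
            times.append(math.inf)
        else:
            # Inverse-CDF sampling of Weibull(shape, scale).
            u = rng.random()
            times.append(scale * (-math.log(1.0 - u)) ** (1.0 / shape))
    return times
```

Note the key qualitative feature from the abstract: as t grows, S(t) flattens at π instead of going to zero, so a no-cure Weibull fit will predict events arriving too early once the plateau becomes visible.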

  9. Modeling the impact of bubbling bed hydrodynamics on tar yield and its fluctuations during biomass fast pyrolysis

    DOE PAGES

    Xiong, Qingang; Ramirez, Emilio; Pannala, Sreekanth; ...

    2015-10-09

    The impact of bubbling bed hydrodynamics on temporal variations in the exit tar yield for biomass fast pyrolysis was investigated using computational simulations of an experimental laboratory-scale reactor. A multi-fluid computational fluid dynamics model was employed to simulate the differential conservation equations in the reactor, combined with a multi-component, multi-step pyrolysis kinetics scheme for biomass to account for chemical reactions. The predicted mean tar yields at the reactor exit appear to match corresponding experimental observations. Parametric studies predicted that increasing the fluidization velocity should improve the mean tar yield but increase its temporal variations. Increases in the mean tar yield coincide with reducing the diameter of the sand particles or increasing the initial sand bed height. However, trends in tar yield variability are more complex than the trends in mean yield: the standard deviation in tar yield reaches a maximum with changes in sand particle size, and it increases with initial bed height in the freely bubbling state but reaches a maximum in the slugging state.

  10. Towards an Empirically Based Parametric Explosion Spectral Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S R; Walter, W R; Ruppert, S

    2009-08-31

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
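For flavor, a generic omega-squared source spectrum with cube-root corner-frequency scaling is sketched below. This is a textbook-style illustration only, not the MDAC-compatible parametric model the authors are developing, and the reference corner frequency f0 is a made-up value.

```python
def source_spectrum(f, moment, f_corner):
    """Generic omega-squared source spectrum: flat at low frequency,
    falling off as f^-2 above the corner frequency."""
    return moment / (1.0 + (f / f_corner) ** 2)

def corner_frequency(yield_kt, f0=10.0):
    """Cube-root scaling: corner frequency decreases as yield^(1/3) grows.
    f0 is a hypothetical corner frequency for a 1 kt reference event."""
    return f0 * yield_kt ** (-1.0 / 3.0)
```

The qualitative point matches the abstract's goal: with a parametric spectral form, observed amplitudes at several frequencies constrain both the long-period level (size) and the corner frequency, which together inform yield estimation.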

  11. Impact of parametric uncertainty on estimation of the energy deposition into an irradiated brain tumor

    NASA Astrophysics Data System (ADS)

    Taverniers, Søren; Tartakovsky, Daniel M.

    2017-11-01

    Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
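The magnification effect described above can be reproduced with a toy Monte Carlo: push a small input coefficient of variation through a strongly nonlinear response. The exponential map below is an arbitrary stand-in for the nonlinear radiation-diffusion model, chosen only to show the amplification.

```python
import math
import random
import statistics

def coeff_var(xs):
    """Coefficient of variation: standard deviation over mean."""
    return statistics.pstdev(xs) / statistics.fmean(xs)

def propagate_cv(model, mean, cv, n=20000, seed=42):
    """Monte Carlo estimate of the output coefficient of variation when the
    input parameter is Normal(mean, cv*mean) and pushed through `model`."""
    rng = random.Random(seed)
    outs = [model(rng.gauss(mean, cv * mean)) for _ in range(n)]
    return coeff_var(outs)

# A strongly nonlinear toy response magnifies small input uncertainty:
# a 5% input CV becomes roughly a 20% output CV here.
cv_in = 0.05
cv_out = propagate_cv(lambda x: math.exp(4.0 * x), mean=1.0, cv=cv_in)
```

Stochastic collocation would estimate the same output moments with far fewer model evaluations than the 20,000 used here, which is the efficiency comparison the abstract makes.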

  12. Grain-scale modeling and splash parametrization for aeolian sand transport.

    PubMed

    Lämmel, Marc; Dzikowski, Kamil; Kroy, Klaus; Oger, Luc; Valance, Alexandre

    2017-02-01

    The collision of a spherical grain with a granular bed is commonly parametrized by the splash function, which provides the velocity of the rebounding grain and the velocity distribution and number of ejected grains. Starting from elementary geometric considerations and physical principles, like momentum conservation and energy dissipation in inelastic pair collisions, we derive a rebound parametrization for the collision of a spherical grain with a granular bed. Combined with a recently proposed energy-splitting model [Ho et al., Phys. Rev. E 85, 052301 (2012), doi:10.1103/PhysRevE.85.052301] that predicts how the impact energy is distributed among the bed grains, this yields a coarse-grained but complete characterization of the splash as a function of the impact velocity and the impactor-bed grain-size ratio. The predicted mean values of the rebound angle, total and vertical restitution, ejection speed, and number of ejected grains are in excellent agreement with experimental literature data and with our own discrete-element computer simulations. We extract a set of analytical asymptotic relations for shallow impact geometries, which can readily be used in coarse-grained analytical modeling or computer simulations of geophysical particle-laden flows.

  13. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
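A heavily simplified analogue of the idea: fit a mean model, then take the smallest band (measured in residual standard deviations) that still covers every observation. The real RPM formulation is a convex optimization over the parameter distribution with rigorous reliability bounds; the data below are made up.

```python
import statistics

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def tightest_band(xs, ys):
    """Linear mean prediction plus the smallest half-width (in residual-sd
    units) that still covers every observation: a crude stand-in for the
    optimization-based RPMs in the abstract."""
    a, b = fit_line(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    sd = statistics.pstdev(resid)
    k = max(abs(r) for r in resid) / sd   # tightest multiple covering all data
    return a, b, k, sd

xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 1.2, 1.9, 3.2, 3.8, 5.1]
a, b, k, sd = tightest_band(xs, ys)
covered = all(abs(y - (a + b * x)) <= k * sd + 1e-9 for x, y in zip(xs, ys))
```

An RPM then treats the band not as a fixed interval but as the spread of a random model parameter, which is what lets the reliability of future predictions be bounded.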

  14. Studies on the Parametric Effects of Plasma Arc Welding of 2205 Duplex Stainless Steel

    NASA Astrophysics Data System (ADS)

    Selva Bharathi, R.; Siva Shanmugam, N.; Murali Kannan, R.; Arungalai Vendan, S.

    2018-03-01

    This research study attempts to create an optimized parametric window by employing the Taguchi algorithm for plasma arc welding (PAW) of 2 mm thick 2205 duplex stainless steel. The parameters considered for experimentation and optimization are the welding current, welding speed, and pilot arc length. The experiments involve varying these parameters and recording the resulting depth of penetration and bead width. The welding current is varied over 60-70 A, the welding speed over 250-300 mm/min, and the pilot arc length over 1-2 mm. Design of experiments is used for the experimental trials. Back-propagation neural network, genetic algorithm, and Taguchi techniques are used to predict the bead width and depth of penetration, and the predictions are in good agreement with experimentally achieved results. Additionally, micro-structural characterizations are carried out to examine the weld quality. Extrapolation of these optimized parametric values yields enhanced weld strength with reduced cost and time.
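Taguchi analysis typically ranks parameter settings by a signal-to-noise ratio; for responses such as penetration depth or strength, the larger-the-better form applies. The replicate values below are invented to show the mechanics.

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10( mean(1 / y^2) )."""
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

# Hypothetical depth-of-penetration replicates (mm) for two parameter settings.
setting_a = [1.9, 2.0, 2.1]
setting_b = [1.2, 2.0, 2.8]   # same mean, more scatter -> lower S/N
```

The setting with the higher S/N is preferred because the metric rewards both a large mean response and low variability across replicates.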

  15. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
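The information-criterion branch of model averaging can be sketched in a few lines: convert each model's AIC into a weight and average the predictions. The AIC values and predicted heads below are hypothetical, not taken from the DVRFS models.

```python
import math

def akaike_weights(aics):
    """Information-criterion model weights: w_i proportional to exp(-dAIC_i/2),
    where dAIC_i is each model's AIC minus the minimum AIC."""
    d = [a - min(aics) for a in aics]
    raw = [math.exp(-x / 2.0) for x in d]
    s = sum(raw)
    return [r / s for r in raw]

def averaged_prediction(preds, weights):
    """Model-averaged prediction as the weight-sum of per-model predictions."""
    return sum(p * w for p, w in zip(preds, weights))

# Hypothetical heads (m) predicted by three alternative conceptual models.
aics = [100.0, 102.0, 110.0]
w = akaike_weights(aics)
head = averaged_prediction([730.0, 735.0, 750.0], w)
```

The between-model spread of the per-model predictions, weighted the same way, gives the conceptual-model contribution to predictive variance, which the abstract reports as dominating the parametric contribution.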

  16. Computational Fluid Dynamics simulation of hydrothermal liquefaction of microalgae in a continuous plug-flow reactor.

    PubMed

    Ranganathan, Panneerselvam; Savithri, Sivaraman

    2018-06-01

    The Computational Fluid Dynamics (CFD) technique is used in this work to simulate the hydrothermal liquefaction (HTL) of Nannochloropsis sp. microalgae in a lab-scale continuous plug-flow reactor, in order to understand the fluid dynamics, heat transfer, and reaction kinetics in an HTL reactor under hydrothermal conditions. The temperature profile in the reactor and the yield of HTL products obtained from the present simulation are validated against experimental data available in the literature. Furthermore, a parametric study is carried out on the effect of slurry flow rate, reactor temperature, and external heat transfer coefficient on the yield of products. Though the model predictions are satisfactory in comparison with the experimental results, the model still needs to be improved for better prediction of the product yields. This improved model will serve as a baseline for the design and scale-up of a large-scale HTL reactor. Copyright © 2018 Elsevier Ltd. All rights reserved.
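The reaction-kinetics piece of such a model can be caricatured with a single first-order conversion integrated along the reactor. This idealized sketch ignores hydrodynamics and heat transfer entirely and uses invented rate parameters, unlike the multi-step HTL kinetics in the paper.

```python
import math

def arrhenius(a_factor, ea_j_mol, temp_k, r_gas=8.314):
    """Temperature-dependent first-order rate constant (1/s)."""
    return a_factor * math.exp(-ea_j_mol / (r_gas * temp_k))

def plug_flow_yield(k, residence_time, steps=1000):
    """Euler integration of dC/dt = -k*C along an ideal plug-flow reactor;
    returns the converted fraction 1 - C/C0."""
    c, dt = 1.0, residence_time / steps
    for _ in range(steps):
        c -= k * c * dt
    return 1.0 - c

# Invented rate: k = 0.05 1/s over a 30 s residence time.
y = plug_flow_yield(0.05, 30.0)   # close to the analytic 1 - exp(-k*t)
```

The parametric levers in the abstract map directly onto this sketch: slurry flow rate sets the residence time, and reactor temperature sets k through the Arrhenius law.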

  17. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (e.g., PAR, PARMA), along with disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) non-parametric models (e.g., bootstrap/kernel-based methods such as k-nearest neighbor (k-NN) and matched block bootstrap (MABB), and non-parametric disaggregation), which characterize the laws of chance describing the streamflow process without prior assumptions as to the form or structure of those laws; and (iii) hybrid models, which blend parametric and non-parametric models advantageously to model streamflows effectively. Despite these developments over the last four decades, accurate prediction of storage and critical drought characteristics has remained a persistent challenge for the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS)), and the efficacy of the models is subsequently validated on the accuracy with which they predict water-use characteristics, which requires a large number of trial simulations and inspection of many plots and tables. Even then, accurate prediction of storage and critical drought characteristics may not be ensured.
    In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, PAR(1), and the matched block bootstrap (MABB)) based on explicit objective functions: minimizing the relative bias and the relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space, simultaneously exploring the parametric (PAR(1)) and non-parametric (MABB) components. This is achieved using an efficient evolutionary search-based optimization tool, the non-dominated sorting genetic algorithm II (NSGA-II). The approach reduces the drudgery involved in manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model (with simultaneous exploration of the parametric and non-parametric components) yields a much better prediction of storage capacity than MLE-based hybrid models, in which model selection is done in two stages and thus probably results in a sub-optimal model. The framework can be extended to different linear/non-linear hybrid stochastic models at other temporal and spatial scales.
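The parametric component of such a hybrid, a PAR(1) model, is simple to write down: each season's flow regresses on the previous season's deviation from its mean. The seasonal means, autocorrelations, and standard deviations below are invented, not the Beaver or Weber estimates.

```python
import random

def simulate_par1(means, phis, sigmas, n_years, seed=7):
    """Generate multi-season flows from a periodic AR(1):
    x[s] = mu[s] + phi[s] * (x_prev - mu_prev) + sigma[s] * eps."""
    rng = random.Random(seed)
    n_seasons = len(means)
    flows = []
    prev, prev_mu = means[-1], means[-1]   # start at the seasonal mean
    for _ in range(n_years):
        year = []
        for s in range(n_seasons):
            x = means[s] + phis[s] * (prev - prev_mu) + sigmas[s] * rng.gauss(0.0, 1.0)
            year.append(x)
            prev, prev_mu = x, means[s]
        flows.append(year)
    return flows

means = [100.0, 250.0, 180.0, 60.0]    # hypothetical seasonal mean flows
phis = [0.4, 0.3, 0.5, 0.6]            # hypothetical seasonal lag-1 dependence
sigmas = [15.0, 40.0, 25.0, 10.0]      # hypothetical seasonal noise levels
flows = simulate_par1(means, phis, sigmas, n_years=500)
```

In the hybrid of the abstract, blocks of such parametric simulations are combined with matched-block-bootstrap resamples of the historical record, with the blend tuned by NSGA-II against the storage-capacity objectives.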

  18. The influence of vegetation height heterogeneity on forest and woodland bird species richness across the United States.

    PubMed

    Huang, Qiongyu; Swatantran, Anu; Dubayah, Ralph; Goetz, Scott J

    2014-01-01

    Avian diversity is under increasing pressure. It is thus critical to understand the ecological variables that contribute to the large-scale spatial distribution of avian species diversity. Traditionally, studies have relied primarily on two-dimensional habitat structure to model broad-scale species richness. Vegetation vertical structure is increasingly used at local scales. However, the spatial arrangement of vegetation height has never been taken into consideration. Our goal was to examine the efficacy of three-dimensional forest structure, particularly the spatial heterogeneity of vegetation height, in improving avian richness models across forested ecoregions in the U.S. We developed novel habitat metrics to characterize the spatial arrangement of vegetation height using the National Biomass and Carbon Dataset for the year 2000 (NBCD). The height-structured metrics were compared with other habitat metrics for statistical association with the richness of three forest breeding bird guilds across Breeding Bird Survey (BBS) routes: a broadly grouped woodland guild, and two forest breeding guilds with preferences for forest edge and for interior forest. Parametric and non-parametric models were built to examine the improvement in predictability. Height-structured metrics had the strongest associations with species richness, yielding improved predictive ability for the woodland guild richness models (r² ≈ 0.53 for the parametric models, 0.63 for the non-parametric models) and the forest edge guild models (r² ≈ 0.34 for the parametric models, 0.47 for the non-parametric models). All but one of the linear models incorporating height-structured metrics showed significantly higher adjusted r² values than their counterparts without the additional metrics. The interior forest guild richness showed a consistently low association with height-structured metrics.
Our results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of forest bird species. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness.
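One simple way to quantify the spatial arrangement of vegetation height is the standard deviation of canopy height within a moving window. This is only a stand-in for the paper's height-structured metrics, which we have not reproduced exactly, and the height grids are invented.

```python
import statistics

def height_heterogeneity(grid, radius=1):
    """Local vertical-structure metric: standard deviation of canopy heights
    in a (2*radius+1)^2 window around each cell, clipped at the grid edges."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            window = [grid[r][c]
                      for r in range(max(0, i - radius), min(rows, i + radius + 1))
                      for c in range(max(0, j - radius), min(cols, j + radius + 1))]
            row.append(statistics.pstdev(window))
        out.append(row)
    return out

# A uniform 20 m canopy versus a patchy 5/30 m checkerboard canopy.
uniform = [[20.0] * 4 for _ in range(4)]
patchy = [[5.0, 30.0, 5.0, 30.0],
          [30.0, 5.0, 30.0, 5.0],
          [5.0, 30.0, 5.0, 30.0],
          [30.0, 5.0, 30.0, 5.0]]
```

The two grids have similar mean canopy height but very different heterogeneity, which is exactly the information the abstract argues is missed by canopy height alone.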

  19. The Influence of Vegetation Height Heterogeneity on Forest and Woodland Bird Species Richness across the United States

    PubMed Central

    Huang, Qiongyu; Swatantran, Anu; Dubayah, Ralph; Goetz, Scott J.

    2014-01-01

    Avian diversity is under increasing pressure. It is thus critical to understand the ecological variables that contribute to the large-scale spatial distribution of avian species diversity. Traditionally, studies have relied primarily on two-dimensional habitat structure to model broad-scale species richness. Vegetation vertical structure is increasingly used at local scales. However, the spatial arrangement of vegetation height has never been taken into consideration. Our goal was to examine the efficacy of three-dimensional forest structure, particularly the spatial heterogeneity of vegetation height, in improving avian richness models across forested ecoregions in the U.S. We developed novel habitat metrics to characterize the spatial arrangement of vegetation height using the National Biomass and Carbon Dataset for the year 2000 (NBCD). The height-structured metrics were compared with other habitat metrics for statistical association with the richness of three forest breeding bird guilds across Breeding Bird Survey (BBS) routes: a broadly grouped woodland guild, and two forest breeding guilds with preferences for forest edge and for interior forest. Parametric and non-parametric models were built to examine the improvement in predictability. Height-structured metrics had the strongest associations with species richness, yielding improved predictive ability for the woodland guild richness models (r² ≈ 0.53 for the parametric models, 0.63 for the non-parametric models) and the forest edge guild models (r² ≈ 0.34 for the parametric models, 0.47 for the non-parametric models). All but one of the linear models incorporating height-structured metrics showed significantly higher adjusted r² values than their counterparts without the additional metrics. The interior forest guild richness showed a consistently low association with height-structured metrics.
Our results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of forest bird species. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness. PMID:25101782

  20. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs of various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such, they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are within a fixed number of standard deviations of the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation will fall within the predicted ranges, is rigorously bounded.
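    The simplest member of this family, a polynomial mean with a constant-width band that must contain every observation, reduces to a linear program. A minimal sketch on synthetic data (this is a simplified interval predictor, not the authors' full mean/variance/range formulation):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 + 0.5 * x + 0.2 * rng.standard_normal(40)

A = np.vander(x, 3, increasing=True)   # columns: 1, x, x^2
n, p = A.shape

# Variables: p polynomial coefficients c, then the band half-width w.
# Minimize w subject to |y - A c| <= w (every observation inside the band).
cost = np.zeros(p + 1)
cost[-1] = 1.0
A_ub = np.vstack([
    np.hstack([A, -np.ones((n, 1))]),    #  A c - w <= y
    np.hstack([-A, -np.ones((n, 1))]),   # -A c - w <= -y
])
b_ub = np.concatenate([y, -y])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * p + [(0, None)])
coeffs, half_width = res.x[:p], res.x[-1]
```

    The optimal band is the tightest constant-width envelope of the data; prescribing parameter ranges or moments, as in the paper, adds further linear constraints of the same flavor.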

  1. Non-covalent interactions across organic and biological subsets of chemical space: Physics-based potentials parametrized from machine learning

    NASA Astrophysics Data System (ADS)

    Bereau, Tristan; DiStasio, Robert A.; Tkatchenko, Alexandre; von Lilienfeld, O. Anatole

    2018-06-01

    Classical intermolecular potentials typically require an extensive parametrization procedure for any new compound considered. To do away with prior parametrization, we propose a combination of physics-based potentials with machine learning (ML), coined IPML, which is transferable across small neutral organic and biologically relevant molecules. ML models provide on-the-fly predictions for environment-dependent local atomic properties: electrostatic multipole coefficients (with significantly reduced errors compared to previously reported values), the population and decay rate of valence atomic densities, and polarizabilities across conformations and chemical compositions of H, C, N, and O atoms. These parameters enable accurate calculations of intermolecular contributions—electrostatics, charge penetration, repulsion, induction/polarization, and many-body dispersion. Unlike other potentials, this model is transferable in its ability to handle new molecules and conformations without explicit prior parametrization: All local atomic properties are predicted from ML, leaving only eight global parameters—optimized once and for all across compounds. We validate IPML on various gas-phase dimers at and away from equilibrium separation, where we obtain mean absolute errors between 0.4 and 0.7 kcal/mol for several chemically and conformationally diverse datasets representative of non-covalent interactions in biologically relevant molecules. We further focus on hydrogen-bonded complexes—essential but challenging due to their directional nature—where datasets of DNA base pairs and amino acids yield an extremely encouraging 1.4 kcal/mol error. Finally, and as a first look, we consider IPML for denser systems: water clusters, supramolecular host-guest complexes, and the benzene crystal.

  2. A Backward-Lagrangian-Stochastic Footprint Model for the Urban Environment

    NASA Astrophysics Data System (ADS)

    Wang, Chenghao; Wang, Zhi-Hua; Yang, Jiachuan; Li, Qi

    2018-02-01

    Built terrains, with their complexity in morphology, high heterogeneity, and anthropogenic impact, impose substantial challenges in Earth-system modelling. In particular, estimation of the source areas and footprints of atmospheric measurements in cities requires realistic representation of the landscape characteristics and flow physics in urban areas, but has hitherto been heavily reliant on large-eddy simulations. In this study, we developed physical parametrization schemes for estimating urban footprints based on the backward-Lagrangian-stochastic algorithm, with the built environment represented by street canyons. The vertical profile of mean streamwise velocity is parametrized for the urban canopy and boundary layer. Flux footprints estimated by the proposed model show reasonable agreement with analytical predictions over flat surfaces without roughness elements, and with experimental observations over sparse plant canopies. Furthermore, comparisons of canyon flow and turbulence profiles and the subsequent footprints were made between the proposed model and large-eddy simulation data. The results suggest that the parametrized canyon wind and turbulence statistics, based on the simple similarity theory used, need to be further improved to yield more realistic urban footprint modelling.
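    A common building block of such parametrizations is a piecewise mean-wind profile: exponential attenuation inside the canopy, logarithmic above it. A hedged sketch with illustrative parameter values (canopy height h, displacement d, roughness z0, attenuation a, friction velocity u_star; these are generic textbook forms, not the paper's fitted profiles):

```python
import numpy as np

def mean_wind_profile(z, h=20.0, d=14.0, z0=1.0, u_star=0.4, a=2.0, k=0.4):
    """Piecewise mean streamwise wind speed u(z): exponential inside the
    canopy (z <= h), logarithmic in the surface layer above, matched at
    the canopy top so the profile is continuous."""
    z = np.asarray(z, float)
    u_h = (u_star / k) * np.log((h - d) / z0)   # wind speed at canopy top
    out = np.empty_like(z)
    low = z <= h
    out[low] = u_h * np.exp(a * (z[low] / h - 1.0))
    out[~low] = (u_star / k) * np.log((z[~low] - d) / z0)
    return out
```

    A backward-Lagrangian-stochastic footprint model would release particles at the sensor and advect them upwind through such a profile, so the footprint shape is directly sensitive to these canopy parameters.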

  3. Parametrization and calibration of a quasi-analytical algorithm for tropical eutrophic waters

    NASA Astrophysics Data System (ADS)

    Watanabe, Fernanda; Mishra, Deepak R.; Astuti, Ike; Rodrigues, Thanan; Alcântara, Enner; Imai, Nilton N.; Barbosa, Cláudio

    2016-11-01

    The quasi-analytical algorithm (QAA) was designed to derive the inherent optical properties (IOPs) of water bodies from above-surface remote sensing reflectance (Rrs). Several variants of QAA have been developed for environments with different bio-optical characteristics. However, most variants of QAA suffer from moderate to high rates of negative IOP predictions when applied to tropical eutrophic waters. This research aimed at parametrizing a QAA for tropical eutrophic water dominated by cyanobacteria. The alterations proposed in the algorithm yielded accurate absorption coefficients and chlorophyll-a (Chl-a) concentrations. The main changes were the selection of wavelengths representative of the optically relevant constituents (ORCs) and the calibration of values directly associated with the absorption coefficients of pigments and of detritus plus colored dissolved organic material (CDM). The re-parametrized QAA eliminated the retrieval of negative values, commonly identified in other variants of QAA. The calibrated model generated a normalized root mean square error (NRMSE) of 21.88% and a mean absolute percentage error (MAPE) of 28.27% for at(λ), where the largest errors were found at 412 nm and 620 nm. The estimated NRMSE for aCDM(λ) was 18.86% with a MAPE of 31.17%. A NRMSE of 22.94% and a MAPE of 60.08% were obtained for aφ(λ). The estimated aφ(665) and aφ(709) were used to predict Chl-a concentration. aφ(665) derived from the QAA for Barra Bonita Hydroelectric Reservoir (QAA_BBHR) was able to predict Chl-a accurately, with a NRMSE of 11.3% and a MAPE of 38.5%. The performance of the Chl-a model was comparable to some of the most widely used empirical algorithms such as the 2-band, 3-band, and normalized difference chlorophyll index (NDCI) models. The new QAA was parametrized based on the band configurations of the MEdium Resolution Imaging Spectrometer (MERIS) and Sentinel-2A and -3A, and can be readily scaled up for spatio-temporal monitoring of IOPs in tropical waters.
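    The core inversion step shared by QAA variants can be sketched as follows. The g0/g1 constants and the 0.52/1.7 sub-surface conversion are the commonly published QAA values, and the total backscattering bb is taken as a given input here rather than derived from the reference-band steps of the full algorithm:

```python
import numpy as np

G0, G1 = 0.089, 0.1245   # commonly published QAA constants

def total_absorption(Rrs, bb):
    """Invert above-surface remote-sensing reflectance Rrs (sr^-1) to the
    total absorption coefficient a (m^-1), given total backscattering bb."""
    rrs = Rrs / (0.52 + 1.7 * Rrs)                              # above- to below-surface
    u = (-G0 + np.sqrt(G0**2 + 4.0 * G1 * rrs)) / (2.0 * G1)    # u = bb / (a + bb)
    return (1.0 - u) * bb / u
```

    Negative retrievals of the kind the re-parametrization eliminates arise downstream, when component absorptions are subtracted from this total at poorly chosen wavelengths.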

  4. Electrical characterization of standard and radiation-hardened RCA CDP1856D 4-BIT, CMOS, bus buffer/separator

    NASA Technical Reports Server (NTRS)

    Stokes, R. L.

    1979-01-01

    Tests performed to determine the accuracy and efficiency of bus separators used in microprocessors are presented. Functional, AC parametric, and DC parametric tests were performed on a Tektronix S-3260 automated test system. All the devices passed the functional tests and yielded nominal values in the parametric tests.

  5. Videopanorama Frame Rate Requirements Derived from Visual Discrimination of Deceleration During Simulated Aircraft Landing

    NASA Technical Reports Server (NTRS)

    Furnstenau, Norbert; Ellis, Stephen R.

    2015-01-01

    In order to determine the required visual frame rate (FR) for minimizing prediction errors with out-the-window video displays at remote/virtual airport towers, thirteen active air traffic controllers viewed high-dynamic-fidelity simulations of landing aircraft and decided whether each aircraft would stop in time to make a turnoff or whether a runway excursion would be expected. The viewing conditions and simulation dynamics replicated the visual rates and environments of transport aircraft landing at small commercial airports. The required frame rate was estimated using Bayes inference on prediction errors by linear FR extrapolation of event probabilities conditional on predictions (stop, no-stop). Furthermore, estimates were obtained from exponential model fits to the parametric and non-parametric perceptual discriminabilities d' and A (average area under ROC curves) as functions of FR. Decision errors are biased towards a preference for overshoot and appear due to an illusory increase in perceived speed at low frame rates. Both the Bayes and A extrapolations yield a frame-rate requirement of 35 < FRmin < 40 Hz. When compared with published results [12] on shooter game scores, the model-based d'(FR) extrapolation exhibits the best agreement and indicates an even higher FRmin > 40 Hz for minimizing decision errors. Definitive recommendations require further experiments with FR > 30 Hz.
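    For reference, the parametric index d' is the difference of z-transformed hit and false-alarm rates, and a common single-point approximation to the ROC area is A' (the study's A averages areas under full ROC curves; A' below is a stand-in, and the rates in the test are made-up numbers):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection d': z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def a_prime(hit_rate, fa_rate):
    """Pollack & Norman non-parametric A' approximation to the area
    under the ROC curve (valid for hit_rate >= fa_rate)."""
    h, f = hit_rate, fa_rate
    return 0.5 + ((h - f) * (1.0 + h - f)) / (4.0 * h * (1.0 - f))
```

    Fitting an exponential saturation of d'(FR) or A(FR) over frame rate, as the study does, then lets one extrapolate the FR at which discriminability stops improving.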

  6. Chain Ends and the Ultimate Tensile Strength of Polyethylene Fibers

    NASA Astrophysics Data System (ADS)

    O'Connor, Thomas C.; Robbins, Mark O.

    Determining the tensile yield mechanisms of oriented polymer fibers remains a challenging problem in polymer mechanics. By maximizing the alignment and crystallinity of polyethylene (PE) fibers, tensile strengths σ ~ 6-7 GPa have been achieved. While impressive, first-principles calculations predict carbon backbone bonds would allow strengths four times higher (σ ~ 20 GPa) before breaking. The reduction in strength is caused by crystal defects like chain ends, which allow fibers to yield by chain slip in addition to bond breaking. We use large-scale molecular dynamics (MD) simulations to determine the tensile yield mechanism of orthorhombic PE crystals with finite chains spanning 10^2-10^4 carbons in length. The yield stress σy saturates for long chains at ~6.3 GPa, agreeing well with experiments. Chains do not break but always yield by slip, after nucleation of 1D dislocations at chain ends. Dislocations are accurately described by a Frenkel-Kontorova model, parametrized by the mechanical properties of an ideal crystal. We compute a dislocation core size ξ = 25.24 Å and determine the high and low strain rate limits of σy. Our results suggest characterizing such 1D dislocations is an efficient method for predicting fiber strength. This research was performed within the Center for Materials in Extreme Dynamic Environments (CMEDE) under the Hopkins Extreme Materials Institute at Johns Hopkins University. Financial support was provided by Grant W911NF-12-2-0022.
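    The continuum limit of the Frenkel-Kontorova chain mentioned above admits a sine-Gordon soliton describing such a 1D dislocation. A sketch under an illustrative spring stiffness K, substrate amplitude V, and lattice spacing a (generic values, not the paper's fitted parameters):

```python
import numpy as np

def fk_dislocation_profile(x, K=10.0, V=0.5, a=1.0):
    """Sine-Gordon soliton of the continuum Frenkel-Kontorova chain with
    energy sum K/2 (u_{n+1} - u_n)^2 + V/2 (1 - cos(2 pi u_n / a)) and
    lattice spacing a: the displacement u(x) slips by one spacing across
    a dislocation core of width xi."""
    xi = (a * a / np.pi) * np.sqrt(K / (2.0 * V))   # core width
    u = (2.0 * a / np.pi) * np.arctan(np.exp(np.asarray(x, float) / xi))
    return u, xi
```

    The core width grows with the stiffness-to-substrate ratio, which is why parametrizing K and V from the ideal crystal's mechanical properties fixes the dislocation structure, and hence the yield stress, without further fitting.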

  7. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrell, William C.; Birkel, Garrett W.; Forrer, Mark

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high-quality data to be parametrized and tested, which are not generally available. Here, we present the Experiment Data Depot (EDD), an online tool designed as a repository of experimental data and metadata. EDD provides a convenient way to upload a variety of data types, visualize these data, and export them in a standardized fashion for use with predictive algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  8. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization

    DOE PAGES

    Morrell, William C.; Birkel, Garrett W.; Forrer, Mark; ...

    2017-08-21

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high-quality data to be parametrized and tested, which are not generally available. Here, we present the Experiment Data Depot (EDD), an online tool designed as a repository of experimental data and metadata. EDD provides a convenient way to upload a variety of data types, visualize these data, and export them in a standardized fashion for use with predictive algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  9. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization.

    PubMed

    Morrell, William C; Birkel, Garrett W; Forrer, Mark; Lopez, Teresa; Backman, Tyler W H; Dussault, Michael; Petzold, Christopher J; Baidoo, Edward E K; Costello, Zak; Ando, David; Alonso-Gutierrez, Jorge; George, Kevin W; Mukhopadhyay, Aindrila; Vaino, Ian; Keasling, Jay D; Adams, Paul D; Hillson, Nathan J; Garcia Martin, Hector

    2017-12-15

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high-quality data to be parametrized and tested, which are not generally available. Here, we present the Experiment Data Depot (EDD), an online tool designed as a repository of experimental data and metadata. EDD provides a convenient way to upload a variety of data types, visualize these data, and export them in a standardized fashion for use with predictive algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  10. Evolution of the characteristics of Parametric X-ray Radiation from textured polycrystals under different observation angles

    NASA Astrophysics Data System (ADS)

    Alekseev, V. I.; Eliseyev, A. N.; Irribarra, E.; Kishin, I. A.; Klyuev, A. S.; Kubankin, A. S.; Nazhmudinov, R. M.; Zhukova, P. N.

    2018-02-01

    The Parametric X-ray Radiation (PXR) spectra and the dependence of the yield on the orientation angle were measured during the interaction of 7 MeV electrons with a textured polycrystalline tungsten foil at different observation angles. The increase in PXR spectral density and the broadening of the PXR yield orientation dependence in the backward direction are shown experimentally for the first time. The experimental results are compared with PXR kinematical theories for both mosaic crystals and polycrystals.

  11. A Parametric Rosetta Energy Function Analysis with LK Peptides on SAM Surfaces.

    PubMed

    Lubin, Joseph H; Pacella, Michael S; Gray, Jeffrey J

    2018-05-08

    Although structures have been determined for many soluble proteins and an increasing number of membrane proteins, experimental structure determination methods are limited for complexes of proteins and solid surfaces. An economical alternative or complement to experimental structure determination is molecular simulation. Rosetta is one software suite that models protein-surface interactions, but Rosetta is normally benchmarked on soluble proteins. For surface interactions, the validity of the energy function is uncertain because it is a combination of independent parameters from energy functions developed separately for solution proteins and mineral surfaces. Here, we assess the performance of the RosettaSurface algorithm and test the accuracy of its energy function by modeling the adsorption of leucine/lysine (LK)-repeat peptides on methyl- and carboxy-terminated self-assembled monolayers (SAMs). We investigated how RosettaSurface predictions for this system compare with the experimental results, which showed that on both surfaces, LK-α peptides folded into helices and LK-β peptides held extended structures. Utilizing this model system, we performed a parametric analysis of Rosetta's Talaris energy function and determined that adjusting solvation parameters offered improved predictive accuracy. Simultaneously increasing lysine carbon hydrophilicity and the hydrophobicity of the surface methyl head groups yielded computational predictions most closely matching the experimental results. De novo models still should be interpreted skeptically unless bolstered in an integrative approach with experimental data.

  12. An improved numerical procedure for the parametric optimization of three dimensional scramjet nozzles. [supersonic combustion ramjet engines - computer programs

    NASA Technical Reports Server (NTRS)

    Dash, S.; Delguidice, P. D.

    1975-01-01

    A parametric numerical procedure permitting the rapid determination of the performance of a class of scramjet nozzle configurations is presented. The geometric complexity of these configurations ruled out attempts to employ conventional nozzle design procedures. The numerical program developed permitted the parametric variation of cowl length, turning angles on the cowl and vehicle undersurface and lateral expansion, and was subject to fixed constraints such as the vehicle length and nozzle exit height. The program required uniform initial conditions at the burner exit station and yielded the location of all predominant wave zones, accounting for lateral expansion effects. In addition, the program yielded the detailed pressure distribution on the cowl, vehicle undersurface and fences, if any, and calculated the nozzle thrust, lift and pitching moments.

  13. Curvature, metric and parametrization of origami tessellations: theory and application to the eggbox pattern.

    PubMed

    Nassar, H; Lebée, A; Monasse, L

    2017-01-01

    Origami tessellations are particular textured morphing shell structures. Their unique folding and unfolding mechanisms on a local scale aggregate and bring on large changes in shape, curvature and elongation on a global scale. The existence of these global deformation modes allows for origami tessellations to fit non-trivial surfaces thus inspiring applications across a wide range of domains including structural engineering, architectural design and aerospace engineering. The present paper suggests a homogenization-type two-scale asymptotic method which, combined with standard tools from differential geometry of surfaces, yields a macroscopic continuous characterization of the global deformation modes of origami tessellations and other similar periodic pin-jointed trusses. The outcome of the method is a set of nonlinear differential equations governing the parametrization, metric and curvature of surfaces that the initially discrete structure can fit. The theory is presented through a case study of a fairly generic example: the eggbox pattern. The proposed continuous model predicts correctly the existence of various fittings that are subsequently constructed and illustrated.

  14. Curvature, metric and parametrization of origami tessellations: theory and application to the eggbox pattern

    NASA Astrophysics Data System (ADS)

    Nassar, H.; Lebée, A.; Monasse, L.

    2017-01-01

    Origami tessellations are particular textured morphing shell structures. Their unique folding and unfolding mechanisms on a local scale aggregate and bring on large changes in shape, curvature and elongation on a global scale. The existence of these global deformation modes allows for origami tessellations to fit non-trivial surfaces thus inspiring applications across a wide range of domains including structural engineering, architectural design and aerospace engineering. The present paper suggests a homogenization-type two-scale asymptotic method which, combined with standard tools from differential geometry of surfaces, yields a macroscopic continuous characterization of the global deformation modes of origami tessellations and other similar periodic pin-jointed trusses. The outcome of the method is a set of nonlinear differential equations governing the parametrization, metric and curvature of surfaces that the initially discrete structure can fit. The theory is presented through a case study of a fairly generic example: the eggbox pattern. The proposed continuous model predicts correctly the existence of various fittings that are subsequently constructed and illustrated.

  15. Arguments for fundamental emission by the parametric process L yields T + S in interplanetary type III bursts. [langmuir, electromagnetic, ion acoustic waves (L, T, S)]

    NASA Technical Reports Server (NTRS)

    Cairns, I. H.

    1984-01-01

    Observations of low-frequency ion acoustic-like waves associated with Langmuir waves present during interplanetary Type III bursts are used to study plasma emission mechanisms and wave processes involving ion acoustic waves. It is shown that the observed wave frequency characteristics are consistent with the processes L yields T + S (where L = Langmuir waves, T = electromagnetic waves, S = ion acoustic waves) and L yields L' + S proceeding. The usual incoherent (random phase) version of the process L yields T + S cannot explain the observed wave production time scale. The clumpy nature of the observed Langmuir waves is vital to the theory of interplanetary Type III bursts. The incoherent process L yields T + S may encounter difficulties in explaining the observed Type III brightness temperatures when Langmuir wave clumps are incorporated into the theory. The parametric process L yields T + S may be the important emission process for the fundamental radiation of interplanetary Type III bursts.
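    The decay L yields T + S must satisfy the three-wave resonance conditions ω_L = ω_T + ω_S and k_L = k_T + k_S; because the transverse wave sits just above the plasma frequency, its wavenumber is tiny and the ion acoustic wave carries nearly all of k_L. A numeric check with illustrative solar-wind parameters (not values from the observations):

```python
import numpy as np

f_pe = 20e3     # electron plasma frequency, Hz (illustrative)
v_e = 1.5e6     # electron thermal speed, m/s
c_s = 3.0e4     # ion sound speed, m/s
c = 3.0e8       # speed of light, m/s

def f_langmuir(k):
    # Bohm-Gross dispersion for Langmuir (L) waves
    return np.sqrt(f_pe**2 + 3.0 * (k * v_e / (2.0 * np.pi))**2)

def f_ion_acoustic(k):
    # ion acoustic (S) wave dispersion
    return k * c_s / (2.0 * np.pi)

k_L = 4.2e-3                      # Langmuir wavenumber, 1/m
f_L = f_langmuir(k_L)
f_S = f_ion_acoustic(k_L)         # k_S ~ k_L since k_T is tiny
f_T = f_L - f_S                   # frequency matching: f_L = f_T + f_S
k_T = 2.0 * np.pi * np.sqrt(f_T**2 - f_pe**2) / c   # electromagnetic (T) branch
```

    The check confirms that the emitted electromagnetic wave propagates (f_T > f_pe) and that k_T is orders of magnitude below k_L, consistent with fundamental emission near the plasma frequency.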

  16. Parametric and Nonparametric Statistical Methods for Genomic Selection of Traits with Additive and Epistatic Genetic Architectures

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2014-01-01

    Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including the Nadaraya-Watson estimator, reproducing kernel Hilbert space regression, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., the proportion of phenotypic variability explained by the genetic architecture, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
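    The contrast between a linear parametric method and an RKHS-style kernel method can be sketched on simulated marker data. Ridge regression stands in for BLUP-type shrinkage; the Gaussian kernel bandwidth and the penalty are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 500
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # marker genotype codes
beta = 0.1 * rng.standard_normal(p)            # simulated additive effects
y = X @ beta + rng.standard_normal(n)          # phenotypes with noise

lam = 1.0

# Parametric: ridge regression on marker effects (closed form)
b_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Non-parametric: RKHS regression with a Gaussian kernel on marker distances
s = (X**2).sum(axis=1)
D = np.maximum(s[:, None] + s[None, :] - 2.0 * X @ X.T, 0.0)
K = np.exp(-D / D.mean())                      # bandwidth = mean squared distance
alpha = np.linalg.solve(K + lam * np.eye(n), y)
yhat_rkhs = K @ alpha
```

    With purely additive simulated effects, as here, the linear model suffices; the kernel method's advantage appears only when y is generated with epistatic (product) terms the linear predictor cannot represent.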

  17. Feature selection and classification of multiparametric medical images using bagging and SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Resnick, Susan M.; Davatzikos, Christos

    2008-03-01

    This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.

  18. The SAMPL5 challenge for embedded-cluster integral equation theory: solvation free energies, aqueous pKa, and cyclohexane-water log D

    NASA Astrophysics Data System (ADS)

    Tielker, Nicolas; Tomazic, Daniel; Heil, Jochen; Kloss, Thomas; Ehrhart, Sebastian; Güssregen, Stefan; Schmidt, K. Friedemann; Kast, Stefan M.

    2016-11-01

    We predict cyclohexane-water distribution coefficients (log D7.4) for drug-like molecules taken from the SAMPL5 blind prediction challenge by the "embedded cluster reference interaction site model" (EC-RISM) integral equation theory. This task involves the coupled problem of predicting both partition coefficients (log P) of neutral species between the solvents and aqueous acidity constants (pKa) in order to account for a change of protonation states. The first issue is addressed by calibrating an EC-RISM-based model for solvation free energies derived from the "Minnesota Solvation Database" (MNSOL) for both water and cyclohexane utilizing a correction based on the partial molar volume, yielding a root mean square error (RMSE) of 2.4 kcal mol-1 for water and 0.8-0.9 kcal mol-1 for cyclohexane depending on the parametrization. The second is treated by employing, on the one hand, an empirical pKa model (MoKa) and, on the other hand, an EC-RISM-derived regression of published acidity constants (RMSE of 1.5 for a single model covering acids and bases). In total, at most 8 adjustable parameters are necessary (2-3 for each solvent and two for the pKa) for training the solvation and acidity models. Applying the final models to the log D7.4 dataset corresponds to evaluating an independent test set comprising other, composite observables, yielding RMSEs, for different cyclohexane parametrizations, of 2.0-2.1 with the first and 2.2-2.8 with the combined first and second SAMPL5 data set batches. Notably, a pure log P model (assuming neutral species only) performs statistically similarly for these particular compounds. The nature of the approximations and possible perspectives for future developments are discussed.
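    The protonation-state correction that links log P, pKa, and log D at pH 7.4 is the standard Henderson-Hasselbalch relation, assuming only the neutral species partitions into the organic phase. A minimal sketch for a monoprotic compound (function and argument names are illustrative):

```python
import math

def log_d(log_p, pka, ph=7.4, acidic=True):
    """Distribution coefficient from the neutral-species partition
    coefficient: log D = log P - log10(1 + 10**delta), where delta is
    (pH - pKa) for a monoprotic acid and (pKa - pH) for a base."""
    delta = (ph - pka) if acidic else (pka - ph)
    return log_p - math.log10(1.0 + 10.0 ** delta)
```

    At pH = pKa exactly half the compound is ionized, so log D sits log10(2) below log P; far from the pKa on the neutral side, log D approaches log P, which is why a pure log P model can perform similarly for largely neutral compound sets.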

  19. The SAMPL5 challenge for embedded-cluster integral equation theory: solvation free energies, aqueous pKa, and cyclohexane-water log D.

    PubMed

    Tielker, Nicolas; Tomazic, Daniel; Heil, Jochen; Kloss, Thomas; Ehrhart, Sebastian; Güssregen, Stefan; Schmidt, K Friedemann; Kast, Stefan M

    2016-11-01

    We predict cyclohexane-water distribution coefficients (log D7.4) for drug-like molecules taken from the SAMPL5 blind prediction challenge by the "embedded cluster reference interaction site model" (EC-RISM) integral equation theory. This task involves the coupled problem of predicting both partition coefficients (log P) of neutral species between the solvents and aqueous acidity constants (pKa) in order to account for a change of protonation states. The first issue is addressed by calibrating an EC-RISM-based model for solvation free energies derived from the "Minnesota Solvation Database" (MNSOL) for both water and cyclohexane utilizing a correction based on the partial molar volume, yielding a root mean square error (RMSE) of 2.4 kcal mol-1 for water and 0.8-0.9 kcal mol-1 for cyclohexane depending on the parametrization. The second is treated by employing, on the one hand, an empirical pKa model (MoKa) and, on the other hand, an EC-RISM-derived regression of published acidity constants (RMSE of 1.5 for a single model covering acids and bases). In total, at most 8 adjustable parameters are necessary (2-3 for each solvent and two for the pKa) for training the solvation and acidity models. Applying the final models to the log D7.4 dataset corresponds to evaluating an independent test set comprising other, composite observables, yielding RMSEs, for different cyclohexane parametrizations, of 2.0-2.1 with the first and 2.2-2.8 with the combined first and second SAMPL5 data set batches. Notably, a pure log P model (assuming neutral species only) performs statistically similarly for these particular compounds. The nature of the approximations and possible perspectives for future developments are discussed.

  20. Parametric correlation functions to model the structure of permanent environmental (co)variances in milk yield random regression models.

    PubMed

    Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G

    2009-09-01

    The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
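    The Legendre covariates used for the fixed lactation curve and the random regressions are generated by mapping days in milk onto [-1, 1]; a sketch (the DIM range and the normalization are the conventions commonly used in test-day models, not necessarily this study's exact choices):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, order, dim_min=5.0, dim_max=310.0):
    """Normalized Legendre polynomial covariates phi_0..phi_order
    evaluated at days in milk (DIM) mapped onto [-1, 1]."""
    t = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    cols = []
    for k in range(order + 1):
        coef = np.zeros(k + 1)
        coef[k] = 1.0                           # select P_k
        norm = np.sqrt((2.0 * k + 1.0) / 2.0)   # orthonormal scaling
        cols.append(norm * legendre.legval(t, coef))
    return np.column_stack(cols)
```

    Each row of the resulting matrix multiplies an animal's random regression coefficients, so a sixth-order additive fit as in the abstract would use order=6 here.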

  1. An Acoustic Charge Transport Imager for High Definition Television Applications: Reliability Modeling and Parametric Yield Prediction of GaAs Multiple Quantum Well Avalanche Photodiodes. Degree awarded Oct. 1997

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, K. F.; Summers, C. J.; Yun, Ilgu

    1994-01-01

    Reliability modeling and parametric yield prediction of GaAs/AlGaAs multiple quantum well (MQW) avalanche photodiodes (APDs), which are of interest as an ultra-low-noise image capture mechanism for high definition systems, have been investigated. First, the effect of various doping methods on the reliability of GaAs/AlGaAs MQW APD structures fabricated by molecular beam epitaxy is investigated. Reliability is examined by accelerated life tests, monitoring dark current and breakdown voltage. Median device lifetime and the activation energy of the degradation mechanism are computed for undoped, doped-barrier, and doped-well APD structures. Lifetimes for each device structure are examined via a statistically designed experiment. Analysis of variance shows that dark current is affected primarily by device diameter, temperature, and stressing time, while breakdown voltage depends on diameter, stressing time, and APD type. It is concluded that the undoped APD has the highest reliability, followed by the doped-well and doped-barrier devices, respectively. To determine the source of the degradation mechanism for each device structure, failure analysis using the electron-beam-induced current method is performed. This analysis reveals some degree of device degradation caused by ionic impurities in the passivation layer, and energy-dispersive spectrometry subsequently verified the presence of ionic sodium as the primary contaminant. However, since all device structures are similarly passivated, sodium contamination alone does not account for the observed variation between the differently doped APDs. This effect is explained by dopant migration during stressing, which is verified by free-carrier concentration measurements using the capacitance-voltage technique.

  2. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    Stock price changes are among the topics of greatest interest to investors: long-term investors are sensitive to stock prices and react to their changes. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric smoothing-splines technique to predict stock prices. MARS is a nonparametric, adaptive regression method well suited to high-dimensional problems with many variables; smoothing splines are likewise a nonparametric regression method. We used 40 variables (30 accounting and 10 economic) to predict stock prices with both approaches. With the MARS model, 4 accounting variables (book value per share, predicted earnings per share, P/E ratio, and risk) were selected as influential for predicting stock prices. After fitting the semi-parametric splines, only 4 accounting variables (dividends, net EPS, EPS forecast, and P/E ratio) were selected as effective in forecasting stock prices.

  3. Outcome of temporal lobe epilepsy surgery predicted by statistical parametric PET imaging.

    PubMed

    Wong, C Y; Geller, E B; Chen, E Q; MacIntyre, W J; Morris, H H; Raja, S; Saha, G B; Lüders, H O; Cook, S A; Go, R T

    1996-07-01

    PET is useful in the presurgical evaluation of temporal lobe epilepsy. The purpose of this retrospective study was to assess the clinical use of statistical parametric imaging in predicting surgical outcome. Interictal 18FDG-PET scans in 17 patients with surgically treated temporal lobe epilepsy (group A: 13 seizure-free; group B: 4 not seizure-free at 6 mo) were transformed into statistical parametric images, in which each pixel represents a z-score computed from the mean and s.d. of the count distribution in each individual patient, for both visual and quantitative analysis. Mean z-scores were significantly more negative in the anterolateral (AL) and mesial (M) regions on the operated side than on the nonoperated side in group A (AL: p < 0.00005, M: p = 0.0097), but not in group B (AL: p = 0.46, M: p = 0.08). Statistical parametric imaging correctly lateralized 16 of 17 patients. Only the AL region, however, was significant in predicting surgical outcome (F = 29.03, p < 0.00005). Using a cut-off z-score of -1.5, statistical parametric imaging correctly classified 92% of temporal lobes from group A and 88% of those from group B. These preliminary results indicate that statistical parametric imaging provides both clinically useful information for lateralization in temporal lobe epilepsy and a reliable predictive indicator of clinical outcome following surgical treatment.
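
    The z-score transformation described above is simple to sketch. This assumes the z-scores are computed from the subject's own whole-image count mean and s.d., and the helper names are hypothetical:

    ```python
    import numpy as np

    def z_map(counts):
        """Convert a PET count image into a statistical parametric (z-score) image,
        using the mean and s.d. of the subject's own count distribution."""
        img = np.asarray(counts, dtype=float)
        return (img - img.mean()) / img.std()

    def hypometabolic(z_region, cutoff=-1.5):
        """Flag a region (e.g. anterolateral temporal lobe) as hypometabolic
        at the study's cut-off of z = -1.5."""
        return float(np.mean(z_region)) <= cutoff
    ```

    Regional mean z-scores from such maps are what the study compares between operated and nonoperated sides.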

  4. Housing price prediction: parametric versus semi-parametric spatial hedonic models

    NASA Astrophysics Data System (ADS)

    Montero, José-María; Mínguez, Román; Fernández-Avilés, Gema

    2018-01-01

    House price prediction is a hot topic in the economic literature, and has traditionally been approached using a-spatial linear (or intrinsically linear) hedonic models. It has been shown, however, that spatial effects are inherent in house pricing. This article considers parametric and semi-parametric spatial hedonic model variants that account for spatial autocorrelation, spatial heterogeneity, and (smooth, nonparametrically specified) nonlinearities using penalized splines methodology. The models are represented as a mixed model that allows for the estimation of the smoothing parameters along with the other parameters of the model. To assess the out-of-sample performance of the models, the paper uses a database containing the price and characteristics of 10,512 homes in Madrid, Spain (Q1 2010). The results suggest that nonlinear models accounting for spatial heterogeneity and flexible nonlinear relationships between some of the individual or areal characteristics of the houses and their prices are the best strategies for house price prediction.

  5. Enhanced multi-protocol analysis via intelligent supervised embedding (EMPrAvISE): detecting prostate cancer on multi-parametric MRI

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Bloch, B. Nicholas; Chappelow, Jonathan; Patel, Pratik; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant

    2011-03-01

    Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE): a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR), thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on twelve 3-Tesla pre-operative in vivo multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging (MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space and then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence.
Per-voxel evaluation of detection results against ground truth for CaP extent on MRI (obtained by spatially registering pre-operative MRI with available whole-mount histological specimens) reveals that EMPrAvISE yields a statistically significant improvement (AUC=0.77) over classifiers constructed from individual protocols (AUC=0.62, 0.62, 0.65, for T2w, DCE, DWI respectively) as well as one trained using multi-parametric feature concatenation (AUC=0.67).

  6. Impact of state updating and multi-parametric ensemble for streamflow hindcasting in European river basins

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.

    2015-12-01

    Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may improve streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations in specifying modeling uncertainty with limited ensemble members. However, if parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which can deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to account for the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To account for the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will focus on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.

  7. Regularized linearization for quantum nonlinear optical cavities: application to degenerate optical parametric oscillators.

    PubMed

    Navarrete-Benlloch, Carlos; Roldán, Eugenio; Chang, Yue; Shi, Tao

    2014-10-06

    Nonlinear optical cavities are crucial both in classical and quantum optics; in particular, nowadays optical parametric oscillators are one of the most versatile and tunable sources of coherent light, as well as the sources of the highest quality quantum-correlated light in the continuous variable regime. Being nonlinear systems, they can be driven through critical points in which a solution ceases to exist in favour of a new one, and it is close to these points where quantum correlations are the strongest. The simplest description of such systems consists in writing the quantum fields as the classical part plus some quantum fluctuations, linearizing then the dynamical equations with respect to the latter; however, such an approach breaks down close to critical points, where it provides unphysical predictions such as infinite photon numbers. On the other hand, techniques going beyond the simple linear description become too complicated especially regarding the evaluation of two-time correlators, which are of major importance to compute observables outside the cavity. In this article we provide a regularized linear description of nonlinear cavities, that is, a linearization procedure yielding physical results, taking the degenerate optical parametric oscillator as the guiding example. The method, which we call self-consistent linearization, is shown to be equivalent to a general Gaussian ansatz for the state of the system, and we compare its predictions with those obtained with available exact (or quasi-exact) methods. Apart from its operational value, we believe that our work is valuable also from a fundamental point of view, especially in connection to the question of how far linearized or Gaussian theories can be pushed to describe nonlinear dissipative systems which have access to non-Gaussian states.

  8. Investigation of the photon statistics of parametric fluorescence in a traveling-wave parametric amplifier by means of self-homodyne tomography.

    PubMed

    Vasilyev, M; Choi, S K; Kumar, P; D'Ariano, G M

    1998-09-01

    Photon-number distributions for parametric fluorescence from a nondegenerate optical parametric amplifier are measured with a novel self-homodyne technique. These distributions exhibit the thermal-state character predicted by theory. However, a difference between the fluorescence gain and the signal gain of the parametric amplifier is observed. We attribute this difference to a change in the signal-beam profile during the traveling-wave pulsed amplification process.

  9. AucPR: an AUC-based approach using penalized regression for disease prediction with high-dimensional omics data.

    PubMed

    Yu, Wenbao; Park, Taesung

    2014-01-01

    It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing work on AUC in a high-dimensional context depends mainly on non-parametric, smooth approximations of the AUC, with no parametric AUC-based approach available for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), a parametric method for obtaining a linear combination of markers that maximizes the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, used in low-dimensional settings, into a regression framework and apply penalized regression directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach avoids some difficulties of conventional non-parametric AUC-based approaches, such as the lack of an appropriate concave objective function and the need for a prudent choice of smoothing parameter. We apply the proposed AucPR to gene selection and classification using four real microarray data sets and synthetic data. Through numerical studies, AucPR is shown to perform better than penalized logistic regression and the non-parametric AUC-based method, in the sense of AUC and sensitivity at a given specificity, particularly when there are many correlated genes. We propose AucPR as a powerful and easily implementable parametric linear classifier for gene selection and disease prediction with high-dimensional data, recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
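
    The quantity being maximized can be illustrated with the empirical (non-parametric) AUC: the fraction of correctly ranked (case, control) score pairs, counting ties as one half. This sketch shows only the target criterion, not the penalized-regression maximizer itself, and the function name is illustrative:

    ```python
    import numpy as np

    def pairwise_auc(scores, labels):
        """Empirical AUC: fraction of (case, control) pairs whose scores are
        ranked correctly, with ties counted as 1/2."""
        s = np.asarray(scores, dtype=float)
        y = np.asarray(labels)
        cases, controls = s[y == 1], s[y == 0]
        # All pairwise score differences between cases and controls.
        diff = cases[:, None] - controls[None, :]
        return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size
    ```

    A linear combination of markers is then sought whose composite score makes this quantity as large as possible.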

  10. Predictive data-based exposition of 5s5p ¹,³P₁ lifetimes in the Cd isoelectronic sequence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, L. J.; Matulioniene, R.; Ellis, D. G.

    2000-11-01

    Experimental and theoretical values for the lifetimes of the 5s5p ¹P₁ and ³P₁ levels in the Cd isoelectronic sequence are examined in the context of a data-based isoelectronic systematization. Lifetime and energy-level data are combined to account for the effects of intermediate coupling, thereby reducing the data to a regular and slowly varying parametric mapping. This empirically characterizes small contributions due to the spin-other-orbit interaction, spin dependences of the radial wave functions, and configuration interaction, and yields accurate interpolative and extrapolative predictions. Multiconfiguration Dirac-Hartree-Fock calculations are used to verify the regularity of these trends and to examine the extent to which they can be extrapolated to high nuclear charge.

  11. Diffeomorphic demons: efficient non-parametric image registration.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2009-03-01

    We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians.

  12. Instability behaviour of cosmic gravito-coupled correlative complex bi-fluidic admixture

    NASA Astrophysics Data System (ADS)

    Das, Papari; Karmakar, Pralay Kumar

    2017-10-01

    The gravitational instability of an unbounded infinitely extended composite gravitating cloud system composed of gravito-coupled neutral gaseous fluid (NGF) and dark matter fluid (DMF) is theoretically investigated in a classical framework. It is based on a spatially-flat geometry approximation (1D, sheet-like, boundless) at the backdrop that the radius of curvature of the gravito-confined bi-fluidic-boundary is much larger than all the hydro-characteristic scale lengths of interest. The relevant collective correlative dynamics, via the lowest-order mnemonic viscoelasticity, is mooted. We apply a standard formalism of normal mode analysis to yield a unique brand of generalized quadratic dispersion relation having variable multi-parametric coefficients dependent on the diversified equilibrium properties. It is parametrically seen that the DMF flow speed and the DMF viscoelasticity introduce stabilizing effects against the composite cloud collapse. The instability physiognomies, as specialized extreme corollaries, are in good accord with the previously reported predictions. The analysis may be widely useful to see the gravito-thermally coupled wave dynamics leading to the formation of large-scale hierarchical non-homologous structures in dark-matter-dominated dwarf galaxies.

  13. Degradation of Leakage Currents and Reliability Prediction for Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2016-01-01

    Two types of failures in solid tantalum capacitors, catastrophic and parametric, and their mechanisms are described. Analysis of voltage and temperature reliability acceleration factors reported in the literature shows a wide spread of results and calls for further investigation. In this work, leakage currents in two types of chip tantalum capacitors were monitored during highly accelerated life testing (HALT) at different temperatures and voltages. Distributions of degradation rates were approximated using a general log-linear Weibull model and yielded voltage acceleration constants B = 9.8 +/- 0.5 and 5.5. The activation energies were Ea = 1.65 eV and 1.42 eV. The model allows for conservative estimations of times to failure and was validated by long-term life test data. Parametric degradation and failures are reversible and can be annealed at high temperatures. The process is attributed to migration of charged oxygen vacancies that reduce the barrier height at the MnO2/Ta2O5 interface and increase injection of electrons from the MnO2 cathode. Analysis showed that the activation energy of the vacancies' migration is 1.1 eV.
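
    A minimal sketch of how such constants translate into an acceleration factor between stress and use conditions. The temperature term is the standard Arrhenius form with the reported Ea; the exponential-in-normalized-voltage term is an assumed common form, not necessarily the paper's exact log-linear specification, and all names and default values here are illustrative:

    ```python
    import math

    K_BOLTZ_EV = 8.617e-5  # Boltzmann constant, eV/K

    def acceleration_factor(t_use_c, t_stress_c, v_use, v_stress, v_rated,
                            ea=1.65, b=9.8):
        """Combined temperature and voltage acceleration factor.

        Arrhenius temperature term with activation energy ea (eV);
        exponential voltage term in normalized voltage with constant b
        (an assumed form for illustration only).
        """
        tu, ts = t_use_c + 273.15, t_stress_c + 273.15
        af_temp = math.exp(ea / K_BOLTZ_EV * (1.0 / tu - 1.0 / ts))
        af_volt = math.exp(b * (v_stress - v_use) / v_rated)
        return af_temp * af_volt
    ```

    Dividing a HALT time-to-failure by this factor gives a conservative estimate of the corresponding time at use conditions, under the stated model assumptions.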

  14. Fourth standard model family neutrino at future linear colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciftci, A.K.; Ciftci, R.; Sultansoy, S.

    2005-09-01

    It is known that flavor democracy favors the existence of a fourth standard model (SM) family. In order to give nonzero masses to the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking is proposed which gives the correct values for the fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data. The pair production of the fourth SM family Dirac (ν₄) and Majorana (N₁) neutrinos at future linear colliders with √s = 500 GeV, 1 TeV, and 3 TeV is considered. The cross section for the process e⁺e⁻ → ν₄ν₄ (N₁N₁) and the branching ratios for the possible decay modes of both neutrinos are determined. The decays of the fourth-family neutrinos into muon channels (ν₄(N₁) → μ±W∓) provide the cleanest signature at e⁺e⁻ colliders; in our parametrization this channel is dominant. W bosons produced in decays of the fourth-family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV fourth-family neutrinos at √s = 500 GeV linear colliders, taking di-muon plus four-jet events as signatures.

  15. Illiquidity premium and expected stock returns in the UK: A new approach

    NASA Astrophysics Data System (ADS)

    Chen, Jiaqi; Sherif, Mohamed

    2016-09-01

    This study examines the relative importance of liquidity risk for the time-series and cross-section of stock returns in the UK. We propose a simple way to capture the multidimensionality of illiquidity. Our analysis indicates that existing illiquidity measures have considerable asset-specific components, which justifies our new approach. Further, we use an alternative test of the Amihud (2002) measure, together with parametric and non-parametric methods, to investigate whether liquidity risk is priced in the UK. We find that the inclusion of the illiquidity factor in the capital asset pricing model plays a significant role in explaining the cross-sectional variation in stock returns, in particular with the Fama-French three-factor model. Further, using Hansen-Jagannathan non-parametric bounds, we find that the illiquidity-augmented capital asset pricing models yield a small distance error, whereas other non-liquidity-based models fail to yield economically plausible distance values. Our findings have important implications for managing the liquidity risk of equity portfolios.
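
    The Amihud (2002) measure referenced above has a simple closed form: the average ratio of absolute daily return to dollar trading volume over the estimation window. A minimal sketch (function name illustrative):

    ```python
    import numpy as np

    def amihud_illiq(returns, dollar_volume):
        """Amihud (2002) illiquidity: mean of |daily return| / dollar volume,
        over days with nonzero trading volume."""
        r = np.abs(np.asarray(returns, dtype=float))
        v = np.asarray(dollar_volume, dtype=float)
        valid = v > 0  # skip zero-volume days to avoid division by zero
        return float(np.mean(r[valid] / v[valid]))
    ```

    Cross-sectional studies such as the one above typically average this per-stock measure over months before forming illiquidity-sorted portfolios; only the per-stock computation is shown here.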

  16. Widely tunable single photon source with high purity at telecom wavelength.

    PubMed

    Jin, Rui-Bo; Shimizu, Ryosuke; Wakui, Kentaro; Benichi, Hugo; Sasaki, Masahide

    2013-05-06

    We theoretically and experimentally investigate the spectral tunability and purity of photon pairs generated by spontaneous parametric down-conversion in a periodically poled KTiOPO4 crystal under a group-velocity-matching condition. Numerical simulation predicts that the spectral purity can be kept higher than 0.81 when the wavelength is tuned from 1460 nm to 1675 nm, which covers the S-, C-, L-, and U-bands in telecommunication wavelengths. We also experimentally measured the joint spectral intensity at 1565 nm, 1584 nm and 1565 nm, yielding Schmidt numbers of 1.01, 1.02 and 1.04, respectively. Such a photon source is useful for quantum information and communication systems.
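
    The Schmidt number reported above can be computed from a discretized joint spectral amplitude (JSA) by singular value decomposition. This sketch takes the JSA as given; recovering it from a measured joint spectral intensity requires an additional assumption such as a flat spectral phase. The function name is illustrative:

    ```python
    import numpy as np

    def schmidt_number(jsa):
        """Schmidt number K = 1 / sum(lambda_i^2), where the lambda_i are the
        normalized Schmidt coefficients (squared singular values of the JSA)."""
        s = np.linalg.svd(np.asarray(jsa, dtype=complex), compute_uv=False)
        lam = s**2 / np.sum(s**2)
        return float(1.0 / np.sum(lam**2))
    ```

    Spectral purity is the inverse of K, so the measured K of 1.01-1.04 corresponds to purities of roughly 0.96-0.99; a perfectly separable (factorable) JSA gives K = 1.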

  17. The adaptive nature of eye movements in linguistic tasks: how payoff and architecture shape speed-accuracy trade-offs.

    PubMed

    Lewis, Richard L; Shvartsman, Michael; Singh, Satinder

    2013-07-01

    We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to, and found to accord with, eye-movement data obtained from human participants performing the same task under the same payoffs, but they accord less well when the assumptions concerning payoff optimization and processing architecture are varied. These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture. Copyright © 2013 Cognitive Science Society, Inc.

  18. Nonperturbative renormalization group study of the stochastic Navier-Stokes equation.

    PubMed

    Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2012-07-01

    We study the renormalization group flow of the average action of the stochastic Navier-Stokes equation with power-law forcing. Using Galilean invariance, we introduce a nonperturbative approximation adapted to the zero-frequency sector of the theory in the parametric range of the Hölder exponent 4-2ε of the forcing where real-space local interactions are relevant. In any spatial dimension d, we observe the convergence of the resulting renormalization group flow to a unique fixed point which yields a kinetic energy spectrum scaling in agreement with canonical dimension analysis. Kolmogorov's -5/3 law is, thus, recovered for ε = 2 as also predicted by perturbative renormalization. At variance with the perturbative prediction, the -5/3 law emerges in the presence of a saturation in the ε dependence of the scaling dimension of the eddy diffusivity at ε = 3/2 when, according to perturbative renormalization, the velocity field becomes infrared relevant.

  19. Development of a Polarizable Force Field for Molecular Dynamics Simulations of Poly (Ethylene Oxide) in Aqueous Solution.

    PubMed

    Starovoytov, Oleg N; Borodin, Oleg; Bedrov, Dmitry; Smith, Grant D

    2011-06-14

    We have developed a quantum chemistry-based polarizable potential for poly(ethylene oxide) (PEO) in aqueous solution based on the APPLE&P polarizable ether and the SWM4-DP polarizable water models. Ether-water interactions were parametrized to reproduce the binding energy of water with 1,2-dimethoxyethane (DME) determined from high-level quantum chemistry calculations. Simulations of DME-water and PEO-water solutions at room temperature using the new polarizable potentials yielded thermodynamic properties in good agreement with experimental results. The predicted miscibility of PEO and water as a function of temperature was found to be strongly correlated with the predicted free energy of solvation of DME. The developed nonbonded force field parameters were found to be transferable to poly(propylene oxide) (PPO), as confirmed by capturing, at least qualitatively, the miscibility of PPO in water as a function of molecular weight.

  20. Optimal Design of Experiments by Combining Coarse and Fine Measurements

    NASA Astrophysics Data System (ADS)

    Lee, Alpha A.; Brenner, Michael P.; Colwell, Lucy J.

    2017-11-01

    In many contexts, it is extremely costly to perform enough high-quality experimental measurements to accurately parametrize a predictive quantitative model. However, it is often much easier to carry out large numbers of experiments that indicate whether each sample is above or below a given threshold. Can many such categorical or "coarse" measurements be combined with a much smaller number of high-resolution or "fine" measurements to yield accurate models? Here, we demonstrate an intuitive strategy, inspired by statistical physics, wherein the coarse measurements are used to identify the salient features of the data, while the fine measurements determine the relative importance of these features. A linear model is inferred from the fine measurements, augmented by a quadratic term that captures the correlation structure of the coarse data. We illustrate our strategy by considering the problems of predicting the antimalarial potency and aqueous solubility of small organic molecules from their 2D molecular structure.
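
    One plausible toy reading of the strategy above, in which cheap threshold ("coarse") measurements reveal which features co-occur in good samples while a few expensive ("fine") measurements fix the linear weights. All names and the synthetic data are illustrative assumptions, and the quadratic-coupling construction is only a sketch of the idea, not the authors' implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic ground truth: the response depends on two of ten binary features.
    n_coarse, n_fine, d = 2000, 20, 10
    w_true = np.zeros(d)
    w_true[0], w_true[3] = 1.0, -0.8
    X = rng.integers(0, 2, size=(n_coarse, d)).astype(float)
    y = X @ w_true + 0.05 * rng.normal(size=n_coarse)

    # Coarse step: many cheap above/below-threshold labels; the correlation
    # structure of the above-threshold samples flags the salient features.
    above = X[y > np.median(y)]
    C = np.corrcoef(above, rowvar=False)  # pairwise feature correlations

    # Fine step: a small number of high-resolution measurements fixes the
    # linear weights, here by ordinary least squares on a random subset.
    idx = rng.choice(n_coarse, size=n_fine, replace=False)
    w_fit, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

    def predict(x, w, C, alpha=0.0):
        """Linear term from fine data plus an optional quadratic term built
        from the coarse correlation structure (strength alpha)."""
        return float(x @ w + alpha * x @ C @ x)
    ```

    With alpha = 0 this reduces to the fine-data-only linear model; the quadratic term is one way to let the coarse correlations reweight feature combinations.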

  1. TU-CD-BRB-09: Prediction of Chemo-Radiation Outcome for Rectal Cancer Based On Radiomics of Tumor Clinical Characteristics and Multi-Parametric MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, K; Yue, N; Shi, L

    2015-06-15

    Purpose: To evaluate tumor clinical characteristics and quantitative multi-parametric MR imaging features for prediction of response to chemo-radiation treatment (CRT) in locally advanced rectal cancer (LARC). Methods: Forty-three consecutive patients (59.7±6.9 years, from 09/2013 to 06/2014) receiving neoadjuvant CRT followed by surgery were enrolled. All underwent MRI, including anatomical T1/T2, dynamic contrast-enhanced (DCE)-MRI, and diffusion-weighted MRI (DWI), prior to treatment. A total of 151 quantitative features, including morphology and Gray Level Co-occurrence Matrix (GLCM) texture from T1/T2, enhancement kinetics and the voxelized distribution from DCE-MRI, and apparent diffusion coefficient (ADC) from DWI, along with clinical information (carcinoembryonic antigen (CEA) level, TNM staging, etc.), were extracted for each patient. Response groups were separated based on down-staging, good response, and pathological complete response (pCR) status. Logistic regression analysis (LRA) was used to select the best predictors for classifying the groups, and predictive performance was calculated using receiver operating characteristic (ROC) analysis. Results: Any individual imaging category or clinical characteristic might yield a certain level of power in assessing response, but the combined model outperformed any single category in prediction. With Volume, GLCM AutoCorrelation (T2), MaxEnhancementProbability (DCE-MRI), and MeanADC (DWI) as selected features, the down-staging prediction accuracy (area under the ROC curve, AUC) reached 0.95, better than individual tumor metrics with AUCs from 0.53 to 0.85. For pCR prediction, the best feature set included CEA (clinical characteristics), Homogeneity (DCE-MRI), and MeanADC (DWI), with an AUC of 0.89, more favorable than conventional tumor metrics with AUCs ranging from 0.511 to 0.79.
Conclusion: Through a systematic analysis of multi-parametric MR imaging features, we are able to build models with improved predictive value over conventional imaging or clinical metrics. This is encouraging and suggests that the wealth of radiomic imaging features should be further explored to help tailor treatment in the era of personalized medicine. This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), the National High Technology Research and Development Program of China (863 program, Grant No. 2015AA020917), and the Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.

  2. Enhanced force sensitivity and noise squeezing in an electromechanical resonator coupled to a nanotransistor

    NASA Astrophysics Data System (ADS)

    Mahboob, I.; Flurin, E.; Nishiguchi, K.; Fujiwara, A.; Yamaguchi, H.

    2010-12-01

    A nanofield-effect transistor (nano-FET) is coupled to a massive piezoelectricity-based electromechanical resonator integrated with a parametric amplifier. The mechanical parametric amplifier can enhance the resonator's displacement, and the resulting electrical signal is further amplified by the nano-FET. This hybrid amplification scheme yields an increase in the mechanical displacement signal by 70 dB, resulting in a force sensitivity of 200 aN Hz-1/2 at 3 K. The mechanical parametric amplifier can also squeeze the displacement noise in one oscillation phase by 5 dB, enabling a factor of 4 reduction in the thermomechanical noise force level.

  3. A question of separation: disentangling tracer bias and gravitational non-linearity with counts-in-cells statistics

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Feix, M.; Codis, S.; Pichon, C.; Bernardeau, F.; L'Huillier, B.; Kim, J.; Hong, S. E.; Laigle, C.; Park, C.; Shin, J.; Pogosyan, D.

    2018-02-01

    Starting from a very accurate model for density-in-cells statistics of dark matter based on large deviation theory, a bias model for the tracer density in spheres is formulated. It adopts a mean bias relation based on a quadratic bias model to relate the log-densities of dark matter to those of mass-weighted dark haloes in real and redshift space. The validity of the parametrized bias model is established using a parametrization-independent extraction of the bias function. This average bias model is then combined with the dark matter PDF, neglecting any scatter around it: it nevertheless yields an excellent model for densities-in-cells statistics of mass tracers that is parametrized in terms of the underlying dark matter variance and three bias parameters. The procedure is validated on measurements of both the one- and two-point statistics of subhalo densities in the state-of-the-art Horizon Run 4 simulation showing excellent agreement for measured dark matter variance and bias parameters. Finally, it is demonstrated that this formalism allows for a joint estimation of the non-linear dark matter variance and the bias parameters using solely the statistics of subhaloes. Having verified that galaxy counts in hydrodynamical simulations sampled on a scale of 10 Mpc h-1 closely resemble those of subhaloes, this work provides important steps towards making theoretical predictions for density-in-cells statistics applicable to upcoming galaxy surveys like Euclid or WFIRST.

  4. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    NASA Astrophysics Data System (ADS)

    Dedes, I.; Dudek, J.

    2018-03-01

    We examine the effects of parametric correlations on the predictive capacities of theoretical modelling, keeping in mind nuclear structure applications. The main purpose of this work is to illustrate a method of establishing the presence and determining the form of parametric correlations within a model, as well as an algorithm of elimination by substitution (see text) of parametric correlations. We examine the effects of eliminating the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose to work with the phenomenological spherically symmetric Woods–Saxon mean field. We employ two variants of the underlying Hamiltonian: the traditional one involving both the central and the spin–orbit potential in the Woods–Saxon form, and a more advanced version with a self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating various types of correlations and discuss the improvement of the quality of predictions (‘predictive power’) under realistic parameter adjustment conditions.

  5. A generalized parametric response mapping method for analysis of multi-parametric imaging: A feasibility study with application to glioblastoma.

    PubMed

    Lausch, Anthony; Yeung, Timothy Pok-Chi; Chen, Jeff; Law, Elton; Wang, Yong; Urbini, Benedetta; Donelli, Filippo; Manco, Luigi; Fainardi, Enrico; Lee, Ting-Yim; Wong, Eugene

    2017-11-01

    Parametric response map (PRM) analysis of functional imaging has been shown to be an effective tool for early prediction of cancer treatment outcomes and may also be well-suited to guiding personalized adaptive radiotherapy (RT) strategies such as sub-volume boosting. However, the PRM method was primarily designed for analysis of longitudinally acquired pairs of single-parameter image data. The purpose of this study was to demonstrate the feasibility of a generalized parametric response map analysis framework, which enables analysis of multi-parametric data while maintaining the key advantages of the original PRM method. MRI-derived apparent diffusion coefficient (ADC) and relative cerebral blood volume (rCBV) maps acquired at 1 and 3 months post-RT for 19 patients with high-grade glioma were used to demonstrate the algorithm. Images were first co-registered and then standardized using normal-tissue image intensity values. Tumor voxels were then plotted in a four-dimensional Cartesian space with coordinate values equal to a voxel's image intensity in each of the image volumes and an origin defined as the multi-parametric mean of normal-tissue image intensity values. Voxel positions were orthogonally projected onto a line defined by the origin and a pre-determined response vector, and voxels were subsequently classified as positive, negative, or nil according to whether their projected positions along the response vector exceeded a threshold distance from the origin. The response vector was selected by identifying the direction in which the standard deviation of tumor image intensity values was maximally different between responding and non-responding patients within a training dataset. Voxel classifications were visualized via familiar three-class response maps, and the fraction of tumor voxels associated with each class was investigated for predictive utility, analogous to the original PRM method.
Independent PRM and MPRM analyses of the contrast-enhancing lesion (CEL) and a 1 cm shell of surrounding peri-tumoral tissue were performed. Prediction using tumor volume metrics was also investigated. Leave-one-out cross validation (LOOCV) was used in combination with permutation testing to assess preliminary predictive efficacy and estimate statistically robust P-values. The predictive endpoint was overall survival (OS) greater than or equal to the median OS of 18.2 months. Single-parameter PRM and multi-parametric response maps (MPRMs) were generated for each patient and used to predict OS via the LOOCV. Tumor volume metrics (P ≥ 0.071 ± 0.01) and single-parameter PRM analyses (P ≥ 0.170 ± 0.01) were not found to be predictive of OS within this study. MPRM analysis of the peri-tumoral region but not the CEL was found to be predictive of OS with a classification sensitivity, specificity and accuracy of 80%, 100%, and 89%, respectively (P = 0.001 ± 0.01). The feasibility of a generalized MPRM analysis framework was demonstrated with improved prediction of overall survival compared to the original single-parameter method when applied to a glioblastoma dataset. The proposed algorithm takes the spatial heterogeneity in multi-parametric response into consideration and enables visualization. MPRM analysis of peri-tumoral regions was shown to have predictive potential supporting further investigation of a larger glioblastoma dataset. © 2017 American Association of Physicists in Medicine.
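    The geometric core of the classification step described above (projection onto a response vector, then thresholding) can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' implementation; all names, values, and the toy data are ours.

```python
import numpy as np

# Hypothetical sketch of the MPRM projection step: voxel feature vectors (one
# standardized intensity per image volume) are projected onto a pre-determined
# response vector anchored at the normal-tissue mean, then classified as
# positive (+1), negative (-1), or nil (0) by a threshold distance.

def mprm_classify(voxels, normal_mean, response_vector, threshold):
    """voxels: (n_voxels, n_params) array of standardized intensities."""
    direction = response_vector / np.linalg.norm(response_vector)
    # Signed distance of each voxel's orthogonal projection along the response axis
    projections = (voxels - normal_mean) @ direction
    labels = np.zeros(len(voxels), dtype=int)
    labels[projections > threshold] = 1     # positive response
    labels[projections < -threshold] = -1   # negative response
    return labels

# Toy usage: 4 parameters per voxel (e.g. ADC and rCBV at two time points);
# the per-class voxel fractions are the scalar predictors, analogous to PRM.
rng = np.random.default_rng(0)
voxels = rng.normal(size=(1000, 4))
labels = mprm_classify(voxels, np.zeros(4), np.ones(4), threshold=1.0)
fractions = {k: float(np.mean(labels == k)) for k in (1, -1, 0)}
```

    The per-class fractions then play the role of the original PRM's scalar summary, while the response-vector direction carries the multi-parametric information.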

  6. Numerical prediction of 3-D ejector flows

    NASA Technical Reports Server (NTRS)

    Roberts, D. W.; Paynter, G. C.

    1979-01-01

    The use of parametric flow analysis, rather than parametric scale testing, to support the design of an ejector system offers a number of potential advantages. The application of available 3-D flow analyses to the design of ejectors can be subdivided into several key elements: numerics, turbulence modeling, data handling and display, and testing in support of analysis development. Experimental and predicted jet exhaust data for the Boeing 727 aircraft are examined.

  7. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivastava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
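    A minimal residual-bootstrap sketch of the idea can be given with a Nadaraya-Watson kernel smoother standing in for the non-parametric regressor. The paper's exact bootstrap scheme and regression model may differ; everything named here is our illustrative assumption.

```python
import numpy as np

# Residual bootstrap for prediction intervals around a non-parametric fit.
# No distributional assumption is made about the noise: new-observation
# variability is approximated by resampling the empirical residuals.

def kernel_smooth(x_train, y_train, x, bandwidth=0.3):
    # Gaussian-kernel local average of y_train evaluated at points x
    w = np.exp(-0.5 * ((x[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def bootstrap_pi(x_train, y_train, x_new, n_boot=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    fit = kernel_smooth(x_train, y_train, x_train)
    residuals = y_train - fit
    preds = np.empty((n_boot, len(x_new)))
    for b in range(n_boot):
        # Refit on a residual-resampled training set, then add resampled
        # residual noise to mimic a new observation at x_new
        y_b = fit + rng.choice(residuals, size=len(residuals), replace=True)
        preds[b] = (kernel_smooth(x_train, y_b, x_new)
                    + rng.choice(residuals, size=len(x_new), replace=True))
    return (np.quantile(preds, alpha / 2, axis=0),
            np.quantile(preds, 1 - alpha / 2, axis=0))
```

    An observed output falling outside the returned interval at its input would then be flagged as anomalous, conditioned on that input, which is how such intervals feed the anomaly-detection application.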

  8. An advanced constitutive model in the sheet metal forming simulation: the Teodosiu microstructural model and the Cazacu Barlat yield criterion

    NASA Astrophysics Data System (ADS)

    Alves, J. L.; Oliveira, M. C.; Menezes, L. F.

    2004-06-01

    Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming processes are studied: a recently proposed advanced constitutive model based on the Teodosiu microstructural model and the Cazacu Barlat yield criterion is compared with a more classical one, based on the Swift law and the Hill 1948 yield criterion. These constitutive models are implemented in DD3IMP, an in-house finite element code specifically developed to simulate sheet metal forming processes: a 3-D elastoplastic finite element code with an updated Lagrangian formulation and a fully implicit time integration scheme, accounting for large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts the final shape (mean height and ear profile) of the formed part more accurately, as the comparison with the experimental results shows.

  9. Streamflow hindcasting in European river basins via multi-parametric ensemble of the mesoscale hydrologic model (mHM)

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    There have been tremendous improvements in distributed hydrologic modeling (DHM), which have made process-based simulation with high spatiotemporal resolution applicable on large spatial scales. Despite increasing information on the heterogeneous properties of a catchment, DHM is still subject to uncertainties arising from model structure, parameters and input forcing. Sequential data assimilation (DA) may improve streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations in specifying modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method, incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm), can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach that incorporates the parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated into DA. Lagged particle filtering is used to account for the response times and non-Gaussian characteristics of internal hydrologic processes.
The hindcasting experiments are implemented to evaluate the impact of the proposed DA method on streamflow predictions in multiple European river basins with different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach remains stable with limited ensemble members and is viable for practical use.
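    As background, the sequential-importance-resampling backbone that lagged particle filtering extends can be sketched as follows. The state model, observation model, and noise scales here are toy stand-ins, not mHM's hydrologic processes.

```python
import numpy as np

# Bootstrap particle filter (sequential importance resampling) on a toy
# random-walk state with Gaussian observation noise. A lagged variant, as in
# the study above, would additionally weight against observations over a
# window of past time steps; that extension is omitted here.

def particle_filter(observations, n_particles=500, proc_sd=0.5, obs_sd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate each particle through the (toy) state model
        particles = particles + rng.normal(0.0, proc_sd, n_particles)
        # Weight by the Gaussian observation likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Resample to avoid weight degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

    The resampling step is what keeps the ensemble informative with a limited number of members, the same practical concern the abstract raises for high-dimensional distributed models.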

  10. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and this assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and the Bayesian methods with a Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions from Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with a Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance.
The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  11. Increasing crop diversity mitigates weather variations and improves yield stability.

    PubMed

    Gaudin, Amélie C M; Tolhurst, Tor N; Ker, Alan P; Janovicek, Ken; Tortora, Cristina; Martin, Ralph C; Deen, William

    2015-01-01

    Cropping sequence diversification provides a systems approach to reduce yield variations and improve resilience to multiple environmental stresses. Yield advantages of more diverse crop rotations and their synergistic effects with reduced tillage are well documented, but few studies have quantified the impact of these management practices on yields and their stability when soil moisture is limiting or in excess. Using yield and weather data obtained from a 31-year long-term rotation and tillage trial in Ontario, we tested whether crop rotation diversity is associated with greater yield stability when abnormal weather conditions occur. We used parametric and non-parametric approaches to quantify the impact of rotation diversity (monocrop, two-crop, or three-crop rotations, without or with one or two legume cover crops) and tillage (conventional or reduced) on yield probabilities and the benefits of crop diversity under different soil moisture and temperature scenarios. Although the magnitude of rotation benefits varied with crops, weather patterns and tillage, yield stability significantly increased when corn and soybean were integrated into more diverse rotations. Introducing small grains into a short corn-soybean rotation was enough to provide substantial benefits for long-term soybean yields and their stability, while the effects on corn were mostly associated with the temporal niche provided by small grains for underseeded red clover or alfalfa. Crop diversification strategies increased the probability of harnessing favorable growing conditions while decreasing the risk of crop failure. In hot and dry years, diversification of corn-soybean rotations and reduced tillage increased yield by 7% and 22% for corn and soybean, respectively.
Given the additional advantages associated with cropping system diversification, such a strategy provides a more comprehensive approach to lowering yield variability and improving the resilience of cropping systems to multiple environmental stresses. This could help to sustain future yield levels in challenging production environments.

  12. Comparison of radiation parametrizations within the HARMONIE-AROME NWP model

    NASA Astrophysics Data System (ADS)

    Rontu, Laura; Lindfors, Anders V.

    2018-05-01

    Downwelling shortwave radiation at the surface (SWDS, global solar radiation flux), given by three different parametrization schemes, was compared to observations in HARMONIE-AROME numerical weather prediction (NWP) model experiments over Finland in spring 2017. Simulated fluxes agreed well with each other and with the observations in clear-sky cases. In cloudy-sky conditions, all schemes tended to underestimate SWDS at the daily level compared to the measurements. Large local and temporal differences between the model results and observations were seen, related to the variations and uncertainty of the predicted cloud properties. The results suggest that using different radiative transfer parametrizations in an NWP model could provide perturbations for fine-resolution ensemble prediction systems. In addition, we recommend using global radiation observations for the standard validation of NWP models.

  13. Parametric study for the optimization of ionic liquid pretreatment of corn stover

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papa, Gabriella; Feldman, Taya; Sale, Kenneth L.

    A parametric study of the efficacy of ionic liquid (IL) pretreatment (PT) of corn stover (CS) using 1-ethyl-3-methylimidazolium acetate ([C2C1Im][OAc]) and cholinium lysinate ([Ch][Lys]) was conducted. The impact of 50% and 15% biomass loading for milled and non-milled CS on IL-PT was evaluated, as well as the impact of 20 and 5 mg enzyme/g glucan on saccharification efficiency. Glucose and xylose release was measured across 32 conditions: 2 ionic liquids (ILs), 2 temperatures, 2 particle sizes (S), 2 solid loadings, and 2 enzyme loadings. Statistical analysis indicates that sugar yields were correlated with lignin and xylan removal and depended on these factors, although particle size did not explain variation in sugar yields. Both ILs were effective in pretreating large-particle-size CS without compromising sugar yields. The knowledge from material and energy balances is an essential step in directing the optimization of sugar recovery at desirable process conditions.

  14. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography.

    PubMed

    Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-06-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues while maintaining the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that random field theory is a valid approach for analyzing EIT images of neural activity.
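    The non-parametric validation route mentioned above, a permutation test with a max-statistic correction for the multiple-testing problem across voxels, can be sketched as follows. This is illustrative only (sign-flip permutations on a one-sample design with synthetic data); the study's actual design and statistics were more involved.

```python
import numpy as np

# Sign-flip permutation test with max-statistic family-wise error correction.
# Under the null, each subject's effect image is symmetric about zero, so
# randomly flipping signs generates the null distribution of the maximum
# |t|-statistic across all voxels.

def maxstat_permutation_p(images, n_perm=1000, seed=0):
    """images: (n_subjects, n_voxels) array of per-subject effect images."""
    rng = np.random.default_rng(seed)
    n = images.shape[0]
    observed = images.mean(axis=0) / (images.std(axis=0, ddof=1) / np.sqrt(n))
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))
        flipped = images * signs
        t = flipped.mean(axis=0) / (flipped.std(axis=0, ddof=1) / np.sqrt(n))
        null_max[i] = np.abs(t).max()  # max over voxels controls family-wise error
    # Corrected p-value per voxel: fraction of permutations whose max |t| exceeds it
    return (null_max[None, :] >= np.abs(observed)[:, None]).mean(axis=1)
```

    Random field theory reaches a comparable corrected threshold analytically from the smoothness of the field, which is why agreement between the two approaches serves as a validation of the parametric method.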

  15. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography

    PubMed Central

    Packham, B; Barnes, G; dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-01-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues while maintaining the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that random field theory is a valid approach for analyzing EIT images of neural activity. PMID:27203477

  16. A lymphocyte spatial distribution graph-based method for automated classification of recurrence risk on lung cancer images

    NASA Astrophysics Data System (ADS)

    García-Arteaga, Juan D.; Corredor, Germán; Wang, Xiangxue; Velcheti, Vamsidhar; Madabhushi, Anant; Romero, Eduardo

    2017-11-01

    Tumor-infiltrating lymphocytes (TILs) occur when various classes of white blood cells migrate from the blood stream toward the tumor, infiltrating it. The presence of TILs is predictive of the patient's response to therapy. In this paper, we show how the automatic detection of lymphocytes in digital H&E histopathology images, together with the quantitative evaluation of the global lymphocyte configuration through global features extracted from non-parametric graphs constructed from the lymphocytes' detected positions, can be correlated with patient outcome in early-stage non-small cell lung cancer (NSCLC). The method was assessed on a tissue microarray cohort of 63 NSCLC cases. Among the evaluated graphs, minimum spanning trees and K-nn graphs showed the highest predictive ability, yielding F1 scores of 0.75 and 0.72 and accuracies of 0.67 and 0.69, respectively. The predictive power of the proposed methodology indicates that graphs may be used to develop objective measures of the infiltration grade of tumors, which can, in turn, be used by pathologists to improve decision making and treatment planning.

  17. Multipollutant measurement error in air pollution epidemiology studies arising from predicting exposures with penalized regression splines

    PubMed Central

    Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.

    2016-01-01

    Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show that bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915

  18. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    PubMed

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

    Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species-composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  19. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    NASA Astrophysics Data System (ADS)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric coding complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality/bit-rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of bit stream scalability does not come at the price of reduced performance, since the coder is competitive with standardized coders (MP3, AAC, SSC).

  20. Measurement of the photon statistics and the noise figure of a fiber-optic parametric amplifier.

    PubMed

    Voss, Paul L; Tang, Renyong; Kumar, Prem

    2003-04-01

    We report measurement of the noise statistics of spontaneous parametric fluorescence in a fiber parametric amplifier with single-mode, single-photon resolution. We employ optical homodyne tomography for this purpose, which also provides a self-calibrating measurement of the noise figure of the amplifier. The measured photon statistics agree with quantum-mechanical predictions, and the amplifier's noise figure is found to be almost quantum limited.

  1. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and to compare its performance with the conventional indirect approach. Methods: Time activity curves of an NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of each parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly compared to the conventional CG method. PMID:24089922

  2. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach.

    PubMed

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges

    2013-10-01

    Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and to compare its performance with the conventional indirect approach. Time activity curves of an NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
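
    The abstract's preconditioner is specific to the PET kinetic model, but the PCG iteration it plugs into can be sketched generically. Below is a minimal sketch with a simple diagonal (Jacobi-style) preconditioner standing in for the authors' parameter-sensitivity ratio; all names and the test system are illustrative, not from the paper.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-9, max_iter=500):
    """Preconditioned conjugate gradient for the SPD system A x = b.

    M_inv_diag holds the diagonal of the inverse preconditioner, so
    applying M^{-1} is an elementwise multiply, as with the diagonal
    preconditioner described in the abstract.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative ill-conditioned SPD system; the diagonal preconditioner
# rescales the residual so the iteration is not dominated by the
# largest diagonal entries.
rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.normal(size=(50, 50)))[0]
A = Q @ np.diag(np.logspace(0, 3, 50)) @ Q.T   # eigenvalues 1 .. 1000
b = rng.normal(size=50)
x = pcg(A, b, 1.0 / np.diag(A))
```

    The choice of preconditioner only changes the convergence rate, not the fixed point, which is why the abstract can compare iteration counts (10 vs. 500+) at matched reconstruction quality.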

  3. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Several parametric and kernel methods, namely least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression, were then compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
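
    The kernel "trick" the abstract emphasizes can be sketched in a few lines: ridge regression written in its dual form touches the markers only through an inner-product matrix, so a Gaussian kernel can replace the linear (GBLUP-like) one without changing the solver. This is a generic sketch, not the KRMM package's API; the function names and toy genotype data are illustrative.

```python
import numpy as np

def kernel_ridge_fit_predict(X_train, y_train, X_test, kernel, lam=1.0):
    """Dual-form ridge regression: predictions depend on the markers
    only through kernel evaluations, so swapping the linear kernel
    (GBLUP) for a non-linear one captures non-additive signal with
    the same n x n solve."""
    K = kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)
    return kernel(X_test, X_train) @ alpha

def linear_kernel(A, B):
    return A @ B.T

def gaussian_kernel(A, B, h=1.0):
    # Squared Euclidean distances via broadcasting.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h**2))

# Toy "markers" coded 0/1/2: with a purely additive trait and almost
# no regularization, the linear kernel recovers the phenotype.
rng = np.random.default_rng(1)
X = rng.choice([0.0, 1.0, 2.0], size=(80, 30))
beta = rng.normal(size=30)
y = X @ beta                       # additive genetic values, no noise
yhat = kernel_ridge_fit_predict(X[:60], y[:60], X[60:], linear_kernel, lam=1e-6)
```

    Passing `gaussian_kernel` (with a tuned bandwidth `h`) instead of `linear_kernel` is the RKHS variant the study compares; the fitting code is unchanged.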

  4. High-temperature Tensile Properties and Creep Life Assessment of 25Cr35NiNb Micro-alloyed Steel

    NASA Astrophysics Data System (ADS)

    Ghatak, Amitava; Robi, P. S.

    2016-05-01

    Reformer tubes in petrochemical industries are exposed to high temperatures and gas pressure for prolonged periods. Exposure of these tubes to severe operating conditions results in changes in the microstructure and degradation of mechanical properties, which may lead to premature failure. The present work highlights the high-temperature tensile properties and remaining creep life prediction, using the Larson-Miller parametric technique, of a service-exposed 25Cr35NiNb micro-alloyed reformer tube. Young's modulus, yield strength, and ultimate tensile strength of the steel are lower than those of the virgin material and decrease with increasing temperature. Ductility continuously increases with temperature up to 1000 °C. The strain hardening exponent increases up to 600 °C, beyond which it starts decreasing. The tensile properties are discussed with reference to microstructure and fractographs. Based on the Larson-Miller technique, a creep life of at least 8.3 years is predicted for the service-exposed material at 800 °C and 5 MPa.
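
    The Larson-Miller step can be sketched briefly. The relation LMP = T(C + log10 t_r), with T in kelvin and rupture time t_r in hours, is standard; the constant C = 20 and the LMP value used below are illustrative assumptions, not parameters fitted in the paper, and the function name is mine.

```python
def rupture_time_hours(lmp, temp_c, C=20.0):
    """Invert the Larson-Miller relation LMP = T * (C + log10 t_r)
    for the rupture time t_r in hours; T is the absolute temperature
    and C ~ 20 is the conventional constant for many steels
    (an illustrative choice here)."""
    T = temp_c + 273.15
    return 10 ** (lmp / T - C)

# Illustrative numbers only: suppose the stress-LMP master curve gave
# LMP = 26500 at the service stress. The predicted life at 800 degC:
t_r = rupture_time_hours(26500, 800.0)
years = t_r / (24 * 365)
```

    Because T multiplies the whole bracket, a modest temperature increase shortens the predicted life by orders of magnitude, which is why remaining-life estimates quote both temperature and stress.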

  5. High Excitation Rydberg Levels of Fe I from the ATMOS Solar Spectrum at 2.5 and 7 microns

    NASA Technical Reports Server (NTRS)

    Schoenfeld, W. G.; Chang, E. S.; Geller, M.; Johansson, S.; Nave, G.; Sauval, A. J.; Grevesse, N.

    1995-01-01

    The quadrupole-polarization theory has been applied to the 3d⁶4s(⁶D)4f and 5g subconfigurations of Fe I by a parametric fit, and the fitted parameters are used to predict levels in the 6g and 6h subconfigurations. Using the predicted values, we have computed the 4f-6g and 5g-6h transition arrays and made identifications in the ATMOS infrared solar spectrum. The newly identified 6g and 6h levels, based on ATMOS wavenumbers, are combined with the 5g levels and found to agree with the theoretical values with a root-mean-squared deviation of 0.042 cm⁻¹. Our approach yields a polarizability of 28.07 a₀² and a quadrupole moment of 0.4360 ± 0.0010 ea₀² for Fe II, as well as an improved ionization potential of 63737.700 ± 0.010 cm⁻¹ for Fe I.

  6. PARAMETRIC DISTANCE WEIGHTING OF LANDSCAPE INFLUENCE ON STREAMS

    EPA Science Inventory

    We present a parametric model for estimating the areas within watersheds whose land use best predicts indicators of stream ecological condition. We regress a stream response variable on the distance-weighted proportion of watershed area that has a specific land use, such as agric...

  7. Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Ashe, Thomas L.; Otting, William D.

    1993-01-01

    The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance to significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.

  8. Monitoring waterbird abundance in wetlands: The importance of controlling results for variation in water depth

    USGS Publications Warehouse

    Bolduc, F.; Afton, A.D.

    2008-01-01

    Wetland use by waterbirds is highly dependent on water depth, and depth requirements generally vary among species. Furthermore, water depth within wetlands often varies greatly over time due to unpredictable hydrological events, making comparisons of waterbird abundance among wetlands difficult as effects of habitat variables and water depth are confounded. Species-specific relationships between bird abundance and water depth necessarily are non-linear; thus, we developed a methodology to correct waterbird abundance for variation in water depth, based on the non-parametric regression of these two variables. Accordingly, we used the difference between observed and predicted abundances from non-parametric regression (analogous to parametric residuals) as an estimate of bird abundance at equivalent water depths. We scaled this difference to levels of observed and predicted abundances using the formula: ((observed - predicted abundance)/(observed + predicted abundance)) × 100. This estimate also corresponds to the observed:predicted abundance ratio, which allows easy interpretation of results. We illustrated this methodology using two hypothetical species that differed in water depth and wetland preferences. Comparisons of wetlands, using both observed and relative corrected abundances, indicated that relative corrected abundance adequately separates the effect of water depth from the effect of wetlands. © 2008 Elsevier B.V.
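
    The scaling formula quoted in the abstract can be sketched directly; the function name and the example counts are illustrative, not taken from the study.

```python
def relative_corrected_abundance(observed, predicted):
    """The abstract's scaled residual:
    ((observed - predicted) / (observed + predicted)) * 100.
    It is bounded between -100 (birds absent where depth predicts
    many) and +100 (birds present where depth predicts none)."""
    return 100.0 * (observed - predicted) / (observed + predicted)

# Two hypothetical sites with the same raw count but different
# depth-predicted abundances: the index separates the wetland effect
# from the water-depth effect that confounds the raw counts.
site_a = relative_corrected_abundance(30, 10)   # more birds than depth predicts
site_b = relative_corrected_abundance(30, 60)   # fewer birds than depth predicts
```

    The bounded, symmetric form is what makes the index comparable across sites with very different absolute abundances.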

  9. Quantum theory of the far-off-resonance continuous-wave Raman laser: Heisenberg-Langevin approach

    NASA Astrophysics Data System (ADS)

    Roos, P. A.; Murphy, S. K.; Meng, L. S.; Carlsten, J. L.; Ralph, T. C.; White, A. G.; Brasseur, J. K.

    2003-07-01

    We present the quantum theory of the far-off-resonance continuous-wave Raman laser using the Heisenberg-Langevin approach. We show that the simplified quantum Langevin equations for this system are mathematically identical to those of the nondegenerate optical parametric oscillator in the time domain with the following associations: pump ↔ pump, Stokes ↔ signal, and Raman coherence ↔ idler. We derive analytical results for both the steady-state behavior and the time-dependent noise spectra, using standard linearization procedures. In the semiclassical limit, these results match with previous purely semiclassical treatments, which yield excellent agreement with experimental observations. The analytical time-dependent results predict perfect photon statistics conversion from the pump to the Stokes and nonclassical behavior under certain operational conditions.

  10. Parametric Improper Integrals, Wallis Formula and Catalan Numbers

    ERIC Educational Resources Information Center

    Dana-Picard, Thierry; Zeitoun, David G.

    2012-01-01

    We present a sequence of improper integrals, for which a closed formula can be computed using Wallis formula and a non-straightforward recurrence formula. This yields a new integral presentation for Catalan numbers.

  11. Parametric improper integrals, Wallis formula and Catalan numbers

    NASA Astrophysics Data System (ADS)

    Dana-Picard, Thierry; Zeitoun, David G.

    2012-06-01

    We present a sequence of improper integrals, for which a closed formula can be computed using Wallis formula and a non-straightforward recurrence formula. This yields a new integral presentation for Catalan numbers.
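
    The integral presentation itself is not reproduced in these abstracts, but the Catalan numbers it targets can be cross-checked against the standard closed form and convolution recurrence; this sketch assumes nothing beyond those well-known identities.

```python
from math import comb

def catalan(n):
    """Closed form C_n = binom(2n, n) / (n + 1); the division is
    always exact, so integer floor division is safe."""
    return comb(2 * n, n) // (n + 1)

# Cross-check against the defining convolution recurrence
# C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}.
cats = [catalan(n) for n in range(10)]
```

    Any candidate integral representation of the Catalan numbers can be validated numerically against this sequence.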

  12. GREENHOUSE GAS (GHG) VERIFICATION GUIDELINE SERIES: ANR Pipeline Company PARAMETRIC EMISSIONS MONITORING SYSTEM (PEMS) VERSION 1.0

    EPA Science Inventory

    The Environmental Technology Verification report discusses the technology and performance of the Parametric Emissions Monitoring System (PEMS) manufactured by ANR Pipeline Company, a subsidiary of Coastal Corporation, now El Paso Corporation. The PEMS predicts carbon dioxide (CO2...

  13. Empirical Prediction of Aircraft Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Guo, Yue-Ping

    2005-01-01

    This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.

  14. A parametric study of hard tissue injury prediction using finite elements: consideration of geometric complexity, subfailure material properties, CT-thresholding, and element characteristics.

    PubMed

    Arregui-Dalmases, Carlos; Del Pozo, Eduardo; Duprey, Sonia; Lopez-Valdes, Francisco J; Lau, Anthony; Subit, Damien; Kent, Richard

    2010-06-01

    The objectives of this study were to examine the axial response of the clavicle under quasistatic compression replicating the body boundary conditions and to quantify the sensitivity of finite element-predicted fracture in the clavicle to several parameters. Clavicles were harvested from 14 donors (age range 14-56 years). Quasistatic axial compression tests were performed using a custom rig designed to replicate in situ boundary conditions. Prior to testing, high-resolution computed tomography (CT) scans were taken of each clavicle. From those images, finite element models were constructed. Factors varied parametrically included the density used to threshold cortical bone in the CT scans, the presence of trabecular bone, the mesh density, Young's modulus, the maximum stress, and the element type (shell vs. solid, triangular vs. quadrilateral surface elements). The experiments revealed significant variability in the peak force (2.41 ± 0.72 kN) and displacement to peak force (4.9 ± 1.1 mm), with age (p < .05) and with some geometrical traits of the specimens. In the finite element models, the failure force and location were moderately dependent upon the Young's modulus. The fracture force was highly sensitive to the yield stress (80-110 MPa). Neither fracture location nor force was strongly dependent on mesh density as long as the element size was less than 5 × 5 mm². Both the fracture location and force were strongly dependent upon the threshold density used to define the thickness of the cortical shell.

  15. Methodology for balancing design and process tradeoffs for deep-subwavelength technologies

    NASA Astrophysics Data System (ADS)

    Graur, Ioana; Wagner, Tina; Ryan, Deborah; Chidambarrao, Dureseti; Kumaraswamy, Anand; Bickford, Jeanne; Styduhar, Mark; Wang, Lee

    2011-04-01

    For process development of deep-subwavelength technologies, it has become accepted practice to use model-based simulation to predict systematic and parametric failures. Increasingly, these techniques are being used by designers to ensure layout manufacturability, as an alternative to, or complement to, restrictive design rules. The benefit of model-based simulation tools in the design environment is that manufacturability problems are addressed in a design-aware way by making appropriate trade-offs, e.g., between overall chip density and manufacturing cost and yield. The paper shows how library elements and the full ASIC design flow benefit from eliminating hot spots and improving design robustness early in the design cycle. It demonstrates a path to yield optimization and first-time-right designs implemented in leading-edge technologies. The approach described herein identifies those areas in the design that could benefit from being fixed early, leading to design updates and avoiding later design churn through careful selection of design sensitivities. This paper shows how to achieve this goal by using simulation tools incorporating various models from sparse to rigorously physical, pattern detection and pattern matching, and checking and validating failure thresholds.

  16. Kernel-based whole-genome prediction of complex traits: a review.

    PubMed

    Morota, Gota; Gianola, Daniel

    2014-01-01

    Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.

  17. Order parameter description of walk-off effect on pattern selection in degenerate optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Taki, Majid; San Miguel, Maxi; Santagiustina, Marco

    2000-02-01

    Degenerate optical parametric oscillators can exhibit both uniformly translating fronts and nonuniformly translating envelope fronts under the walk-off effect. The nonlinear dynamics near threshold is shown to be described by a real convective Swift-Hohenberg equation, which provides the main characteristics of the walk-off effect on pattern selection. The predictions of the selected wave vector and the absolute instability threshold are in very good quantitative agreement with numerical solutions found from the equations describing the optical parametric oscillator.

  18. Critical insight into the influence of the potential energy surface on fission dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazurek, K.; Grand Accelerateur National d'Ions Lourds; Schmitt, C.

    The present work is dedicated to a careful investigation of the influence of the potential energy surface on the fission process. The time evolution of nuclei at high excitation energy and angular momentum is studied by means of three-dimensional Langevin calculations performed for two different parametrizations of the macroscopic potential: the Finite Range Liquid Drop Model (FRLDM) and the Lublin-Strasbourg Drop (LSD) prescription. Depending on the mass of the system, the topology of the potential throughout the deformation space of interest in fission is observed to differ noticeably within these two approaches, due to the treatment of curvature effects. When utilized in the dynamical calculation as the driving potential, the FRLDM and LSD models yield similar results in the heavy-mass region, whereas the predictions can be strongly dependent on the potential energy surface (PES) for medium-mass nuclei. In particular, the mass, charge, and total kinetic energy distributions of the fission fragments are found to be narrower with the LSD prescription. The influence of critical model parameters on our findings is carefully investigated. The present study sheds light on the experimental conditions and signatures well suited for constraining the parametrization of the macroscopic potential. Its implication regarding the interpretation of available experimental data is briefly discussed.

  19. Communication: Analytic continuation of the virial series through the critical point using parametric approximants.

    PubMed

    Barlow, Nathaniel S; Schultz, Andrew J; Weinstein, Steven J; Kofke, David A

    2015-08-21

    The mathematical structure imposed by the thermodynamic critical point motivates an approximant that synthesizes two theoretically sound equations of state: the parametric and the virial. The former is constructed to describe the critical region, incorporating all scaling laws; the latter is an expansion about zero density, developed from molecular considerations. The approximant is shown to yield an equation of state capable of accurately describing properties over a large portion of the thermodynamic parameter space, far greater than that covered by each treatment alone.

  20. 500 MW peak power degenerated optical parametric amplifier delivering 52 fs pulses at 97 kHz repetition rate.

    PubMed

    Rothhardt, J; Hädrich, S; Röser, F; Limpert, J; Tünnermann, A

    2008-06-09

    We present a high peak power degenerated parametric amplifier operating at 1030 nm and 97 kHz repetition rate. Pulses from a state-of-the-art fiber chirped-pulse amplification (FCPA) system with 840 fs pulse duration and 410 microJ pulse energy are used as pump and seed source for a two-stage optical parametric amplifier. Additional spectral broadening of the seed signal in a photonic crystal fiber creates enough bandwidth for ultrashort pulse generation. Subsequent amplification of the broadband seed signal in two 1 mm BBO crystals results in 41 microJ output pulse energy. Compression in an SF11 prism compressor yields 37 microJ pulses as short as 52 fs. Thus, pulse shortening of more than one order of magnitude is achieved. Further scaling in terms of average power and pulse energy seems possible and will be discussed, since both concepts involved, the fiber laser and the parametric amplifier, are reputed to be immune to thermo-optical effects.

  1. Parametric symmetries in exactly solvable real and PT symmetric complex potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, Rajesh Kumar, E-mail: rajeshastrophysics@gmail.com; Khare, Avinash, E-mail: khare@physics.unipune.ac.in; Bagchi, Bijan, E-mail: bbagchi123@gmail.com

    In this paper, we discuss the parametric symmetries in different exactly solvable systems characterized by real or complex PT symmetric potentials. We focus our attention on the conventional potentials such as the generalized Pöschl-Teller (GPT), Scarf-I, and PT symmetric Scarf-II, which are invariant under certain parametric transformations. The resulting set of potentials is shown to yield a completely different behavior of the bound state solutions. Further, the supersymmetric partner potentials acquire different forms under such parametric transformations, leading to new sets of exactly solvable real and PT symmetric complex potentials. These potentials are also observed to be shape invariant (SI) in nature. We subsequently take up a study of the newly discovered rationally extended SI potentials, corresponding to the above-mentioned conventional potentials, whose bound state solutions are associated with the exceptional orthogonal polynomials (EOPs). We discuss the transformations of the corresponding Casimir operator employing the properties of the so(2, 1) algebra.

  2. Test of the Chevallier-Polarski-Linder parametrization for rapid dark energy equation of state transitions

    NASA Astrophysics Data System (ADS)

    Linden, Sebastian; Virey, Jean-Marc

    2008-07-01

    We test the robustness and flexibility of the Chevallier-Polarski-Linder (CPL) parametrization of the dark energy equation of state, w(z) = w0 + wa z/(1 + z), in recovering a four-parameter steplike fiducial model. We constrain the parameter space region of the underlying fiducial model where the CPL parametrization offers a reliable reconstruction. It turns out that non-negligible biases leak into the results for recent (z < 2.5) rapid transitions, but that CPL yields a good reconstruction in all other cases. The presented analysis is performed with supernova Ia data as forecasted for a space mission like SNAP/JDEM, combined with future expectations for the cosmic microwave background shift parameter R and the baryonic acoustic oscillation parameter A.
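
    The CPL form under test is simple enough to state as a one-liner; the function name and the example parameter values below are illustrative, not the paper's fiducial model.

```python
def w_cpl(z, w0=-1.0, wa=0.0):
    """Chevallier-Polarski-Linder equation of state
    w(z) = w0 + wa * z / (1 + z)."""
    return w0 + wa * z / (1.0 + z)

# The two parameters fix the endpoints: w -> w0 today (z = 0) and
# w -> w0 + wa in the far past (z -> infinity). A steplike fiducial
# w(z) with a rapid low-z transition cannot be matched at both ends
# by this smooth two-parameter curve, which is the source of the
# reconstruction bias the abstract quantifies.
today = w_cpl(0.0, w0=-0.9, wa=0.3)
early = w_cpl(1e6, w0=-0.9, wa=0.3)
```
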

  3. Increasing Crop Diversity Mitigates Weather Variations and Improves Yield Stability

    PubMed Central

    Gaudin, Amélie C. M.; Tolhurst, Tor N.; Ker, Alan P.; Janovicek, Ken; Tortora, Cristina; Martin, Ralph C.; Deen, William

    2015-01-01

    Cropping sequence diversification provides a systems approach to reduce yield variations and improve resilience to multiple environmental stresses. Yield advantages of more diverse crop rotations and their synergistic effects with reduced tillage are well documented, but few studies have quantified the impact of these management practices on yields and their stability when soil moisture is limiting or in excess. Using yield and weather data obtained from a 31-year long term rotation and tillage trial in Ontario, we tested whether crop rotation diversity is associated with greater yield stability when abnormal weather conditions occur. We used parametric and non-parametric approaches to quantify the impact of rotation diversity (monocrop, 2-crops, 3-crops without or with one or two legume cover crops) and tillage (conventional or reduced tillage) on yield probabilities and the benefits of crop diversity under different soil moisture and temperature scenarios. Although the magnitude of rotation benefits varied with crops, weather patterns and tillage, yield stability significantly increased when corn and soybean were integrated into more diverse rotations. Introducing small grains into short corn-soybean rotation was enough to provide substantial benefits on long-term soybean yields and their stability, while the effects on corn were mostly associated with the temporal niche provided by small grains for underseeded red clover or alfalfa. Crop diversification strategies increased the probability of harnessing favorable growing conditions while decreasing the risk of crop failure. In hot and dry years, diversification of corn-soybean rotations and reduced tillage increased yield by 7% and 22% for corn and soybean, respectively. Given the additional advantages associated with cropping system diversification, such a strategy provides a more comprehensive approach to lowering yield variability and improving the resilience of cropping systems to multiple environmental stresses. This could help to sustain future yield levels in challenging production environments. PMID:25658914

  4. Parametric estimate of the relative photon yields from the glasma and the quark-gluon plasma in heavy-ion collisions

    DOE PAGES

    Berges, Jürgen; Reygers, Klaus; Tanji, Naoto; ...

    2017-05-09

    Recent classical-statistical numerical simulations have established the “bottom-up” thermalization scenario of Baier et al. [Phys. Lett. B 502, 51 (2001)] as the correct weak coupling effective theory for thermalization in ultrarelativistic heavy-ion collisions. In this paper, we perform a parametric study of photon production in the various stages of this bottom-up framework to ascertain the contribution of the off-equilibrium “glasma” relative to that of a thermalized quark-gluon plasma. Taking into account the constraints imposed by the measured charged hadron multiplicities at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), we find that glasma contributions are important, especially for large values of the saturation scale at both energies. These nonequilibrium effects should therefore be taken into account in studies where weak coupling methods are employed to compute photon yields.

  5. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.

  6. Model Adaptation in Parametric Space for POD-Galerkin Models

    NASA Astrophysics Data System (ADS)

    Gao, Haotian; Wei, Mingjun

    2017-11-01

    The development of low-order POD-Galerkin models is largely motivated by the expectation that a model developed with one set of parameter values can predict the dynamic behavior of the same system under different parametric values; in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g. a steady state, the mean of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For the simple case of flow past a fixed cylinder, an orthogonal mean mode from a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For the more complicated case of flow past an oscillating cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries; more information (e.g. more POD modes) is then required to predict the flow at different oscillation frequencies. Supported by ARL.

  7. Modelling and multi-parametric control for delivery of anaesthetic agents.

    PubMed

    Dua, Pinky; Dua, Vivek; Pistikopoulos, Efstratios N

    2010-06-01

    This article presents model predictive controllers (MPCs) and multi-parametric model-based controllers for delivery of anaesthetic agents. The MPC can take into account constraints on drug delivery rates and on the state of the patient, but requires solving an optimization problem at regular time intervals. The multi-parametric controller has all the advantages of the MPC and does not require repetitive solution of the optimization problem for its implementation. This is achieved by obtaining the optimal drug delivery rates as a set of explicit functions of the state of the patient. The derivation of the controllers relies on detailed models of the system. A compartmental model for the delivery of three drugs for anaesthesia is developed. The key feature of this model is that mean arterial pressure, cardiac output, and unconsciousness of the patient can be simultaneously regulated. This is achieved by using three drugs: dopamine (DP), sodium nitroprusside (SNP), and isoflurane. A number of dynamic simulation experiments are carried out for the validation of the model. The model is then used for the design of model predictive and multi-parametric controllers, and the performance of the controllers is analyzed.
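    The abstract's key point — that the multi-parametric controller replaces repeated online optimization with an explicit lookup — can be sketched as follows. The offline solution of a multi-parametric MPC is a piecewise-affine law u = K·x + k over regions of the state space; the regions and gains below are hypothetical, for a scalar state, and are not the paper's three-drug anaesthesia model.

```python
# Explicit (multi-parametric) MPC replaces the online optimization with
# a lookup: the problem is pre-solved offline into affine laws
# u = K*x + k, each valid on a region of the state/parameter space.
# Scalar-state sketch with hypothetical regions and gains, not the
# paper's anaesthesia model.

REGIONS = [
    # (lo, hi, K, k): drug rate u = K*x + k is optimal for lo <= x < hi
    (float("-inf"), 0.0, -2.0, 1.0),
    (0.0, 2.0, -0.5, 1.0),
    (2.0, float("inf"), 0.0, 0.0),  # input constraint active: u = 0
]

def mp_mpc_control(x):
    """Optimal input for state x via region lookup (no online solve)."""
    for lo, hi, K, k in REGIONS:
        if lo <= x < hi:
            return K * x + k
    raise ValueError("state outside the characterized region")
```

    Online implementation thus reduces to locating the region containing the measured state, which is why the repetitive optimization of ordinary MPC disappears.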

  8. Non-parametric transient classification using adaptive wavelets

    NASA Astrophysics Data System (ADS)

    Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.

    2015-11-01

    Classifying transients based on multiband light curves is a challenging but crucial problem in the era of Gaia and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification infeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement, while a major advantage of the BAGIDIS wavelets is that they are translation invariant; hence BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric, so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier on the Supernova Photometric Classification Challenge, correctly classifying supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our `model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.

  9. Long-range parametric amplification of THz wave with absorption loss exceeding parametric gain.

    PubMed

    Wang, Tsong-Dong; Huang, Yen-Chieh; Chuang, Ming-Yun; Lin, Yen-Hou; Lee, Ching-Han; Lin, Yen-Yin; Lin, Fan-Yi; Kitaeva, Galiya Kh

    2013-01-28

    Optical parametric mixing is a popular scheme to generate an idler wave at THz frequencies, although the THz wave is often strongly absorbed in the nonlinear optical material. It is widely suggested that the useful material length for co-directional parametric mixing with strong THz-wave absorption is comparable to the THz-wave absorption length in the material. Here we show that, even in the limit of the absorption loss exceeding the parametric gain, the THz idler wave can grow monotonically through optical parametric amplification over a much longer distance in a nonlinear optical material, until pump depletion. The coherent production of the non-absorbing signal wave can assist the growth of the highly absorbing idler wave. We also show that, for the case of an equal input pump and signal in difference-frequency generation, the quick saturation of the THz idler wave predicted by a much simplified yet popular plane-wave model fails when fast diffraction of the THz wave from the co-propagating optical mixing waves is considered.
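    The long-range growth claim can be illustrated with the standard undepleted-pump coupled-mode equations (a sketch; the gain g and loss α below are arbitrary, not values from the paper). Even when the idler loss rate α/2 exceeds the gain g, the system matrix [[0, g], [g, -α/2]] has one positive eigenvalue, -α/4 + sqrt(g² + α²/16), so the idler eventually grows monotonically:

```python
# Undepleted-pump coupled-mode sketch of parametric amplification with
# a lossy idler: das/dz = g*ai, dai/dz = g*as - (alpha/2)*ai.
# g and alpha are illustrative, not the paper's values. The system
# matrix [[0, g], [g, -alpha/2]] has eigenvalue
# -alpha/4 + sqrt(g**2 + alpha**2/16) > 0, so the idler keeps growing
# at long range even though its loss alpha/2 exceeds the gain g.

def propagate(g, alpha, z_max, dz=1e-3, a_s=1.0, a_i=0.0):
    """Euler-integrate signal/idler amplitudes over a distance z_max."""
    z = 0.0
    while z < z_max:
        a_s, a_i = (a_s + dz * g * a_i,
                    a_i + dz * (g * a_s - 0.5 * alpha * a_i))
        z += dz
    return a_s, a_i

# Loss twice the gain (alpha/2 = 2, g = 1): the idler still grows.
_, idler_5 = propagate(g=1.0, alpha=4.0, z_max=5.0)
_, idler_10 = propagate(g=1.0, alpha=4.0, z_max=10.0)
```

    The non-absorbing signal feeds the idler faster than the loss drains it, which is the coherent-assistance mechanism the abstract describes; pump depletion (not modeled here) eventually ends the growth.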

  10. Grating lobe elimination in steerable parametric loudspeaker.

    PubMed

    Shi, Chuang; Gan, Woon-Seng

    2011-02-01

    In the past two decades, the majority of research on the parametric loudspeaker has concentrated on the nonlinear modeling of acoustic propagation and on pre-processing techniques to reduce nonlinear distortion in sound reproduction. There are, however, very few studies on directivity control of the parametric loudspeaker. In this paper, we propose an equivalent circular Gaussian source array that approximates the directivity characteristics of the linear ultrasonic transducer array. By using this approximation, the directivity of the sound beam from the parametric loudspeaker can be predicted by the product directivity principle. New theoretical results, verified through measurements, are presented to show the effectiveness of the delay-and-sum beamsteering structure for the parametric loudspeaker. Unlike the conventional loudspeaker array, where the spacing between array elements must be less than half the wavelength to avoid spatial aliasing, the parametric loudspeaker can take advantage of grating lobe elimination to extend the spacing of the ultrasonic transducer array to more than 1.5 wavelengths in a typical application.
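    A rough numerical sketch of the product directivity principle and grating-lobe elimination (element count, frequencies, and spacing below are hypothetical, not the measured array): at 1.5-wavelength spacing each primary frequency has grating lobes, but at different angles, so their product — which approximates the difference-frequency beam — suppresses them.

```python
import math

# Product directivity sketch for a parametric loudspeaker array.
# All numbers (8 elements, 40/44 kHz primaries, 1.5-wavelength spacing)
# are hypothetical. Each primary alone has a grating lobe at this
# spacing, but at a different angle, so the product approximating the
# difference-frequency beam suppresses it.

C = 343.0             # speed of sound in air, m/s
F1, F2 = 40e3, 44e3   # hypothetical primary ultrasound frequencies
N = 8                 # number of array elements
D = 1.5 * C / F1      # spacing: 1.5 wavelengths at F1

def array_factor(theta, n, d, wavelength):
    """Normalized linear-array factor |sin(n*psi/2) / (n*sin(psi/2))|."""
    psi = 2.0 * math.pi * d * math.sin(theta) / wavelength
    if abs(math.sin(psi / 2.0)) < 1e-12:
        return 1.0  # main lobe or grating lobe
    return abs(math.sin(n * psi / 2.0) / (n * math.sin(psi / 2.0)))

def product_directivity(theta):
    """Difference-frequency beam approximated by the product principle."""
    return (array_factor(theta, N, D, C / F1) *
            array_factor(theta, N, D, C / F2))

theta_g = math.asin((C / F1) / D)  # grating-lobe angle of the F1 array
```

    At theta_g the F1 array alone is at a full grating lobe, yet the product directivity stays well below the main-lobe level — the elimination effect the abstract exploits.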

  11. Technical Topic 3.2.2.d Bayesian and Non-Parametric Statistics: Integration of Neural Networks with Bayesian Networks for Data Fusion and Predictive Modeling

    DTIC Science & Technology

    2016-05-31

    Final Report: Technical Topic 3.2.2.d Bayesian and Non-Parametric Statistics: Integration of Neural Networks with Bayesian Networks for Data Fusion and Predictive Modeling (reporting period 15-Apr-2014 to 14-Jan-2015; approved for public release, distribution unlimited). The work included explosives such as TATP, HMTD, RDX, ammonium nitrate, potassium perchlorate, potassium nitrate, sugar, and TNT.

  12. Auroral photometry from the Atmosphere Explorer satellite

    NASA Technical Reports Server (NTRS)

    Rees, M. H.; Abreu, V. J.

    1984-01-01

    Attention is given to the ability of remote sensing from space to yield quantitative auroral and ionospheric parameters, in view of the auroral measurements made during two passes of the Atmosphere Explorer C satellite over the Poker Flat Optical Observatory and the Chatanika Radar Facility. The emission rate of the N2(+) 4278 A band computed from intensity measurements of energetic auroral electrons tracked the same spectral feature measured remotely from the satellite over two decades of intensity, providing a stringent test for the measurement of atmospheric scattering effects. It also verifies the absolute intensity with respect to ground-based photometric measurements. In situ satellite measurements of ion densities and ground-based radar measurements of electron density profiles provide a consistent picture of the ionospheric response to auroral input, while also predicting the observed optical emission rate.

  13. A simulation of air pollution model parameter estimation using data from a ground-based LIDAR remote sensor

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.; Suttles, J. T.

    1977-01-01

    One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield the parameter estimates that best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content of the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
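    The estimation loop described above can be sketched in a few lines: simulate relative LIDAR concentration measurements across a Gaussian plume, then recover the dispersion parameter by least squares. The crosswind profile and the "true" sigma are illustrative, and a simple grid search stands in for the paper's numerical estimation procedure.

```python
import math

# Sketch of dispersion-parameter estimation from simulated LIDAR data:
# generate a Gaussian-plume crosswind profile, then recover the
# dispersion parameter sigma_y by least-squares search. The profile
# and the "true" sigma_y = 25 m are illustrative values only.

def plume(y, sigma_y):
    """Relative crosswind concentration profile of a Gaussian plume."""
    return math.exp(-y * y / (2.0 * sigma_y ** 2))

def estimate_sigma(ys, measurements, candidates):
    """Candidate sigma_y with the smallest sum of squared residuals."""
    def sse(s):
        return sum((plume(y, s) - m) ** 2
                   for y, m in zip(ys, measurements))
    return min(candidates, key=sse)

ys = [10.0 * k for k in range(-10, 11)]   # scan positions, m
data = [plume(y, 25.0) for y in ys]       # simulated measurements
best = estimate_sigma(ys, data, [5.0 + k for k in range(50)])
```

    With noise-free data the search recovers the generating value exactly; adding measurement noise to `data` is the natural next step for studying the accuracy-versus-strategy trade-offs the abstract mentions.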

  14. Large-scale subject-specific cerebral arterial tree modeling using automated parametric mesh generation for blood flow simulation.

    PubMed

    Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A

    2017-12-01

    In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of the cerebral arterial tree. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features, and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using the receiver operating characteristic curve. Geometric accuracy evaluation showed agreement between the constructed mesh and the raw MRA data sets, with an area under the curve value of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of pial vessels. This study is a first step towards fast, large-scale, subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Surface spontaneous parametric down-conversion.

    PubMed

    Perina, Jan; Luks, Antonín; Haderka, Ondrej; Scalora, Michael

    2009-08-07

    Surface spontaneous parametric down-conversion is predicted as a consequence of continuity requirements for electric- and magnetic-field amplitudes at a discontinuity of the χ(2) nonlinearity. A generalization of the usual two-photon spectral amplitude is suggested to describe this effect. Examples of nonlinear layered structures and periodically poled nonlinear crystals show that surface contributions to spontaneous down-conversion can be important.

  16. Use of a machine learning framework to predict substance use disorder treatment success

    PubMed Central

    Acion, Laura; Kelmansky, Diana; van der Laan, Mark; Sahker, Ethan; Jones, DeShauna; Arndt, Stephan

    2017-01-01

    There are several methods for building prediction models. The wealth of currently available modeling techniques usually forces the researcher to judge, a priori, what will likely be the best method. Super learning (SL) is a methodology that facilitates this decision by combining all identified prediction algorithms pertinent for a particular prediction problem. SL generates a final model that is at least as good as any of the other models considered for predicting the outcome. The overarching aim of this work is to introduce SL to analysts and practitioners. This work compares the performance of logistic regression, penalized regression, random forests, deep learning neural networks, and SL to predict successful substance use disorder (SUD) treatment. A nationwide database including 99,013 SUD treatment patients was used. All algorithms were evaluated using the area under the receiver operating characteristic curve (AUC) in a test sample that was not included in the training sample used to fit the prediction models. AUC for the models ranged between 0.793 and 0.820. SL was superior to all but one of the algorithms compared. An explanation of SL steps is provided. SL is the first step in targeted learning, an analytic framework that yields doubly robust effect estimation and inference with fewer assumptions than the usual parametric methods. Different aspects of SL depending on the context, its function within the targeted learning framework, and the benefits of this methodology in the addiction field are discussed. PMID:28394905

  17. Use of a machine learning framework to predict substance use disorder treatment success.

    PubMed

    Acion, Laura; Kelmansky, Diana; van der Laan, Mark; Sahker, Ethan; Jones, DeShauna; Arndt, Stephan

    2017-01-01

    There are several methods for building prediction models. The wealth of currently available modeling techniques usually forces the researcher to judge, a priori, what will likely be the best method. Super learning (SL) is a methodology that facilitates this decision by combining all identified prediction algorithms pertinent for a particular prediction problem. SL generates a final model that is at least as good as any of the other models considered for predicting the outcome. The overarching aim of this work is to introduce SL to analysts and practitioners. This work compares the performance of logistic regression, penalized regression, random forests, deep learning neural networks, and SL to predict successful substance use disorder (SUD) treatment. A nationwide database including 99,013 SUD treatment patients was used. All algorithms were evaluated using the area under the receiver operating characteristic curve (AUC) in a test sample that was not included in the training sample used to fit the prediction models. AUC for the models ranged between 0.793 and 0.820. SL was superior to all but one of the algorithms compared. An explanation of SL steps is provided. SL is the first step in targeted learning, an analytic framework that yields doubly robust effect estimation and inference with fewer assumptions than the usual parametric methods. Different aspects of SL depending on the context, its function within the targeted learning framework, and the benefits of this methodology in the addiction field are discussed.
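    The core super-learning step — choosing combination weights that minimize held-out loss, so the ensemble does no worse than any single candidate — can be sketched with two toy base learners (data and learners invented; a real super learner stacks many algorithms over V-fold cross-validation):

```python
# Super-learning sketch: pick the convex combination of base learners
# that minimizes held-out squared error. Toy data and two invented base
# learners; a real super learner stacks many algorithms over V-fold CV.

def cv_weight(p1, p2, y, grid=101):
    """Convex weight w for w*p1 + (1-w)*p2 minimizing squared error."""
    def loss(w):
        return sum((w * a + (1.0 - w) * b - t) ** 2
                   for a, b, t in zip(p1, p2, y))
    return min((k / (grid - 1) for k in range(grid)), key=loss)

y  = [1.0, 2.0, 3.0, 4.0]   # held-out outcomes
p1 = [0.8, 1.9, 3.2, 4.1]   # learner 1: small individual errors
p2 = [2.5, 2.5, 2.5, 2.5]   # learner 2: predicts the overall mean

w = cv_weight(p1, p2, y)
ensemble_loss = sum((w * a + (1.0 - w) * b - t) ** 2
                    for a, b, t in zip(p1, p2, y))
```

    Even in this tiny example the weighted ensemble beats the better of the two base learners on held-out loss, which is the "at least as good as any candidate" guarantee the abstract cites.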

  18. Predicted effect of dynamic load on pitting fatigue life for low-contact-ratio spur gears

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.

    1986-01-01

    How dynamic load affects the surface pitting fatigue life of external spur gears was predicted by using the NASA computer program TELSGE. Parametric studies were performed over a range of various gear parameters modeling low-contact-ratio involute spur gears. In general, gear life predictions based on dynamic loads differed significantly from those based on static loads, with the predictions being strongly influenced by the maximum dynamic load during contact. Gear mesh operating speed strongly affected predicted dynamic load and life. Meshes operating at a resonant speed or one-half the resonant speed had significantly shorter lives. Dynamic life factors for gear surface pitting fatigue were developed on the basis of the parametric studies. In general, meshes with higher contact ratios had higher dynamic life factors than meshes with lower contact ratios. A design chart was developed for hand calculations of dynamic life factors.

  19. Parametric analysis of ATM solar array.

    NASA Technical Reports Server (NTRS)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.

  20. PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems

    NASA Astrophysics Data System (ADS)

    Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai

    2017-09-01

    In response to the identification problem concerning multi-degree of freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the non-linear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. By using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent over-fitting with poor generalization performance caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.

  1. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    PubMed

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
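    The indirect route described above ends in a pixelwise Patlak fit: regressing C(t)/Cp(t) on (∫₀ᵗ Cp dτ)/Cp(t) gives the influx rate Ki as the slope and the distribution volume V as the intercept. A minimal sketch with a synthetic, noise-free time-activity curve (all tracer values hypothetical):

```python
import math

# Pixelwise Patlak analysis (the "indirect" step): fit
# Ct(t)/Cp(t) = Ki * (int_0^t Cp)/Cp(t) + V by least squares.
# The synthetic, noise-free TAC below is built to satisfy the Patlak
# relation exactly, so the fit recovers the hypothetical Ki and V.

def patlak(times, cp, ct):
    """Return (Ki, V) from a Patlak plot via a least-squares line fit."""
    icp, xs, ys = 0.0, [], []
    for k in range(1, len(times)):
        # cumulative trapezoidal integral of the input function Cp
        icp += 0.5 * (cp[k] + cp[k - 1]) * (times[k] - times[k - 1])
        xs.append(icp / cp[k])
        ys.append(ct[k] / cp[k])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum(x * x for x in xs) - n * mx * mx
    sxy = sum(x * y for x, y in zip(xs, ys)) - n * mx * my
    slope = sxy / sxx
    return slope, my - slope * mx

# Tissue curve obeying Ct = Ki*int(Cp) + V*Cp with Ki = 0.03, V = 0.4
times = [float(k) for k in range(21)]
cp = [math.exp(-0.1 * t) + 0.05 for t in times]
KI_TRUE, V_TRUE = 0.03, 0.4
ct, icp = [V_TRUE * cp[0]], 0.0
for k in range(1, len(times)):
    icp += 0.5 * (cp[k] + cp[k - 1]) * (times[k] - times[k - 1])
    ct.append(KI_TRUE * icp + V_TRUE * cp[k])
Ki, V = patlak(times, cp, ct)
```

    The direct method discussed in the abstract folds this same linear model into the sinogram-domain reconstruction instead of fitting reconstructed TACs.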

  2. Analyses of ACPL thermal/fluid conditioning system

    NASA Technical Reports Server (NTRS)

    Stephen, L. A.; Usher, L. H.

    1976-01-01

    Results of engineering analyses are reported. Initial computations were made using a modified control transfer function where the systems performance was characterized parametrically using an analytical model. The analytical model was revised to represent the latest expansion chamber fluid manifold design, and systems performance predictions were made. Parameters which were independently varied in these computations are listed. Systems predictions which were used to characterize performance are primarily transient computer plots comparing the deviation between average chamber temperature and the chamber temperature requirement. Additional computer plots were prepared. Results of parametric computations with the latest fluid manifold design are included.

  3. Potentials of satellite derived SIF products to constrain GPP simulated by the new ORCHIDEE-FluOR terrestrial model at the global scale

    NASA Astrophysics Data System (ADS)

    Bacour, C.; Maignan, F.; Porcar-Castell, A.; MacBean, N.; Goulas, Y.; Flexas, J.; Guanter, L.; Joiner, J.; Peylin, P.

    2016-12-01

    A new era for improving our knowledge of the terrestrial carbon cycle at the global scale has begun with recent studies on the relationships between remotely sensed Sun-Induced Fluorescence (SIF) and plant photosynthetic activity (GPP), and with the availability of such satellite-derived products, now "routinely" produced from GOSAT, GOME-2, or OCO-2 observations. Assimilating SIF data into terrestrial ecosystem models (TEMs) represents a novel opportunity to reduce the uncertainty of their predictions with respect to carbon-climate feedbacks, in particular the uncertainties resulting from inaccurate parameter values. A prerequisite is a correct representation in TEMs of the several drivers of plant fluorescence from the leaf to the canopy scale, and in particular of the competing processes of photochemistry and non-photochemical quenching (NPQ). In this study, we present the first results of a global-scale assimilation of GOME-2 SIF products within a new version of the ORCHIDEE land surface model including a physical module of plant fluorescence. At the leaf level, the regulation of fluorescence yield is simulated both by the photosynthesis module of ORCHIDEE, to calculate the photochemical yield, and by a parametric model to estimate NPQ. The latter has been calibrated on leaf fluorescence measurements performed for boreal coniferous and Mediterranean vegetation species. A parametric representation of the SCOPE radiative transfer model is used to model the plant fluorescence fluxes for PSI and PSII and the scaling up to the canopy level. The ORCHIDEE-FluOR model is first evaluated with respect to in situ measurements of plant fluorescence flux and photochemical yield for Scots pine and wheat. The potential of SIF data to constrain the modelled GPP is evaluated by assimilating one year of GOME-2 SIF products within ORCHIDEE-FluOR. 
We investigate in particular the changes in the spatial patterns of GPP following the optimization of the photosynthesis and phenology parameters. We analyze the differences obtained using a simpler fluorescence model in ORCHIDEE hypothesizing a linear relationship between SIF and GPP, and an independent simultaneous assimilation of three data-streams (in situ flux measurements, satellite derived NDVI and atmospheric CO2 concentrations).

  4. A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series

    NASA Astrophysics Data System (ADS)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2016-02-01

    Singular spectral analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters, which are difficult to determine and to whose values the results are very sensitive. Moreover, since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or with correlated noise. We therefore introduce a novel method to handle these problems. It is based on the prediction of non-decimated wavelet (NDW) signals by SSA, followed by prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of the parameters and the fact that the stochastic structure of the time series is taken into account. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.

  5. PARAMETRIC AND NON PARAMETRIC (MARS: MULTIVARIATE ADDITIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    EPA Science Inventory

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  6. Cluster-level statistical inference in fMRI datasets: The unexpected behavior of random fields in high dimensions.

    PubMed

    Bansal, Ravi; Peterson, Bradley S

    2018-06-01

    Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false positive findings in this multiple hypothesis testing. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has thus led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. 
Nonparametric methods, in contrast, estimated distributions from those large clusters, and therefore, by construction, rejected the large clusters as false positives at the nominal FWERs. Those rejected clusters were outlying values in the distribution of cluster size but cannot be distinguished from true positive findings without further analyses, including assessing whether fMRI signal in those regions correlates with other clinical, behavioral, or cognitive measures. Rejecting the large clusters, however, significantly reduced the statistical power of nonparametric methods in detecting true findings compared with parametric methods, which would have detected most true findings that are essential for making valid biological inferences in MRI data. Parametric analyses, in contrast, detected most true findings while generating relatively few false positives: on average, less than one of those very large clusters would be deemed a true finding in each brain-wide analysis. We therefore recommend the continued use of parametric methods that model nonstationary smoothness for cluster-level, familywise control of false positives, particularly when using a Cluster Defining Threshold of 2.5 or higher, and subsequently assessing rigorously the biological plausibility of the findings, even for large clusters. Finally, because nonparametric methods yielded a large reduction in statistical power to detect true positive findings, we conclude that the modest reduction in false positive findings that nonparametric analyses afford does not warrant a re-analysis of previously published fMRI studies using nonparametric techniques. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. An application of quantile random forests for predictive mapping of forest attributes

    Treesearch

    E.A. Freeman; G.G. Moisen

    2015-01-01

    Increasingly, random forest models are used in predictive mapping of forest attributes. Traditional random forests output the mean prediction from the random trees. Quantile regression forests (QRF) is an extension of random forests developed by Nicolai Meinshausen that provides non-parametric estimates of the median predicted value as well as prediction quantiles. It...

  8. Conserving the Birds of Uganda’s Banana-Coffee Arc: Land Sparing and Land Sharing Compared

    PubMed Central

    Hulme, Mark F.; Vickery, Juliet A.; Green, Rhys E.; Phalan, Ben; Chamberlain, Dan E.; Pomeroy, Derek E.; Nalwanga, Dianah; Mushabe, David; Katebaka, Raymond; Bolwig, Simon; Atkinson, Philip W.

    2013-01-01

    Reconciling the aims of feeding an ever more demanding human population and conserving biodiversity is a difficult challenge. Here, we explore potential solutions by assessing whether land sparing (farming for high yield, potentially enabling the protection of non-farmland habitat), land sharing (lower yielding farming with more biodiversity within farmland) or a mixed strategy would result in better bird conservation outcomes for a specified level of agricultural production. We surveyed forest and farmland study areas in southern Uganda, measuring the population density of 256 bird species and agricultural yield: food energy and gross income. Parametric non-linear functions relating density to yield were fitted. Species were identified as “winners” (total population size always at least as great with agriculture present as without it) or “losers” (total population sometimes or always reduced with agriculture present) for a range of targets for total agricultural production. For each target we determined whether each species would be predicted to have a higher total population with land sparing, land sharing or with any intermediate level of sparing at an intermediate yield. We found that most species were expected to have their highest total populations with land sparing, particularly loser species and species with small global range sizes. Hence, more species would benefit from high-yield farming if used as part of a strategy to reduce forest loss than from low-yield farming and land sharing, as has been found in Ghana and India in a previous study. We caution against advocacy for high-yield farming alone as a means to deliver land sparing if it is done without strong protection for natural habitats, other ecosystem services and social welfare. 
Instead, we suggest that conservationists explore how conservation and agricultural policies can be better integrated to deliver land sparing by, for example, combining land-use planning and agronomic support for small farmers. PMID:23390501

  9. Can stress biomarkers predict preterm birth in women with threatened preterm labor?

    PubMed

    García-Blanco, Ana; Diago, Vicente; Serrano De La Cruz, Verónica; Hervás, David; Cháfer-Pericás, Consuelo; Vento, Máximo

    2017-09-01

    Preterm birth is a major paediatric challenge, difficult to prevent and associated with major adverse outcomes. Prenatal stress plays an important role in preterm birth; however, there are few stress-related models to predict preterm birth in women with Threatened Preterm Labor (TPL). The aim of this work is to study the influence of stress biomarkers on the time until birth in TPL women. Eligible participants were pregnant women between 24 and 31 gestational weeks admitted to the hospital with a TPL diagnosis (n=166). Stress-related biomarkers (α-amylase and cortisol) were determined in saliva samples after TPL diagnosis. Participants were followed up until labor. A parametric survival model was constructed based on α-amylase, cortisol, TPL gestational week, age, parity, and multiple pregnancy. The model was adjusted using a logistic distribution and implemented as a nomogram to predict the probability of labor at 7 and 14 days. The time until labor was associated with cortisol (p=0.001), gestational week at TPL diagnosis (p=0.004), and age (p=0.02). Importantly, high cortisol levels at TPL diagnosis were predictive of latency to labor. Validation of the model yielded an optimism-corrected AUC value of 0.63. High cortisol levels at TPL diagnosis may have an important role in preterm birth prediction. Our statistical model implemented as a nomogram provided accurate predictions of the individual prognosis of pregnant women. Copyright © 2017 Elsevier Ltd. All rights reserved.
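    The nomogram idea can be sketched as a parametric survival model with a logistic distribution on time-to-labor. All coefficients below are invented placeholders for illustration, not the fitted values from the study:

```python
import math

# Hypothetical coefficients -- NOT the values estimated in the study.
# mu is the location (in days) of a logistic survival distribution;
# higher cortisol shifts expected labor earlier.
BETA = {"intercept": 30.0, "cortisol": -25.0, "gest_week": 0.8, "age": 0.2}
SCALE = 6.0  # logistic scale parameter (days)

def survival(t_days, cortisol_ug_dl, gest_week, age):
    """S(t): probability that labor has NOT occurred by t_days."""
    mu = (BETA["intercept"] + BETA["cortisol"] * cortisol_ug_dl
          + BETA["gest_week"] * gest_week + BETA["age"] * age)
    return 1.0 / (1.0 + math.exp((t_days - mu) / SCALE))

def labor_probability(t_days, **covariates):
    """Nomogram-style output: P(labor by t_days)."""
    return 1.0 - survival(t_days, **covariates)

p7 = labor_probability(7, cortisol_ug_dl=0.6, gest_week=28, age=30)
p14 = labor_probability(14, cortisol_ug_dl=0.6, gest_week=28, age=30)
```

    With any coefficients of this sign pattern, the 14-day probability exceeds the 7-day probability and higher cortisol raises both, mirroring the direction of the reported association.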

  10. Effect of quantum nuclear motion on hydrogen bonding

    NASA Astrophysics Data System (ADS)

    McKenzie, Ross H.; Bekker, Christiaan; Athokpam, Bijyalaxmi; Ramesh, Sai G.

    2014-05-01

    This work considers how the properties of hydrogen bonded complexes, X-H⋯Y, are modified by the quantum motion of the shared proton. Using a simple two-diabatic state model Hamiltonian, the analysis of the symmetric case, where the donor (X) and acceptor (Y) have the same proton affinity, is carried out. For quantitative comparisons, a parametrization specific to the O-H⋯O complexes is used. The vibrational energy levels of the one-dimensional ground state adiabatic potential of the model are used to make quantitative comparisons with a vast body of condensed phase data, spanning a donor-acceptor separation (R) range of about 2.4 - 3.0 Å, i.e., from strong to weak hydrogen bonds. The position of the proton (which determines the X-H bond length) and its longitudinal vibrational frequency, along with the isotope effects in both are described quantitatively. An analysis of the secondary geometric isotope effect, using a simple extension of the two-state model, yields an improved agreement of the predicted variation with R of frequency isotope effects. The role of bending modes is also considered: their quantum effects compete with those of the stretching mode for weak to moderate H-bond strengths. In spite of the economy in the parametrization of the model used, it offers key insights into the defining features of H-bonds, and semi-quantitatively captures several trends.

  11. Parametrically excited helicopter ground resonance dynamics with high blade asymmetries

    NASA Astrophysics Data System (ADS)

    Sanches, L.; Michon, G.; Berlioz, A.; Alazard, D.

    2012-07-01

    The present work is aimed at verifying the influence of high asymmetries in the variation of the in-plane lead-lag stiffness of one blade on the ground resonance phenomenon in helicopters. The periodic equations of motion are analyzed using Floquet's Theory (FM) and the boundaries of instability are predicted. The stability chart obtained as a function of the asymmetry parameters and rotor speed reveals a complex evolution of critical zones and the existence of bifurcation points at low rotor speed values. Additionally, it is known that, when treated as parametric excitations, periodic terms may cause parametric resonances in dynamic systems, some of which can become unstable. Therefore, the helicopter is later considered as a parametrically excited system and the equations are treated analytically by applying the Method of Multiple Scales (MMS). A stability analysis is used to verify the existence of unstable parametric resonances with first- and second-order sets of equations. The results are compared and validated with those obtained by Floquet's Theory. Moreover, an explanation is given for the presence of unstable motion at low rotor speeds due to parametric instabilities of the second order.

  12. Effective field theories for muonic hydrogen

    NASA Astrophysics Data System (ADS)

    Peset, Clara

    2017-03-01

    Experimental measurements of muonic hydrogen bound states have recently started to take place and provide a powerful setting in which to study the properties of QCD. We profit from the power of effective field theories (EFTs) to provide a theoretical framework in which to study muonic hydrogen in a model-independent fashion. In particular, we compute expressions for the Lamb shift and the hyperfine splitting. These expressions include the leading logarithmic O(m_μ α^6) terms, as well as the leading O(m_μ α^5 m_μ^2/Λ_QCD^2) hadronic effects. Most remarkably, our analyses include the determination of the spin-dependent and spin-independent structure functions of the forward virtual-photon Compton tensor of the proton to O(p^3) in HBET, including the Delta particle. Using these results we obtain the leading hadronic contributions to the Wilson coefficients of the lepton-proton four-fermion operators in NRQED. The spin-independent coefficient yields a pure prediction for the two-photon exchange contribution to the muonic hydrogen Lamb shift, which is the main source of uncertainty in our computation. The spin-dependent coefficient yields the prediction of the hyperfine splitting. The use of EFTs crucially helps us organize the computation in such a way that we can clearly address the parametric accuracy of our result. Furthermore, we review in the context of NRQED all the contributions to the energy shift of O(m_μ α^5), as well as those that scale like m_r α^6 × logarithms.

  13. Dynamic stability of spinning pretwisted beams subjected to axial random forces

    NASA Astrophysics Data System (ADS)

    Young, T. H.; Gau, C. Y.

    2003-11-01

    This paper studies the dynamic stability of a pretwisted cantilever beam spinning along its longitudinal axis and subjected to an axial random force at the free end. The axial force is assumed as the sum of a constant force and a random process with a zero mean. Due to this axial force, the beam may experience parametric random instability. In this work, the finite element method is first applied to yield discretized system equations. The stochastic averaging method is then adopted to obtain Ito's equations for the response amplitudes of the system. Finally the mean-square stability criterion is utilized to determine the stability condition of the system. Numerical results show that the stability boundary of the system converges as the first three modes are taken into calculation. Before the convergence is reached, the stability condition predicted is not conservative enough.

  14. Antibunching and unconventional photon blockade with Gaussian squeezed states

    NASA Astrophysics Data System (ADS)

    Lemonde, Marc-Antoine; Didier, Nicolas; Clerk, Aashish A.

    2014-12-01

    Photon antibunching is a quantum phenomenon typically observed in strongly nonlinear systems where photon blockade suppresses the probability of detecting two photons at the same time. Antibunching has also been reported with Gaussian states, where optimized amplitude squeezing yields classically forbidden values of the intensity correlation, g^(2)(0) < 1. As a consequence, observation of antibunching is not necessarily a signature of photon-photon interactions. To clarify the significance of the intensity correlations, we derive a sufficient condition for deducing whether a field is non-Gaussian based on a g^(2)(0) measurement. We then show that the Gaussian antibunching obtained with a degenerate parametric amplifier is close to the ideal case reached using dissipative squeezing protocols. We finally shed light on the so-called unconventional photon blockade effect predicted in a driven two-cavity setup with surprisingly weak Kerr nonlinearities, stressing that it is a particular realization of optimized Gaussian amplitude squeezing.
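    The claim that optimized Gaussian amplitude squeezing gives g^(2)(0) < 1 can be checked numerically in a truncated Fock space. The displacement and squeezing values below are arbitrary illustrative choices, not parameters from the paper:

```python
import numpy as np
from scipy.linalg import expm

dim = 60                                       # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, dim)), 1)     # annihilation operator
ad = a.conj().T

alpha, r = 2.0, 0.4                            # displacement, squeezing (invented)
D = expm(alpha * ad - np.conj(alpha) * a)      # displacement operator
S = expm(0.5 * (r * (a @ a) - r * (ad @ ad)))  # squeeze operator, real r

vac = np.zeros(dim)
vac[0] = 1.0
psi = D @ (S @ vac)                            # displaced squeezed vacuum

n_mean = float(np.real(psi.conj() @ (ad @ a) @ psi))
g2 = float(np.real(psi.conj() @ (ad @ ad @ a @ a) @ psi)) / n_mean ** 2
```

    Displacing along the squeezed quadrature produces an amplitude-squeezed Gaussian state whose photon statistics are sub-Poissonian, so g2 comes out below 1 without any nonlinearity.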

  15. Building and using a statistical 3D motion atlas for analyzing myocardial contraction in MRI

    NASA Astrophysics Data System (ADS)

    Rougon, Nicolas F.; Petitjean, Caroline; Preteux, Francoise J.

    2004-05-01

    We address the issue of modeling and quantifying myocardial contraction from 4D MR sequences, and present an unsupervised approach for building and using a statistical 3D motion atlas for the normal heart. This approach relies on a state-of-the-art variational non-rigid registration (NRR) technique using generalized information measures, which allows for robust intra-subject motion estimation and inter-subject anatomical alignment. The atlas is built from a collection of jointly acquired tagged and cine MR exams in short- and long-axis views. Subject-specific non-parametric motion estimates are first obtained by incremental NRR of tagged images onto the end-diastolic (ED) frame. Individual motion data are then transformed into the coordinate system of a reference subject using subject-to-reference mappings derived by NRR of cine ED images. Finally, principal component analysis of the aligned motion data is performed for each cardiac phase, yielding a mean model and a set of eigenfields encoding kinematic variability. The latter define an organ-dedicated hierarchical motion basis which enables parametric motion measurement from arbitrary tagged MR exams. To this end, the atlas is transformed into subject coordinates by reference-to-subject NRR of ED cine frames. Atlas-based motion estimation is then achieved by parametric NRR of tagged images onto the ED frame, yielding a compact description of myocardial contraction during diastole.

  16. Development of a ReaxFF reactive force field for ammonium nitrate and application to shock compression and thermal decomposition.

    PubMed

    Shan, Tzu-Ray; van Duin, Adri C T; Thompson, Aidan P

    2014-02-27

    We have developed a new ReaxFF reactive force field parametrization for ammonium nitrate. Starting with an existing nitramine/TATB ReaxFF parametrization, we optimized it to reproduce electronic structure calculations for dissociation barriers, heats of formation, and crystal structure properties of ammonium nitrate phases. We have used it to predict the isothermal pressure-volume curve and the unreacted principal Hugoniot states. The predicted isothermal pressure-volume curve for phase IV solid ammonium nitrate agreed with electronic structure calculations and experimental data within 10% error for the considered range of compression. The predicted unreacted principal Hugoniot states were approximately 17% stiffer than experimental measurements. We then simulated thermal decomposition during heating to 2500 K. Thermal decomposition pathways agreed with experimental findings.

  17. Applications of Genomic Selection in Breeding Wheat for Rust Resistance.

    PubMed

    Ornella, Leonardo; González-Camacho, Juan Manuel; Dreisigacker, Susanne; Crossa, Jose

    2017-01-01

    Many methods have been developed to predict untested phenotypes in schemes commonly used in genomic selection (GS) breeding. The use of GS for predicting disease resistance has its own particularities: (a) most populations show additivity in quantitative adult plant resistance (APR); (b) resistance requires effective combinations of major and minor genes; and (c) the phenotype is commonly expressed in ordinal categorical traits, whereas most parametric applications assume that the response variable is continuous and normally distributed. Machine learning methods (MLM) can take advantage of examples (data) that capture characteristics of interest from an unknown underlying probability distribution (i.e., they are data-driven). We introduce some state-of-the-art MLM capable of predicting rust resistance in wheat. We also present two parametric R packages for comparison.

  18. Diagnostic tools for nearest neighbors techniques when used with satellite imagery

    Treesearch

    Ronald E. McRoberts

    2009-01-01

    Nearest neighbors techniques are non-parametric approaches to multivariate prediction that are useful for predicting both continuous and categorical forest attribute variables. Although some assumptions underlying nearest neighbor techniques are common to other prediction techniques such as regression, other assumptions are unique to nearest neighbor techniques....
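    A minimal sketch of the nearest-neighbours idea, with synthetic data standing in for satellite imagery and plot measurements, shows how a single reference set serves both continuous and categorical predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
X_ref = rng.uniform(0.0, 1.0, size=(200, 3))        # e.g. spectral band values
vol_ref = 100.0 * X_ref[:, 0] + 20.0 * X_ref[:, 1]  # continuous attribute (volume)
cls_ref = (X_ref[:, 2] > 0.5).astype(int)           # categorical attribute (cover class)

def knn_predict(x, k=5):
    """Mean of the k nearest references for the continuous variable,
    majority vote among them for the categorical one."""
    dist = np.linalg.norm(X_ref - x, axis=1)        # Euclidean feature distance
    idx = np.argsort(dist)[:k]                      # indices of k nearest references
    return vol_ref[idx].mean(), int(np.bincount(cls_ref[idx]).argmax())

volume, cover_class = knn_predict(np.array([0.9, 0.5, 0.1]))
```

    With k=1 the method simply copies the attributes of the single nearest reference observation, which is the degenerate case often used for map-making.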

  19. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test.

    PubMed

    Kerschbamer, Rudolf

    2015-05-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.

  20. High-power parametric amplification of 11.8-fs laser pulses with carrier-envelope phase control.

    PubMed

    Zinkstok, R Th; Witte, S; Hogervorst, W; Eikema, K S E

    2005-01-01

    Phase-stable parametric chirped-pulse amplification of ultrashort pulses from a carrier-envelope phase-stabilized mode-locked Ti:sapphire oscillator (11.0 fs) to 0.25 mJ/pulse at 1 kHz is demonstrated. Compression with a grating compressor and a LCD shaper yields near-Fourier-limited 11.8-fs pulses with an energy of 0.12 mJ. The amplifier is pumped by 532-nm pulses from a synchronized mode-locked laser, Nd:YAG amplifier system. This approach is shown to be promising for the next generation of ultrafast amplifiers aimed at producing terawatt-level phase-controlled few-cycle laser pulses.

  1. A micromachined device describing over a hundred orders of parametric resonance

    NASA Astrophysics Data System (ADS)

    Jia, Yu; Du, Sijun; Arroyo, Emmanuelle; Seshia, Ashwin A.

    2018-04-01

    Parametric resonance in mechanical oscillators can arise from the periodic modulation of at least one of the system parameters, and the behaviour of the principal (1st-order) parametric resonance has long been well established. However, the theoretically predicted higher orders of parametric resonance, beyond the first few, have mostly been experimentally elusive due to their fast-diminishing instability intervals. A recent paper experimentally reported up to 28 orders in a micromachined membrane oscillator. This paper reports the design and characterisation of a micromachined membrane oscillator with a segmented proof-mass topology, in an attempt to amplify the inherent nonlinearities within the membrane layer. The resultant oscillator device exhibited over a hundred orders of parametric resonance, experimentally validating these ultra-high orders as well as the overlapping instability transitions between them. This research introduces design possibilities for the transducer and dynamics communities, by exploiting the behaviour of these previously elusive higher-order resonant regimes.
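    The principal (1st-order) resonance mentioned above can be reproduced with a textbook Mathieu-type oscillator; this toy model and its parameters are invented and far simpler than a membrane device:

```python
import math

def peak_amplitude(W, w0=1.0, h=0.3, dt=1e-3, T=80.0):
    """Largest |x| reached by x'' + w0^2 * (1 + h*cos(W*t)) * x = 0."""
    x, v, peak = 1e-3, 0.0, 0.0
    for i in range(int(T / dt)):
        accel = -(w0 ** 2) * (1.0 + h * math.cos(W * i * dt)) * x
        v += accel * dt               # semi-implicit (symplectic) Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

resonant = peak_amplitude(W=2.0)      # W = 2*w0: inside the principal tongue
detuned = peak_amplitude(W=3.1)       # far from any resonance tongue
```

    Driving at twice the natural frequency makes the amplitude grow exponentially (at rate roughly h*w0/4 for small h), while a detuned modulation leaves it bounded near the initial value.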

  2. Complete Michel parameter analysis of the inclusive semileptonic b → c transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dassinger, Benjamin; Feger, Robert; Mannel, Thomas

    2009-04-01

    We perform a complete 'Michel parameter' analysis of all possible helicity structures which can appear in the process B → X_c ℓ ν_ℓ. We take into account the full set of operators parametrizing the effective Hamiltonian and include the complete one-loop QCD corrections as well as the nonperturbative contributions. The moments of the leptonic energy as well as the combined moments of the hadronic energy and hadronic invariant mass are calculated including the nonstandard contributions.

  3. Language Learning Strategy Use and Reading Achievement

    ERIC Educational Resources Information Center

    Ghafournia, Narjes

    2014-01-01

    The current study investigated the differences across the varying levels of EFL learners in the frequency and choice of learning strategies. Using a reading test, a questionnaire, and parametric statistical analysis, the findings revealed discrepancies among the participants in the implementation of language-learning strategies concerning their…

  4. Noise-figure limit of fiber-optical parametric amplifiers and wavelength converters: experimental investigation

    NASA Astrophysics Data System (ADS)

    Tang, Renyong; Voss, Paul L.; Lasri, Jacob; Devgan, Preetpaul; Kumar, Prem

    2004-10-01

    Recent theoretical work predicts that the quantum-limited noise figure of a chi(3)-based fiber-optical parametric amplifier operating as a phase-insensitive in-line amplifier or as a wavelength converter exceeds the standard 3-dB limit at high gain. The degradation of the noise figure is caused by the excess noise added by the unavoidable Raman gain and loss occurring at the signal and the converted wavelengths. We present detailed experimental evidence in support of this theory through measurements of the gain and noise-figure spectra for phase-insensitive parametric amplification and wavelength conversion in a continuous-wave amplifier made from 4.4 km of dispersion-shifted fiber. The theory is also extended to include the effect of distributed linear loss on the noise figure of such a long-length parametric amplifier and wavelength converter.

  5. Modeling personnel turnover in the parametric organization

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1991-01-01

    A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job-function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict the staffing profiles required to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
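    A hypothetical sketch of such a linear dynamical system: staff counts per job function evolve under an invented transition matrix plus a hiring input. None of the numbers come from the paper:

```python
import numpy as np

# Invented transition probabilities between three job functions.
P = np.array([
    [0.85, 0.10, 0.05],       # rows: from-function; columns: to-function
    [0.05, 0.85, 0.10],
    [0.00, 0.05, 0.95],
])
hires = np.array([4.0, 1.0, 0.0])     # external hires per period, per function

def project(staff, periods):
    """Linear dynamical system: x[t+1] = x[t] @ P + hires."""
    for _ in range(periods):
        staff = staff @ P + hires
    return staff

staff0 = np.array([10.0, 5.0, 2.0])   # initial staffing profile
staff12 = project(staff0, 12)         # predicted profile after 12 periods
```

    Because each row of P sums to one (no attrition in this toy version), total headcount grows exactly by the hiring rate each period, while the mix across functions drifts toward the function with the highest retention.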

  6. Parametric Amplifier and Oscillator Based on Josephson Junction Circuitry

    NASA Astrophysics Data System (ADS)

    Yamamoto, T.; Koshino, K.; Nakamura, Y.

    While the demand for low-noise amplification is ubiquitous, applications where the quantum-limited noise performance is indispensable are not very common. Microwave parametric amplifiers with near quantum-limited noise performance were first demonstrated more than 20 years ago. However, there had been little effort until recently to improve the performance or the ease of use of these amplifiers, partly because of a lack of any urgent motivation. The emergence of the field of quantum information processing in superconducting systems has changed this situation dramatically. The need to reliably read out the state of a given qubit using a very weak microwave probe within a very short time has led to renewed interest in these quantum-limited microwave amplifiers, which are already widely used as tools in this field. Here, we describe the quantum mechanical theory for one particular parametric amplifier design, called the flux-driven Josephson parametric amplifier, which we developed in 2008. The theory predicts the performance of this parametric amplifier, including its gain, bandwidth, and noise temperature. We also present the phase detection capability of this amplifier when it is operated with a pump power that is above the threshold, i.e., as a parametric phase-locked oscillator or parametron.

  7. Incorporating parametric uncertainty into population viability analysis models

    USGS Publications Warehouse

    McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.

    2011-01-01

    Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
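    The two-step simulation described above can be sketched directly: parametric uncertainty is drawn once per replicate (outer loop), temporal variance once per time step (inner loop). Growth rates, variances, and the quasi-extinction threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def extinction_prob(n0=50.0, years=50, reps=2000, param_sd=0.05):
    """Quasi-extinction probability from a two-loop Monte Carlo simulation."""
    extinct = 0
    for _ in range(reps):
        # replication loop: draw the uncertain mean growth rate once
        mean_r = rng.normal(0.0, param_sd)
        n = n0
        for _ in range(years):
            # time-step loop: temporal (environmental) variation
            n *= np.exp(rng.normal(mean_r, 0.15))
            if n < 2.0:               # quasi-extinction threshold
                extinct += 1
                break
    return extinct / reps

p_with = extinction_prob(param_sd=0.05)    # parametric uncertainty included
p_without = extinction_prob(param_sd=0.0)  # parametric uncertainty ignored
```

    Replicates that happen to draw a negative mean growth rate fail far more often, so including parametric uncertainty raises the estimated extinction risk, which is the qualitative effect the authors report for the piping plover.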

  8. Pitch-Learning Algorithm For Speech Encoders

    NASA Technical Reports Server (NTRS)

    Bhaskar, B. R. Udaya

    1988-01-01

    Adaptive algorithm detects and corrects errors in sequence of estimates of pitch period of speech. Algorithm operates in conjunction with techniques used to estimate pitch period. Used in such parametric and hybrid speech coders as linear predictive coders and adaptive predictive coders.

  9. A parametric approach to irregular fatigue prediction

    NASA Technical Reports Server (NTRS)

    Erismann, T. H.

    1972-01-01

    A parametric approach to irregular fatigue prediction is presented. The method proposed consists of two parts: empirical determination of certain characteristics of a material by means of a relatively small number of well-defined standard tests, and arithmetical application of the results obtained to arbitrary loading histories. The following groups of parameters are thus taken into account: (1) the variations of the mean stress, (2) the interaction of these variations and the superposed oscillating stresses, (3) the spectrum of the oscillating-stress amplitudes, and (4) the sequence of the oscillating-stress amplitudes. It is pointed out that only experimental verification can throw sufficient light upon the possibilities and limitations of this (or any other) prediction method.
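    For contrast, a far simpler cumulative-damage scheme, Miner's linear rule with a hypothetical S-N curve, shows how a loading spectrum is turned into a damage estimate. This is not the parametric method of the paper, which additionally accounts for mean-stress variation and amplitude sequence:

```python
# Hypothetical S-N curve N = C * S**(-m); constants are invented.
C, m = 1e12, 3.0

def cycles_to_failure(stress):
    """Cycles to failure at a given constant stress amplitude."""
    return C * stress ** (-m)

def miner_damage(spectrum):
    """Miner's rule: sum n_i / N_i over (stress_amplitude, applied_cycles) pairs."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Accumulated damage; failure is predicted when this reaches 1.0.
damage = miner_damage([(100.0, 2e5), (200.0, 5e4)])
```

    Miner's rule ignores load ordering entirely, which is precisely the limitation that sequence-aware parametric approaches like the one above aim to address.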

  10. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.

  11. A Hybrid Wind-Farm Parametrization for Mesoscale and Climate Models

    NASA Astrophysics Data System (ADS)

    Pan, Yang; Archer, Cristina L.

    2018-04-01

    To better understand the potential impact of wind farms on weather and climate at the regional to global scales, a new hybrid wind-farm parametrization is proposed for mesoscale and climate models. The proposed parametrization is a hybrid model because it is not based on physical processes or conservation laws, but on the multiple linear regression of the results of large-eddy simulations (LES) with the geometric properties of the wind-farm layout (e.g., the blockage ratio and blockage distance). The innovative aspect is that each wind turbine is treated individually based on its position in the farm and on the wind direction by predicting the velocity upstream of each turbine. The turbine-induced forces and added turbulence kinetic energy (TKE) are first derived analytically and then implemented in the Weather Research and Forecasting model. Idealized simulations of the offshore Lillgrund wind farm are conducted. The wind-speed deficit and TKE predicted with the hybrid model are in excellent agreement with those from the LES results, while the wind-power production estimated with the hybrid model is within 10% of that observed. Three additional wind farms with larger inter-turbine spacing than at Lillgrund are also considered, and a similar agreement with LES results is found, proving that the hybrid parametrization works well with any wind farm regardless of the spacing between turbines. These results indicate the wind-turbine position, wind direction, and added TKE are essential in accounting for the wind-farm effects on the surroundings, for which the hybrid wind-farm parametrization is a promising tool.
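    The regression step of the hybrid approach can be sketched with synthetic stand-ins for the LES results; the functional form and all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
blockage_ratio = rng.uniform(0.0, 0.5, 40)      # geometric layout property
blockage_dist = rng.uniform(2.0, 20.0, 40)      # distance, in rotor diameters
# Synthetic "LES" upstream velocities: deficit grows with blockage ratio
# and recovers with distance (plus noise).
u_les = (8.0 - 4.0 * blockage_ratio + 0.05 * blockage_dist
         + rng.normal(0.0, 0.05, 40))

# Multiple linear regression of upstream velocity on layout geometry.
A = np.column_stack([np.ones(40), blockage_ratio, blockage_dist])
coef, *_ = np.linalg.lstsq(A, u_les, rcond=None)

def upstream_velocity(ratio, dist):
    """Hybrid-parametrization-style prediction for one turbine."""
    return float(coef @ np.array([1.0, ratio, dist]))
```

    Once fitted, the regression is evaluated per turbine given its position-dependent blockage properties and the wind direction, which is what lets the parametrization treat each turbine individually.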

  12. The measurement of acoustic properties of limited size panels by use of a parametric source

    NASA Astrophysics Data System (ADS)

    Humphrey, V. F.

    1985-01-01

    A method of measuring the acoustic properties of limited size panels immersed in water, with a truncated parametric array used as the acoustic source, is described. The insertion loss and reflection loss of thin metallic panels, typically 0.45 m square, were measured at normal incidence by using this technique. Results were obtained for a wide range of frequencies (10 to 100 kHz) and were found to be in good agreement with the theoretical predictions for plane waves. Measurements were also made of the insertion loss of aluminium, Perspex and G.R.P. panels for angles of incidence up to 50°. The broad bandwidth available from the parametric source permitted detailed measurements to be made over a wide frequency range using a single transmitting transducer. The small spot sizes obtainable with the parametric source also helped to reduce the significance of diffraction from edges of the panel under test.

  13. Josephson Parametric Reflection Amplifier with Integrated Directionality

    NASA Astrophysics Data System (ADS)

    Westig, M. P.; Klapwijk, T. M.

    2018-06-01

    A directional superconducting parametric amplifier in the GHz frequency range is designed and analyzed, suitable for low-power read-out of microwave kinetic inductance detectors employed in astrophysics and, when combined with a nonreciprocal device at its input, also for circuit quantum electrodynamics. It consists of a one-wavelength-long nondegenerate Josephson parametric reflection amplifier circuit. The device has two Josephson-junction oscillators, connected via a tailored impedance to an on-chip passive circuit which directs the input to the output port. The amplifier provides a gain of 20 dB over a bandwidth of 220 MHz on the signal as well as on the idler portion of the amplified input, and the total photon shot noise referred to the input corresponds to at most approximately 1.3 photons per second per hertz of bandwidth. We predict a factor of 4 increase in dynamic range compared to conventional Josephson parametric amplifiers.

  14. Parametrization of Stillinger-Weber potential based on valence force field model: application to single-layer MoS2 and black phosphorus

    NASA Astrophysics Data System (ADS)

    Jiang, Jin-Wu

    2015-08-01

    We propose parametrizing the Stillinger-Weber potential for covalent materials starting from the valence force-field model. All geometrical parameters in the Stillinger-Weber potential are determined analytically according to the equilibrium condition for each individual potential term, while the energy parameters are derived from the valence force-field model. This parametrization approach transfers the accuracy of the valence force-field model to the Stillinger-Weber potential. Furthermore, the resulting Stillinger-Weber potential supports stable molecular dynamics simulations, as each potential term is at an energy-minimum state separately at the equilibrium configuration. We employ this procedure to parametrize Stillinger-Weber potentials for single-layer MoS2 and black phosphorus. The obtained Stillinger-Weber potentials predict an accurate phonon spectrum and mechanical behaviors. We also provide input scripts of these Stillinger-Weber potentials used by publicly available simulation packages including GULP and LAMMPS.

  15. Parametrization of Stillinger-Weber potential based on valence force field model: application to single-layer MoS2 and black phosphorus.

    PubMed

    Jiang, Jin-Wu

    2015-08-07

    We propose parametrizing the Stillinger-Weber potential for covalent materials starting from the valence force-field model. All geometrical parameters in the Stillinger-Weber potential are determined analytically according to the equilibrium condition for each individual potential term, while the energy parameters are derived from the valence force-field model. This parametrization approach transfers the accuracy of the valence force-field model to the Stillinger-Weber potential. Furthermore, the resulting Stillinger-Weber potential supports stable molecular dynamics simulations, as each potential term is at an energy-minimum state separately at the equilibrium configuration. We employ this procedure to parametrize Stillinger-Weber potentials for single-layer MoS2 and black phosphorus. The obtained Stillinger-Weber potentials predict an accurate phonon spectrum and mechanical behaviors. We also provide input scripts of these Stillinger-Weber potentials used by publicly available simulation packages including GULP and LAMMPS.

  16. Study of parametric instability in gravitational wave detectors with silicon test masses

    NASA Astrophysics Data System (ADS)

    Zhang, Jue; Zhao, Chunnong; Ju, Li; Blair, David

    2017-03-01

    Parametric instability is an intrinsic risk in high power laser interferometer gravitational wave detectors, in which the optical cavity modes interact with the acoustic modes of the mirrors, leading to exponential growth of the acoustic vibration. In this paper, we investigate the potential parametric instability for a proposed next generation gravitational wave detector, the LIGO Voyager blue design, with cooled silicon test masses of size 45 cm in diameter and 55 cm in thickness. It is shown that there would be about two unstable modes per test mass at an arm cavity power of 3 MW, with the highest parametric gain of  ∼76. While this is less than the predicted number of unstable modes for Advanced LIGO (∼40 modes with max gain of  ∼32 at the designed operating power of 830 kW), the importance of developing suitable instability suppression schemes is emphasized.

  17. Modeling the directivity of parametric loudspeaker

    NASA Astrophysics Data System (ADS)

    Shi, Chuang; Gan, Woon-Seng

    2012-09-01

    The emerging applications of the parametric loudspeaker, such as 3D audio, demand accurate directivity control at the audible (i.e., difference) frequency. Though delay-and-sum beamforming has proven adequate for adjusting the steering angle of the parametric loudspeaker, accurate prediction of the mainlobe and sidelobes remains a challenging problem. This is mainly because of the approximations used to derive the directivity at the difference frequency from the directivity at the primary frequencies, and the mismatches between the theoretical and measured directivities caused by system errors incurred at different stages of the implementation. In this paper, we propose a directivity model of the parametric loudspeaker. The model consists of two tuning vectors corresponding to the spacing error and the weight error at the primary frequencies, and it adopts a modified form of the product directivity principle for the difference frequency to further improve the modeling accuracy.
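
    A minimal sketch of the product directivity principle mentioned above (the approximation that the difference-frequency beam pattern is the product of the two primary beam patterns), for a hypothetical uniformly weighted line array; the element count, spacing, and frequencies are illustrative and not taken from the paper:

```python
import math

C = 343.0  # speed of sound in air, m/s

def array_factor(theta, f, n=8, d=0.01):
    """Normalized delay-and-sum beam pattern of an n-element line
    array with spacing d, steered to broadside."""
    psi = math.pi * d * math.sin(theta) * f / C
    if abs(math.sin(psi)) < 1e-12:
        return 1.0
    return abs(math.sin(n * psi) / (n * math.sin(psi)))

def difference_directivity(theta, f1=40e3, f2=41e3):
    """Product directivity: the two primary patterns multiplied."""
    return array_factor(theta, f1) * array_factor(theta, f2)

print(difference_directivity(0.0))             # broadside maximum, 1.0
print(difference_directivity(math.radians(30)))
```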

  18. Neural Network Modeling for Gallium Arsenide IC Fabrication Process and Device Characteristics.

    NASA Astrophysics Data System (ADS)

    Creech, Gregory Lee, I.

    This dissertation presents research focused on the utilization of neurocomputing technology to achieve enhanced yield and effective yield prediction in integrated circuit (IC) manufacturing. Artificial neural networks are employed to model complex relationships between material and device characteristics at critical stages of the semiconductor fabrication process. Whole wafer testing was performed on the starting substrate material and during wafer processing at four critical steps: Ohmic or Post-Contact, Post-Recess, Post-Gate and Final, i.e., at completion of fabrication. Measurements taken and subsequently used in modeling include, among others, doping concentrations, layer thicknesses, planar geometries, layer-to-layer alignments, resistivities, device voltages, and currents. The neural network architecture used in this research is the multilayer perceptron neural network (MLPNN). The MLPNN is trained in the supervised mode using the generalized delta learning rule. It has one hidden layer and uses continuous perceptrons. The research focuses on a number of different aspects. First is the development of inter-process stage models. Intermediate process stage models are created in a progressive fashion. Measurements of material and process/device characteristics taken at a specific processing stage and any previous stages are used as input to the model of the next processing stage characteristics. As the wafer moves through the fabrication process, measurements taken at all previous processing stages are used as input to each subsequent process stage model. Secondly, the development of neural network models for the estimation of IC parametric yield is demonstrated. Measurements of material and/or device characteristics taken at earlier fabrication stages are used to develop models of the final DC parameters. These characteristics are computed with the developed models and compared to acceptance windows to estimate the parametric yield. 
A sensitivity analysis is performed on the models developed during this yield estimation effort. This is accomplished by analyzing the total disturbance of network outputs due to perturbed inputs. When an input characteristic bears no, or little, statistical or deterministic relationship to the output characteristics, it can be removed as an input. Finally, neural network models are developed in the inverse direction. Characteristics measured after the final processing step are used as the input to model critical in-process characteristics. The modeled characteristics are used for whole wafer mapping and its statistical characterization. It is shown that this characterization can be accomplished with minimal in-process testing. The concepts and methodologies used in the development of the neural network models are presented. The modeling results are provided and compared to the actual measured values of each characteristic. An in-depth discussion of these results and ideas for future research are presented.
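
    The parametric-yield step described above, comparing modeled final DC parameters to acceptance windows, can be sketched as follows (the parameter names, window limits, and predictions are hypothetical, not values from the dissertation):

```python
# Estimate parametric yield: the fraction of die whose predicted final
# DC parameters all fall inside their acceptance windows.  The values
# below stand in for neural-network predictions; all are hypothetical.
windows = {"Vth": (-1.2, -0.8), "Idss": (18.0, 30.0)}  # (low, high)

predicted = [
    {"Vth": -1.0, "Idss": 24.0},   # passes both windows
    {"Vth": -0.7, "Idss": 25.0},   # Vth out of window
    {"Vth": -1.1, "Idss": 31.0},   # Idss out of window
    {"Vth": -0.9, "Idss": 20.0},   # passes both windows
]

def parametric_yield(dies, windows):
    ok = sum(
        all(lo <= d[p] <= hi for p, (lo, hi) in windows.items())
        for d in dies
    )
    return ok / len(dies)

print(f"estimated parametric yield: {parametric_yield(predicted, windows):.0%}")
```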

  19. Nonlinear rotordynamics analysis. [Space Shuttle Main Engine turbopumps

    NASA Technical Reports Server (NTRS)

    Noah, Sherif T.

    1991-01-01

    Effective analysis tools were developed for predicting the nonlinear rotordynamic behavior of the Space Shuttle Main Engine (SSME) turbopumps under steady and transient operating conditions. Using these methods, preliminary parametric studies were conducted on both generic and actual HPOTP (high pressure oxygen turbopump) models. In particular, a novel modified harmonic balance/alternating Fourier transform (HB/AFT) method was developed and used to conduct a preliminary study of the effects of fluid, bearing and seal forces on the unbalanced response of a multi-disk rotor in the presence of bearing clearances. The method makes it possible to determine periodic, sub-, super-synchronous and chaotic responses of a rotor system. The method also yields information about the stability of the obtained response, thus allowing bifurcation analyses. This provides a more effective capability for predicting the response under transient conditions by searching in proximity of resonance peaks. Preliminary results were also obtained for the nonlinear transient response of an actual HPOTP model using an efficient, newly developed numerical method based on convolution integration. Currently, the HB/AFT is being extended for determining the aperiodic response of nonlinear systems. Initial results show the method to be promising.
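
    The alternating frequency/time idea at the core of the HB/AFT method can be illustrated on a single harmonic: sample the assumed harmonic motion in the time domain, evaluate the nonlinearity there, and project the result back onto the harmonics. The cubic nonlinearity and amplitude below are a generic illustration, not the turbopump model:

```python
import math

# One AFT pass for a cubic nonlinearity f(x) = x^3 acting on an assumed
# single-harmonic motion x(t) = A*cos(w*t).  Analytically,
# x^3 = (3/4)A^3 cos(wt) + (1/4)A^3 cos(3wt), which the sampled
# projection should recover.
A, N = 2.0, 256                    # amplitude and samples per period

x = [A * math.cos(2 * math.pi * k / N) for k in range(N)]  # time samples
f = [xi**3 for xi in x]                                    # nonlinear force

def harmonic_cos(samples, h):
    """Cosine Fourier coefficient of harmonic h by discrete projection."""
    n = len(samples)
    return (2.0 / n) * sum(
        s * math.cos(2 * math.pi * h * k / n) for k, s in enumerate(samples)
    )

c1, c3 = harmonic_cos(f, 1), harmonic_cos(f, 3)
print(c1, 0.75 * A**3)   # first harmonic: (3/4)A^3
print(c3, 0.25 * A**3)   # third harmonic: (1/4)A^3
```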

  20. Nonparametric functional data estimation applied to ozone data: prediction and extreme value analysis.

    PubMed

    Quintela-del-Río, Alejandro; Francisco-Fernández, Mario

    2011-02-01

    The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists in fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
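
    The nonparametric return-level idea can be sketched with a plain empirical quantile: the T-period return level is the (1 − 1/T) quantile of the sample of maxima, estimated here by linear interpolation of the order statistics. This is a simple stand-in for the kernel-based functional estimators of the paper, and the data are synthetic:

```python
import random

def empirical_quantile(data, p):
    """Linear-interpolation quantile of the sorted order statistics."""
    xs = sorted(data)
    pos = p * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

def return_level(maxima, T):
    """Nonparametric T-period return level: the (1 - 1/T) quantile."""
    return empirical_quantile(maxima, 1.0 - 1.0 / T)

# Synthetic block maxima (arbitrary units), for illustration only.
random.seed(0)
maxima = [80 + 20 * random.random() for _ in range(100)]
print(f"10-period return level: {return_level(maxima, 10):.1f}")
```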

  1. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to that obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  2. Advanced Imaging Methods for Long-Baseline Optical Interferometry

    NASA Astrophysics Data System (ADS)

    Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.

    2008-11-01

    We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired by radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.

  3. Separation and purification of enzymes by continuous pH-parametric pumping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, S.Y.; Lin, C.K.; Juang, L.Y.

    1985-10-01

    Trypsin and chymotrypsin were separated from porcine pancreas extract by continuous pH-parametric pumping. CHOM (chicken ovomucoid) was covalently bound to laboratory-prepared crab chitin with glutaraldehyde to form an affinity adsorbent for trypsin. The pH levels of the top and bottom feeds were 8.0 and 2.5, respectively. A similar inhibitor, DKOM (duck ovomucoid), and pH levels of 8.0 and 2.0 for the top and bottom feeds, respectively, were used for the separation and purification of chymotrypsin. e-Amino caproyl-D-tryptophan methyl ester was coupled to chitosan to form an affinity adsorbent for stem bromelain; the pH levels were 8.7 and 3.0. Separation proceeded fairly well with high yield, e.g., 95% recovery of trypsin after continuous pumping for 10 cycles. Optimum operational conditions for the concentration and purification of these enzymes were investigated. The results showed that continuous pH-parametric pumping coupled with affinity chromatography is effective for the concentration and purification of enzymes. 19 references.

  4. BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.

    2013-03-20

    Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD 189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information beyond the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than those of parametric methods can be obtained for 11 spectral bins with bin sizes of ≈0.09 μm. This represents a reasonable trade-off between the higher degree of objectivity of the non-parametric methods and the smaller standard errors of parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.

  5. Integrative genetic risk prediction using non-parametric empirical Bayes classification.

    PubMed

    Zhao, Sihai Dave

    2017-06-01

    Genetic risk prediction is an important component of individualized medicine, but prediction accuracies remain low for many complex diseases. A fundamental limitation is the sample sizes of the studies on which the prediction algorithms are trained. One way to increase the effective sample size is to integrate information from previously existing studies. However, it can be difficult to find existing data that examine the target disease of interest, especially if that disease is rare or poorly studied. Furthermore, individual-level genotype data from these auxiliary studies are typically difficult to obtain. This article proposes a new approach to integrative genetic risk prediction of complex diseases with binary phenotypes. It accommodates possible heterogeneity in the genetic etiologies of the target and auxiliary diseases using a tuning parameter-free non-parametric empirical Bayes procedure, and can be trained using only auxiliary summary statistics. Simulation studies show that the proposed method can provide superior predictive accuracy relative to non-integrative as well as integrative classifiers. The method is applied to a recent study of pediatric autoimmune diseases, where it substantially reduces prediction error for certain target/auxiliary disease combinations. The proposed method is implemented in the R package ssa. © 2016, The International Biometric Society.
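
    One classical tuning-light non-parametric empirical Bayes device is Tweedie's formula, used here as a generic illustration rather than as the exact procedure of the ssa package: an observed summary z-score is shrunk toward the bulk of the data using only a kernel estimate of the marginal density of the z's. All numbers below are synthetic:

```python
import math

def tweedie_posterior_mean(z, zs, h=1.0, sigma2=1.0):
    """Tweedie's formula: E[theta | z] = z + sigma^2 * m'(z)/m(z),
    with the marginal density m estimated by a Gaussian kernel
    (bandwidth h is hand-picked for illustration, not tuned)."""
    m = md = 0.0
    for zi in zs:
        u = (z - zi) / h
        k = math.exp(-0.5 * u * u)
        m += k                          # kernel density contribution
        md += k * (-(z - zi) / h**2)    # its derivative contribution
    return z + sigma2 * md / m

# Synthetic summary z-scores concentrated near zero: an extreme
# observation is shrunk back toward the bulk.
zs = [0.1 * i - 1.0 for i in range(21)]   # grid from -1 to 1
print(tweedie_posterior_mean(3.0, zs))     # < 3.0: shrinkage
```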

  6. Model-based mean square error estimators for k-nearest neighbour predictions and applications using remotely sensed data for forest inventories

    Treesearch

    Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo

    2009-01-01

    New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
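
    A minimal sketch of pixel-level knn prediction with a spread-based uncertainty proxy; the reference data are invented, and the paper's model-based mean square error estimators are considerably more elaborate than the sample variance used here:

```python
# k-nearest-neighbour prediction of attribute Y for a target pixel from
# its ancillary feature vector X, with the spread of neighbour Y values
# as a crude uncertainty proxy.  Reference data are synthetic.
reference = [
    # (X = (band1, band2), Y = e.g. stand volume)
    ((0.10, 0.80), 120.0),
    ((0.12, 0.75), 110.0),
    ((0.40, 0.30),  35.0),
    ((0.42, 0.28),  40.0),
    ((0.11, 0.78), 115.0),
]

def knn_predict(x, ref, k=3):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(ref, key=lambda r: dist(x, r[0]))[:k]
    ys = [y for _, y in nearest]
    mean = sum(ys) / k
    var = sum((y - mean) ** 2 for y in ys) / (k - 1)
    return mean, var

pred, var = knn_predict((0.11, 0.79), reference)
print(f"knn prediction {pred:.1f} (neighbour sample variance {var:.1f})")
```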

  7. Parametrization of Drag and Turbulence for Urban Neighbourhoods with Trees

    NASA Astrophysics Data System (ADS)

    Krayenhoff, E. S.; Santiago, J.-L.; Martilli, A.; Christen, A.; Oke, T. R.

    2015-08-01

    Urban canopy parametrizations designed to be coupled with mesoscale models must predict the integrated effect of urban obstacles on the flow at each height in the canopy. To assess these neighbourhood-scale effects, results of microscale simulations may be horizontally averaged. Obstacle-resolving computational fluid dynamics (CFD) simulations of neutrally stratified flow through canopies of blocks (buildings) with varying distributions and densities of porous media (tree foliage) are conducted, and the spatially averaged impacts of these building-tree combinations on the flow are assessed. The accuracy with which a one-dimensional (column) model with a one-equation turbulence scheme represents spatially averaged CFD results is evaluated. The individual physical mechanisms by which trees and buildings affect flow in the column model are ranked by relative importance. For the treed urban configurations considered, the effects of buildings and trees may be considered independently. Building drag coefficients and length-scale effects need not be altered due to the presence of tree foliage; therefore, parametrization of spatially averaged flow through urban neighbourhoods with trees is greatly simplified. The new parametrization includes only the source and sink terms significant for the prediction of spatially averaged flow profiles: momentum drag due to buildings and trees (and the associated wake production of turbulent kinetic energy), modification of length scales by buildings, and enhanced dissipation of turbulent kinetic energy due to the small scale of tree foliage elements. Coefficients for the Santiago and Martilli (Boundary-Layer Meteorol 137: 417-439, 2010) parametrization of building drag coefficients and length scales are revised. 
Inclusion of the foliage terms from the new parametrization, in addition to the Santiago and Martilli building terms, reduces the root-mean-square difference (RMSD) of the column-model streamwise velocity component and turbulent kinetic energy relative to the CFD model by 89% in the canopy and 71% above the canopy on average for the highest leaf area density scenarios tested. RMSD values with the new parametrization are less than 20% of the mean layer magnitude for the streamwise velocity component within and above the canopy, and for above-canopy turbulent kinetic energy; RMSD values for within-canopy turbulent kinetic energy are negligible for most scenarios. The foliage-related portion of the new parametrization is required for scenarios with tree foliage of equal or greater height than the buildings, and for scenarios with foliage below roof height when building plan area densities are less than approximately 0.25.

  8. Algorithms and Parametric Studies for Assessing Effects of Two-Point Contact

    DOT National Transportation Integrated Search

    1984-02-01

    This report describes analyses conducted to assess the effects of two-point wheel rail contact on a single wheel on the prediction of wheel-rail forces, and for including these effects in a computer program for predicting curving behavior of rail veh...

  9. Empirical study of the dependence of the results of multivariable flexible survival analyses on model selection strategy.

    PubMed

    Binquet, C; Abrahamowicz, M; Mahboubi, A; Jooste, V; Faivre, J; Bonithon-Kopp, C; Quantin, C

    2008-12-30

    Flexible survival models, which avoid assumptions about proportionality of hazards (PH) or linearity of continuous covariate effects, bring the issues of model selection to a new level of complexity. Each 'candidate covariate' requires inter-dependent decisions regarding (i) its inclusion in the model, and representation of its effect on the log hazard as (ii) either constant over time or time-dependent (TD) and, for continuous covariates, (iii) either loglinear or non-loglinear (NL). Moreover, 'optimal' decisions for one covariate depend on the decisions regarding others. Thus, an efficient model-building strategy is necessary. We carried out an empirical study of the impact of the model selection strategy on the estimates obtained in flexible multivariable survival analyses of prognostic factors for mortality in 273 gastric cancer patients. We used 10 different strategies to select alternative multivariable parametric as well as spline-based models, allowing flexible modeling of non-parametric (TD and/or NL) effects. We employed 5-fold cross-validation to compare the predictive ability of alternative models. All flexible models indicated significant non-linearity and changes over time in the effect of age at diagnosis. Conventional 'parametric' models suggested a lack of period effect, whereas more flexible strategies indicated a significant NL effect. Cross-validation confirmed that flexible models predicted mortality better. The resulting differences in the 'final model' selected by the various strategies also had an impact on the risk prediction for individual subjects. Overall, our analyses underline (a) the importance of accounting for significant non-parametric effects of covariates and (b) the need to develop accurate model selection strategies for flexible survival analyses. Copyright 2008 John Wiley & Sons, Ltd.

  10. Widely tunable optical parametric oscillation in a Kerr microresonator.

    PubMed

    Sayson, Noel Lito B; Webb, Karen E; Coen, Stéphane; Erkintalo, Miro; Murdoch, Stuart G

    2017-12-15

    We report on the first experimental demonstration of widely tunable parametric sideband generation in a Kerr microresonator. Specifically, by pumping a silica microsphere in the normal dispersion regime, we achieve the generation of phase-matched four-wave mixing sidebands at large frequency detunings from the pump. Thanks to the role of higher-order dispersion in enabling phase matching, small variations of the pump wavelength translate into very large and controllable changes in the wavelengths of the generated sidebands: we experimentally demonstrate over 720 nm of tunability using a low-power continuous-wave pump laser in the C-band. We also derive simple theoretical predictions for the phase-matched sideband frequencies and discuss the predictions in light of the discrete cavity resonance frequencies. Our experimentally measured sideband wavelengths are in very good agreement with theoretical predictions obtained from our simple phase-matching analysis.
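
    The kind of phase-matching analysis mentioned above can be sketched from the standard scalar condition β2·Ω² + (β4/12)·Ω⁴ + 2γP = 0, which is quadratic in Ω² and has a real positive root in the normal-dispersion regime (β2 > 0) when β4 < 0. The dispersion and power values below are generic illustrations, not the microsphere's measured parameters:

```python
import math

def sideband_detuning(beta2, beta4, gamma, P):
    """Solve beta2*x + (beta4/12)*x^2 + 2*gamma*P = 0 for x = Omega^2
    and return the sideband angular-frequency detuning Omega (rad/s).
    Assumes normal dispersion (beta2 > 0) and beta4 < 0."""
    a, b, c = beta4 / 12.0, beta2, 2.0 * gamma * P
    x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)  # the positive root
    return math.sqrt(x)

# Generic illustrative values (SI units): normal GVD, negative beta4.
beta2 = 5e-27        # s^2/m
beta4 = -1e-55       # s^4/m
gamma, P = 1e-3, 0.1  # 1/(W m), W

omega = sideband_detuning(beta2, beta4, gamma, P)
print(f"sideband detuning ~ {omega / (2 * math.pi * 1e12):.1f} THz")
```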

  11. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time- frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time- varying nonstationary test points.

  12. CP violation experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsiung, Yee B.

    1990-07-01

    The E731 experiment at Fermilab has searched for "direct" CP violation in K⁰ → ππ, which is parametrized by ε′/ε. For the first time, in 20% of the data set, all four modes of K_{L,S} → π⁺π⁻ (π⁰π⁰) were collected simultaneously, providing a strong check on the systematic uncertainty. The result is Re(ε′/ε) = −0.0004 ± 0.0014 (stat) ± 0.0006 (syst), which provides no evidence for "direct" CP violation. CPT symmetry has also been tested by measuring the phase difference Δφ = φ₀₀ − φ₊₋ between the two CP-violating parameters η₀₀ and η₊₋. We find Δφ = −0.3° ± 2.4° (stat) ± 1.2° (syst). Using this together with the world average of φ₊₋, we find that the phase of the K⁰-K̄⁰ mixing parameter ε is 44.5° ± 1.5°. Both of these results agree well with the predictions of CPT symmetry. 17 refs., 10 figs.

  13. Analytical estimation on divergence and flutter vibrations of symmetrical three-phase induction stator via field-synchronous coordinates

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Wang, Shiyu; Sun, Wenjia; Xiu, Jie

    2017-01-01

    The electromagnetically induced parametric vibration of the symmetrical three-phase induction stator is examined. While it can be analyzed by approximate analytical or numerical methods, a more accurate and simpler analytical method is desirable. This work proposes a new method based on field-synchronous coordinates. A mechanical-electromagnetic coupling model is developed in this frame such that a time-invariant governing equation with a gyroscopic term can be obtained. With general vibration theory, the eigenvalue problem is formulated; the transition curves between the stable and unstable regions, and the response, are all determined as closed-form expressions of basic mechanical-electromagnetic parameters. The dependence of the instability behaviors on these parameters is demonstrated. The results imply that divergence and flutter instabilities can occur even for symmetrical motors with balanced, constant-amplitude, sinusoidal voltage. To verify the analytical predictions, this work also builds a time-variant model of the same system in the conventional inertial frame. The Floquet theory is employed to predict the parametric instability, and numerical integration is used to obtain the parametric response. The parametric instability and response both compare well against those obtained in the field-synchronous coordinates. The proposed field-synchronous coordinates allow a quick estimation of the electromagnetically induced vibration. The convenience offered by body-fixed coordinates is discussed across various fields.

  14. Parametric manipulation of the conflict signal and control-state adaptation.

    PubMed

    Forster, Sarah E; Carter, Cameron S; Cohen, Jonathan D; Cho, Raymond Y

    2011-04-01

    Mechanisms by which the brain monitors and modulates performance are an important focus of recent research. The conflict-monitoring hypothesis posits that the ACC detects conflict between competing response pathways which, in turn, signals for enhanced control. The N2, an ERP component that has been localized to ACC, has been observed after high conflict stimuli. As a candidate index of the conflict signal, the N2 would be expected to be sensitive to the degree of response conflict present, a factor that depends on both the features of external stimuli and the internal control state. In the present study, we sought to explore the relationship between N2 amplitude and these variables through use of a modified Eriksen flankers task in which target-distracter compatibility was parametrically varied. We hypothesized that greater target-distracter incompatibility would result in higher levels of response conflict, as indexed by both behavior and the N2 component. Consistent with this prediction, there were parametric degradations in behavioral performance and increases in N2 amplitudes with increasing incompatibility. Further, increasingly incompatible stimuli led to the predicted parametric increases in control on subsequent incompatible trials as evidenced by enhanced performance and reduced N2 amplitudes. These findings suggest that the N2 component and associated behavioral performance are finely sensitive to the degree of response conflict present and to the control adjustments that result from modulations in conflict.

  15. Parametric dependence of density limits in the Tokamak Experiment for Technology Oriented Research (TEXTOR): Comparison of thermal instability theory with experiment

    NASA Astrophysics Data System (ADS)

    Kelly, F. A.; Stacey, W. M.; Rapp, J.

    2001-11-01

    The observed dependence of the TEXTOR [Tokamak Experiment for Technology Oriented Research: E. Hintz, P. Bogen, H. A. Claassen et al., Contributions to High Temperature Plasma Physics, edited by K. H. Spatschek and J. Uhlenbusch (Akademie Verlag, Berlin, 1994), p. 373] density limit on global parameters (I, B, P, etc.) and wall conditioning is compared with the density limit parametric scaling predicted by thermal instability theory. It is first necessary to relate the edge parameters of the thermal instability theory to the line-averaged density n̄ and the other global parameters. The observed parametric dependence of the density limit in TEXTOR is generally consistent with the predicted density limit scaling of thermal instability theory. The observed wall-conditioning dependence of the density limit can be reconciled with the theory in terms of the temperature dependence of the radiative emissivity of different impurities in the plasma edge. The thermal instability theory also provides an explanation of why symmetric detachment precedes radiative collapse for most low-power shots, while a multifaceted asymmetric radiation from the edge (MARFE) precedes detachment for most high-power shots.

  16. A capacitive ultrasonic transducer based on parametric resonance.

    PubMed

    Surappa, Sushruta; Satir, Sarp; Levent Degertekin, F

    2017-07-24

    A capacitive ultrasonic transducer based on a parametric resonator structure is described and experimentally demonstrated. The transducer structure, which we call the capacitive parametric ultrasonic transducer (CPUT), uses a parallel-plate capacitor with a movable membrane as part of a degenerate parametric series RLC resonator circuit with a resonance frequency of f₀. When the capacitor plate is driven by an incident harmonic ultrasonic wave at the pump frequency of 2f₀ with sufficient amplitude, the RLC circuit becomes unstable and ultrasonic energy can be efficiently converted to an electrical signal at frequency f₀ in the RLC circuit. An important characteristic of the CPUT is that, unlike other electrostatic transducers, it does not require DC bias or permanent charging to be used as a receiver. We describe the operation of the CPUT using an analytical model and numerical simulations, which show drive-amplitude-dependent operating regimes, including parametric resonance when a certain threshold is exceeded. We verify these predictions by experiments with a micromachined membrane-based capacitor structure in immersion, where ultrasonic waves incident at 4.28 MHz parametrically drive a signal with significant amplitude in the 2.14 MHz RLC circuit. With its unique features, the CPUT can be particularly advantageous for applications such as wireless power transfer for biomedical implants and acoustic sensing.
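
    The threshold behaviour described above is that of a degenerate parametric (Mathieu-type) resonance: with the stiffness pumped at twice the natural frequency, oscillations at the natural frequency grow once the modulation depth exceeds roughly 2/Q. A toy simulation under these assumptions (normalized units and made-up values, not the device's actual circuit parameters):

```python
import math

# Damped oscillator with its restoring term pumped at twice the natural
# frequency: q'' + (w0/Q) q' + w0^2 (1 + m cos(2 w0 t)) q = 0.
# Above the threshold (m > ~2/Q) the amplitude grows exponentially.
W0, Q = 2 * math.pi, 50.0   # normalized natural frequency, quality factor

def simulate(m, periods=40, steps=400):
    dt = 1.0 / steps         # one normalized period = 1 time unit
    q, v, t = 1e-3, 0.0, 0.0
    def acc(q, v, t):
        return -(W0 / Q) * v - W0**2 * (1 + m * math.cos(2 * W0 * t)) * q
    peak = abs(q)
    for _ in range(periods * steps):
        # Classical RK4 step for the first-order system (q, v).
        k1q, k1v = v, acc(q, v, t)
        k2q, k2v = v + 0.5*dt*k1v, acc(q + 0.5*dt*k1q, v + 0.5*dt*k1v, t + 0.5*dt)
        k3q, k3v = v + 0.5*dt*k2v, acc(q + 0.5*dt*k2q, v + 0.5*dt*k2v, t + 0.5*dt)
        k4q, k4v = v + dt*k3v, acc(q + dt*k3q, v + dt*k3v, t + dt)
        q += dt * (k1q + 2*k2q + 2*k3q + k4q) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += dt
        peak = max(peak, abs(q))
    return peak

print(simulate(0.20))   # well above threshold (~2/Q = 0.04): growth
print(simulate(0.01))   # below threshold: the oscillation decays
```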

  17. Thermal-structural modeling of polymer Bragg grating waveguides illuminated by a light emitting diode.

    PubMed

    Joon Kim, Kyoung; Bar-Cohen, Avram; Han, Bongtae

    2012-02-20

    This study reports both analytical and numerical thermal-structural models of polymer Bragg grating (PBG) waveguides illuminated by a light emitting diode (LED). A polymethyl methacrylate (PMMA) Bragg grating (BG) waveguide is chosen as an analysis vehicle to explore parametric effects of incident optical powers and substrate materials on the thermal-structural behavior of the BG. Analytical models are verified by comparing analytically predicted average excess temperatures, and thermally induced axial strains and stresses with numerical predictions. A parametric study demonstrates that the PMMA substrate induces more adverse effects, such as higher excess temperatures, complex axial temperature profiles, and greater and more complicated thermally induced strains in the BG compared with the Si substrate. © 2012 Optical Society of America

  18. Approximating prediction uncertainty for random forest regression models

    Treesearch

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest have increased for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
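
    The spread of the individual tree predictions gives a simple first-order uncertainty proxy, in the spirit of (though much simpler than) the approximation studied in the paper; the per-tree predictions below are made-up numbers:

```python
# Approximate a prediction interval for a random-forest regression from
# the spread of its individual tree predictions (hypothetical values).
tree_predictions = [41.0, 38.5, 44.2, 40.1, 39.8, 42.7, 43.0, 37.9,
                    40.6, 41.9]

def ensemble_summary(preds, q=0.95):
    """Ensemble mean plus a crude central interval from order statistics."""
    preds = sorted(preds)
    n = len(preds)
    mean = sum(preds) / n
    lo = preds[max(0, int((1 - q) / 2 * n))]
    hi = preds[min(n - 1, int((1 + q) / 2 * n))]
    return mean, (lo, hi)

mean, (lo, hi) = ensemble_summary(tree_predictions)
print(f"forest prediction {mean:.2f}, ~95% spread [{lo}, {hi}]")
```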

  19. Parametric Model of an Aerospike Rocket Engine

    NASA Technical Reports Server (NTRS)

    Korte, J. J.

    2000-01-01

    A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.

  1. Driven Bose-Hubbard model with a parametrically modulated harmonic trap

    NASA Astrophysics Data System (ADS)

    Mann, N.; Bakhtiari, M. Reza; Massel, F.; Pelster, A.; Thorwart, M.

    2017-04-01

    We investigate a one-dimensional Bose-Hubbard model in a parametrically driven global harmonic trap. The delicate interplay of both the local interaction of the atoms in the lattice and the driving of the global trap allows us to control the dynamical stability of the trapped quantum many-body state. The impact of the atomic interaction on the dynamical stability of the driven quantum many-body state is revealed in the regime of weak interaction by analyzing a discretized Gross-Pitaevskii equation within a Gaussian variational ansatz, yielding a Mathieu equation for the condensate width. The parametric resonance condition is shown to be modified by the atom interaction strength. In particular, the effective eigenfrequency is reduced for growing interaction in the mean-field regime. For a stronger interaction, the impact of the global parametric drive is determined by the numerically exact time-evolving block decimation scheme. When the trapped bosons in the lattice are in a Mott insulating state, the absorption of energy from the driving field is suppressed due to the strongly reduced local compressibility of the quantum many-body state. In particular, we find that the width of the local Mott region exhibits breathing dynamics. Finally, we observe that the global modulation also induces an effective time-independent inhomogeneous hopping strength for the atoms.
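    The Mathieu-equation stability analysis mentioned for the condensate width can be made concrete with a Floquet (monodromy-matrix) computation; the detuning and drive strengths below are illustrative, not values from the paper:

```python
import numpy as np

def floquet_multipliers(delta, eps, n_steps=4000):
    """For w'' + (delta + eps*cos t) w = 0 (a Mathieu equation with a
    2*pi-periodic coefficient), propagate the 2x2 fundamental matrix over
    one period with RK4 and return the moduli of the eigenvalues of the
    monodromy matrix. A multiplier with modulus > 1 signals parametric
    resonance (an unstable breathing mode)."""
    dt = 2 * np.pi / n_steps

    def A(t):
        return np.array([[0.0, 1.0],
                         [-(delta + eps * np.cos(t)), 0.0]])

    M = np.eye(2)
    for i in range(n_steps):
        t = i * dt
        k1 = A(t) @ M
        k2 = A(t + dt/2) @ (M + dt/2 * k1)
        k3 = A(t + dt/2) @ (M + dt/2 * k2)
        k4 = A(t + dt) @ (M + dt * k3)
        M = M + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return np.abs(np.linalg.eigvals(M))

# principal resonance tongue sits near delta = 1/4 (drive at twice the
# natural frequency); between tongues the motion stays bounded
unstable = floquet_multipliers(delta=0.25, eps=0.20).max()
stable = floquet_multipliers(delta=0.50, eps=0.05).max()
```

    Shifting the effective eigenfrequency, as the interaction does in the abstract, amounts to moving delta relative to the resonance tongues.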

  2. A Lévy-flight diffusion model to predict transgenic pollen dispersal.

    PubMed

    Vallaeys, Valentin; Tyson, Rebecca C; Lane, W David; Deleersnijder, Eric; Hanert, Emmanuel

    2017-01-01

    The containment of genetically modified (GM) pollen is an issue of significant concern for many countries. For crops that are bee-pollinated, model predictions of outcrossing rates depend on the movement hypothesis used for the pollinators. Previous work studying pollen spread by honeybees, the most important pollinator worldwide, was based on the assumption that honeybee movement can be well approximated by Brownian motion. A number of recent studies, however, suggest that pollinating insects such as bees perform Lévy flights in their search for food. Such flight patterns yield much larger rates of spread, and so the Brownian motion assumption might significantly underestimate the risk associated with GM pollen outcrossing in conventional crops. In this work, we propose a mechanistic model for pollen dispersal in which the bees perform truncated Lévy flights. This assumption leads to a fractional-order diffusion model for pollen that can be tuned to model motion ranging from pure Brownian to pure Lévy. We parametrize our new model by taking the same pollen dispersal dataset used in Brownian motion modelling studies. By numerically solving the model equations, we show that the isolation distances required to keep outcrossing levels below a certain threshold are substantially increased by comparison with the original predictions, suggesting that isolation distances may need to be much larger than originally thought. © 2017 The Author(s).
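    The qualitative claim, that truncated Lévy flights spread far faster than Brownian motion, is easy to check with a toy random walk (the step distribution and truncation below are illustrative, not the paper's fitted fractional-diffusion kernel):

```python
import numpy as np

def final_displacement(alpha, trunc, n_walks=20000, n_steps=50, seed=1):
    """2-D random walks with Pareto(alpha)-distributed step lengths
    (minimum 1, truncated at `trunc`) and uniform headings; small alpha
    gives heavy Levy-like tails, large alpha is effectively Brownian."""
    rng = np.random.default_rng(seed)
    steps = np.minimum(rng.pareto(alpha, (n_walks, n_steps)) + 1.0, trunc)
    theta = rng.uniform(0.0, 2 * np.pi, (n_walks, n_steps))
    x = (steps * np.cos(theta)).sum(axis=1)
    y = (steps * np.sin(theta)).sum(axis=1)
    return np.hypot(x, y)

levy = final_displacement(alpha=1.5, trunc=1000.0)   # truncated Levy flight
brown = final_displacement(alpha=8.0, trunc=1000.0)  # near-Brownian control
```

    The far tail of the dispersal kernel, which is what sets isolation distances, is dramatically wider in the Lévy case.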

  3. Creep failure of a reactor pressure vessel lower head under severe accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilch, M.M.; Ludwigsen, J.S.; Chu, T.Y.

    A severe accident in a nuclear power plant could result in the relocation of large quantities of molten core material onto the lower head of the reactor pressure vessel (RPV). In the absence of inherent cooling mechanisms, failure of the RPV ultimately becomes possible under the combined effects of system pressure and the thermal heat-up of the lower head. Sandia National Laboratories has performed seven experiments at 1:5 scale simulating creep failure of an RPV lower head. This paper describes a modeling program that complements the experimental program. Analyses have been performed using the general-purpose finite-element code ABAQUS-5.6. In order to make ABAQUS solve the specific problem at hand, a material constitutive model that utilizes temperature-dependent properties has been developed and attached to the ABAQUS executable through its UMAT utility. Analyses of the LHF-1 experiment predict instability-type failure. Predicted strains are delayed relative to the observed strain histories. Parametric variations on either the yield stress, creep rate, or both (within the range of material property data) can bring predictions into agreement with experiment. The analysis indicates that it is necessary to conduct material property tests on the actual material used in the experimental program. The constitutive model employed in the present analyses is the subject of a separate publication.

  4. A Lévy-flight diffusion model to predict transgenic pollen dispersal

    PubMed Central

    Vallaeys, Valentin; Tyson, Rebecca C.; Lane, W. David; Deleersnijder, Eric

    2017-01-01

    The containment of genetically modified (GM) pollen is an issue of significant concern for many countries. For crops that are bee-pollinated, model predictions of outcrossing rates depend on the movement hypothesis used for the pollinators. Previous work studying pollen spread by honeybees, the most important pollinator worldwide, was based on the assumption that honeybee movement can be well approximated by Brownian motion. A number of recent studies, however, suggest that pollinating insects such as bees perform Lévy flights in their search for food. Such flight patterns yield much larger rates of spread, and so the Brownian motion assumption might significantly underestimate the risk associated with GM pollen outcrossing in conventional crops. In this work, we propose a mechanistic model for pollen dispersal in which the bees perform truncated Lévy flights. This assumption leads to a fractional-order diffusion model for pollen that can be tuned to model motion ranging from pure Brownian to pure Lévy. We parametrize our new model by taking the same pollen dispersal dataset used in Brownian motion modelling studies. By numerically solving the model equations, we show that the isolation distances required to keep outcrossing levels below a certain threshold are substantially increased by comparison with the original predictions, suggesting that isolation distances may need to be much larger than originally thought. PMID:28123097

  5. Parametrization study of the land multiparameter VTI elastic waveform inversion

    NASA Astrophysics Data System (ADS)

    He, W.; Plessix, R.-É.; Singh, S.

    2018-06-01

    Multiparameter inversion of seismic data remains challenging due to the trade-off between the different elastic parameters and the non-uniqueness of the solution. The sensitivity of the seismic data to a given subsurface elastic parameter depends on the source and receiver ray/wave path orientations at the subsurface point. In a high-frequency approximation, this is commonly analysed through the study of the radiation patterns that indicate the sensitivity of each parameter versus the incoming (from the source) and outgoing (to the receiver) angles. In practice, this means that the inversion result becomes sensitive to the choice of parametrization, notably because the null-space of the inversion depends on this choice. We can use a least-overlapping parametrization that minimizes the overlaps between the radiation patterns, in which case each parameter is sensitive only in a restricted angle domain, or an overlapping parametrization that contains a parameter sensitive to all angles, in which case overlaps between the radiation patterns occur. Considering a multiparameter inversion in an elastic vertically transverse isotropic medium and a complex land geological setting, we show that the inversion with the least-overlapping parametrization gives less satisfactory results than with the overlapping parametrization. The difficulties come from the complex wave paths, which make it difficult to predict the areas of sensitivity of each parameter. This shows that the parametrization choice should not only be based on the radiation pattern analysis but also on the angular coverage at each subsurface point, which depends on the geology and the acquisition layout.

  6. Activation barrier scaling and crossover for noise-induced switching in micromechanical parametric oscillators.

    PubMed

    Chan, H B; Stambaugh, C

    2007-08-10

    We explore fluctuation-induced switching in parametrically driven micromechanical torsional oscillators. The oscillators possess one, two, or three stable attractors depending on the modulation frequency. Noise induces transitions between the coexisting attractors. Near the bifurcation points, the activation barriers are found to have a power law dependence on frequency detuning with critical exponents that are in agreement with predicted universal scaling relationships. At large detuning, we observe a crossover to a different power law dependence with an exponent that is device specific.

  7. Parametric Trace Slicing

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore (Inventor); Chen, Feng (Inventor); Chen, Guo-fang; Wu, Yamei; Meredith, Patrick O. (Inventor)

    2014-01-01

    A program trace is obtained and events of the program trace are traversed. For each event identified in traversing the program trace, a trace slice of which the identified event is a part is identified based on the parameter instance of the identified event. For each trace slice of which the identified event is a part, the identified event is added to an end of a record of the trace slice. These parametric trace slices can be used in a variety of different manners, such as for monitoring, mining, and predicting.
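    The slicing step itself is simple to sketch: traverse the trace once and append each event to the slice keyed by its parameter binding. The toy version below assumes fully instantiated parameters and omits the least-upper-bound handling of partial bindings described in the invention:

```python
from collections import defaultdict

def slice_trace(trace):
    """Traverse a parametric trace once; for each event, append its name to
    the slice keyed by its (fully instantiated) parameter binding."""
    slices = defaultdict(list)
    for name, params in trace:
        slices[frozenset(params.items())].append(name)
    return dict(slices)

trace = [
    ("acquire", {"lock": "L1"}),
    ("acquire", {"lock": "L2"}),
    ("release", {"lock": "L2"}),
    ("release", {"lock": "L1"}),
]
slices = slice_trace(trace)
```

    Each slice can then be handed independently to a monitor, miner, or predictor, which is how the patent uses them.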

  8. Parametric system identification of catamaran for improving controller design

    NASA Astrophysics Data System (ADS)

    Timpitak, Surasak; Prempraneerach, Pradya; Pengwang, Eakkachai

    2018-01-01

    This paper presents an estimation of a simplified dynamic model for only the surge and yaw motions of a catamaran, using system identification (SI) techniques to determine the associated unknown parameters. These methods will enhance the design process for the motion control system of an Unmanned Surface Vehicle (USV). The simulation results demonstrate an effective way to solve for damping forces and to determine added masses by applying least-squares and AutoRegressive eXogenous (ARX) methods. Both methods are then evaluated according to estimated parametric errors from the vehicle’s dynamic model. The ARX method, which yields better estimation accuracy, can then be applied to identify unknown parameters as well as to help improve the controller design of a real unmanned catamaran.
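    The ARX step reduces to ordinary least squares on lagged inputs and outputs. The sketch below recovers the coefficients of a made-up second-order system, not the catamaran's actual surge/yaw dynamics:

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares ARX fit for
    y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb].
    Builds the lagged-regressor matrix and solves it in one lstsq call."""
    n0 = max(na, nb)
    rows, target = [], []
    for k in range(n0, len(y)):
        rows.append(np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]]))
        target.append(y[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(target), rcond=None)
    return theta[:na], theta[na:]

# synthetic response from an invented stable 2nd-order system
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.1*y[k-1] - 0.4*y[k-2] + 0.5*u[k-1] + 0.2*u[k-2]
a, b = fit_arx(u, y)
```

    With noise-free data the fit recovers the generating coefficients exactly; with measurement noise the same normal equations give the least-squares estimates the paper compares against.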

  9. Design for disassembly and sustainability assessment to support aircraft end-of-life treatment

    NASA Astrophysics Data System (ADS)

    Savaria, Christian

    Gas turbine engine design is a multidisciplinary and iterative process. Many design iterations are necessary to address the challenges among the disciplines. In the creation of a new engine architecture, the design time is crucial in capturing new business opportunities. At the detail design phase, it has proven very difficult to correct an unsatisfactory design. To overcome this difficulty, the concept of Multi-Disciplinary Optimization (MDO) at the preliminary design phase (Preliminary MDO or PMDO) is used, allowing more freedom to perform changes in the design. PMDO also reduces the design time at the preliminary design phase. The concept of PMDO was used to create parametric models and new correlations for high-pressure gas turbine housings and shroud segments as part of a new design process. First, dedicated parametric models were created because of their reusability and versatility. Their ease of use compared to non-parameterized models allows more design iterations, reducing setup and design time. Second, geometry correlations were created to minimize the number of parameters used in turbine housing and shroud segment design. Since the turbine housing and shroud segment geometries are required in tip clearance analyses, care was taken not to oversimplify the parametric formulation. In addition, a user interface was developed to interact with the parametric models and improve the design time. Third, the cooling flow predictions require many engine parameters (i.e., geometric and performance parameters and air properties) and a reference shroud segment. A second correlation study was conducted to minimize the number of engine parameters required in the cooling flow predictions and to facilitate the selection of a reference shroud segment.
Finally, the parametric models, the geometry correlations, and the user interface resulted in a time saving of 50% and an increase in accuracy of 56% in the new design system compared to the existing design system. Also, regarding the cooling flow correlations, the number of engine parameters was reduced by a factor of 6 to create a simplified prediction model and hence a faster shroud segment selection process.

  10. Increased Reliability for Single-Case Research Results: Is the Bootstrap the Answer?

    ERIC Educational Resources Information Center

    Parker, Richard I.

    2006-01-01

    There is need for objective and reliable single-case research (SCR) results in the movement toward evidence-based interventions (EBI), for inclusion in meta-analyses, and for funding accountability in clinical contexts. Yet SCR deals with data that often do not conform to parametric data assumptions and that yield results of low reliability. A…

  11. A Distributional Difference-in-Difference Evaluation of the Response of School Expenditures to Reforms and Tax Limits

    ERIC Educational Resources Information Center

    McMillen, Daniel P.; Singell, Larry D., Jr.

    2010-01-01

    Prior work uses a parametric approach to study the distributional effects of school finance reform and finds evidence that reform yields greater equality of school expenditures by lowering spending in high-spending districts (leveling down) or increasing spending in low-spending districts (leveling up). We develop a kernel density…

  12. Formation of algae growth constitutive relations for improved algae modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gharagozloo, Patricia E.; Drewry, Jessica Louise.

    This SAND report summarizes research conducted as part of a two-year Laboratory Directed Research and Development (LDRD) project to improve our abilities to model algal cultivation. Algae-based biofuels have generated much excitement due to their potentially large oil yield from relatively small land use and without interfering with the food or water supply. Algae mitigate atmospheric CO2 through metabolism. Efficient production of algal biofuels could reduce dependence on foreign oil by providing a domestic renewable energy source. Important factors controlling algal productivity include temperature, nutrient concentrations, salinity, pH, and the light-to-biomass conversion rate. Computational models allow for inexpensive predictions of algae growth kinetics in these non-ideal conditions for various bioreactor sizes and geometries without the need for multiple expensive measurement setups. However, these models need to be calibrated for each algal strain. In this work, we conduct a parametric study of key marine algae strains and apply the findings to a computational model.

  13. Quantum and classical noise in practical quantum-cryptography systems based on polarization-entangled photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castelletto, S.; Degiovanni, I.P.; Rastello, M.L.

    2003-02-01

    Quantum-cryptography key distribution (QCKD) experiments have been recently reported using polarization-entangled photons. However, in any practical realization, quantum systems suffer from either unwanted or induced interactions with the environment and the quantum measurement system, showing up as quantum and, ultimately, statistical noise. In this paper, we investigate how an ideal polarization entanglement in spontaneous parametric down-conversion (SPDC) suffers quantum noise in its practical implementation as a secure quantum system, yielding errors in the transmitted bit sequence. Since all SPDC-based QCKD schemes rely on the measurement of coincidence to assert the bit transmission between the two parties, we bundle up the overall quantum and statistical noise in an exhaustive model to calculate the accidental coincidences. This model predicts the quantum-bit error rate and the sifted key and allows comparisons between different security criteria of the hitherto proposed QCKD protocols, resulting in an objective assessment of performances and advantages of different systems.
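    In the simplest textbook form of such bookkeeping, uncorrelated singles rates S_A and S_B with coincidence window tau produce an accidental rate R_acc = S_A*S_B*tau, and about half of the accidentals land on the wrong outcome. The rates below are invented, and the paper's model contains further terms:

```python
def accidental_model(singles_a, singles_b, true_coinc, window):
    """Toy accidental-coincidence estimate: uncorrelated singles (Hz) that
    fall within the same coincidence window (s) register as coincidences;
    roughly half of these accidentals contribute errors to the QBER."""
    acc = singles_a * singles_b * window      # R_acc = S_A * S_B * tau
    total = true_coinc + acc
    qber = 0.5 * acc / total
    return acc, qber

acc, qber = accidental_model(1e5, 1e5, 5000.0, 1e-9)
```

    Even this crude estimate shows why narrowing the coincidence window directly lowers the quantum-bit error rate floor.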

  14. Josephson parametric converter saturation and higher order effects

    NASA Astrophysics Data System (ADS)

    Liu, G.; Chien, T.-C.; Cao, X.; Lanes, O.; Alpern, E.; Pekker, D.; Hatridge, M.

    2017-11-01

    Microwave parametric amplifiers based on Josephson junctions have become indispensable components of many quantum information experiments. One key limitation which has not been well predicted by theory is the gain saturation behavior which limits the amplifier's ability to process large amplitude signals. The typical explanation for this behavior in phase-preserving amplifiers based on three-wave mixing, such as the Josephson Parametric Converter, is pump depletion, in which the consumption of pump photons to produce amplification results in a reduction in gain. However, in this work, we present experimental data and theoretical calculations showing that the fourth-order Kerr nonlinearities inherent in Josephson junctions are the dominant factor. The Kerr-based theory has the unusual property of causing saturation to both lower and higher gains, depending on bias conditions. This work presents an efficient methodology for optimizing device performance in the presence of Kerr nonlinearities while retaining device tunability and points to the necessity of controlling higher-order Hamiltonian terms to make further improvements in parametric devices.

  15. Study of magnetic resonance with parametric modulation in a potassium vapor cell

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Wang, Zhiguo; Peng, Xiang; Li, Wenhao; Li, Songjian; Guo, Hong; Cream Team

    2017-04-01

    A typical magnetic-resonance scheme employs a static bias magnetic field and an orthogonal driving magnetic field oscillating at the Larmor frequency, at which the atomic polarization precesses around the static magnetic field. We demonstrate in a potassium vapor cell the variations of the resonance condition and the spin precession dynamics resulting from the parametric modulation of the bias field, which are in good agreement with theoretical predictions from the Bloch equation. We show that the driving magnetic field, with its frequency detuned by different harmonics of the parametric modulation frequency, can lead to resonance as well. Also, a series of frequency sidebands centered at the driving frequency and spaced by the parametric modulation frequency can be observed in the precession of the atomic polarization. These effects could be used in different atomic magnetometry applications. This work is supported by the National Science Fund for Distinguished Young Scholars of China (Grant No. 61225003) and the National Natural Science Foundation of China (Grant Nos. 61531003 and 61571018).

  16. Near-self-imaging cavity for three-mode optoacoustic parametric amplifiers using silicon microresonators.

    PubMed

    Liu, Jian; Torres, F A; Ma, Yubo; Zhao, C; Ju, L; Blair, D G; Chao, S; Roch-Jeune, I; Flaminio, R; Michel, C; Liu, K-Y

    2014-02-10

    Three-mode optoacoustic parametric amplifiers (OAPAs), in which a pair of photon modes are strongly coupled to an acoustic mode, provide a general platform for investigating self-cooling, parametric instability and very sensitive transducers. Their realization requires an optical cavity with tunable transverse modes and a high quality-factor mirror resonator. This paper presents the design of a table-top OAPA based on a near-self-imaging cavity design, using a silicon torsional microresonator. The design achieves a tuning coefficient for the optical mode spacing of 2.46 MHz/mm. This allows tuning of the mode spacing between amplification and self-cooling regimes of the OAPA device. Based on demonstrated resonator parameters (frequencies ∼400 kHz and quality factors ∼7.5×10^5), we predict that the OAPA can achieve parametric instability with 1.6 μW of input power and mode cooling by a factor of 1.9×10^4 with 30 mW of input power.

  17. Model risk for European-style stock index options.

    PubMed

    Gençay, Ramazan; Gibson, Rajna

    2007-01-01

    In empirical modeling, there have been two strands for pricing in the options literature, namely the parametric and nonparametric models. Often, the support for the nonparametric methods is based on a benchmark such as the Black-Scholes (BS) model with constant volatility. In this paper, we study the stochastic volatility (SV) and stochastic volatility random jump (SVJ) models as parametric benchmarks against feedforward neural network (FNN) models, a class of neural network models. Our choice for FNN models is due to their well-studied universal approximation properties of an unknown function and its partial derivatives. Since the partial derivatives of an option pricing formula are risk pricing tools, an accurate estimation of the unknown option pricing function is essential for pricing and hedging. Our findings indicate that FNN models offer themselves as robust option pricing tools, over their sophisticated parametric counterparts in predictive settings. There are two routes to explain the superiority of FNN models over the parametric models in forecast settings. These are nonnormality of return distributions and adaptive learning.

  18. Model and parametric uncertainty in source-based kinematic models of earthquake ground motion

    USGS Publications Warehouse

    Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur

    2011-01-01

    Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
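    The sigma bookkeeping, separating between-model (epistemic) spread from residual variability in natural-log units, can be sketched for a models-by-sites table of predictions (the demo numbers are synthetic):

```python
import numpy as np

def ln_sigma_split(preds):
    """preds[m, s]: ground-motion prediction of model m at site s.
    Returns (between-model sigma, residual sigma) in natural-log units,
    after removing per-site means and per-model biases."""
    ln = np.log(preds)
    site_mean = ln.mean(axis=0)               # average model, per site
    model_bias = ln.mean(axis=1) - ln.mean()  # per-model offset
    resid = ln - site_mean - model_bias[:, None]
    return model_bias.std(ddof=1), resid.std(ddof=1)

# two synthetic "models" that differ by a constant factor of exp(0.4)
preds = np.array([[1.0, 2.0, 4.0],
                  [np.e**0.4 * 1.0, np.e**0.4 * 2.0, np.e**0.4 * 4.0]])
between, within = ln_sigma_split(preds)
```

    Here all the variability is epistemic (a constant model bias), so the residual term vanishes; with real simulations both terms contribute, as in the 0.2 to 0.4 versus 0.5 to 0.8 figures above.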

  19. Serum and Plasma Metabolomic Biomarkers for Lung Cancer.

    PubMed

    Kumar, Nishith; Shahjaman, Md; Mollah, Md Nurul Haque; Islam, S M Shahinul; Hoque, Md Aminul

    2017-01-01

    In drug discovery and early disease prediction for lung cancer, metabolomic biomarker detection is very important. The mortality rate can be decreased if cancer is predicted at an earlier stage. Current diagnostic techniques for lung cancer are not prognostic. However, if we know which metabolites have intensity levels that change considerably between cancer subjects and control subjects, then it becomes easier to diagnose the disease early as well as to discover drugs. Therefore, in this paper we have identified the influential plasma and serum blood sample metabolites for lung cancer and also identified the biomarkers that will be helpful for early disease prediction as well as for drug discovery. To identify the influential metabolites, we considered a parametric and a non-parametric test, namely Student's t-test and the Kruskal-Wallis test. We also categorized the up-regulated and down-regulated metabolites by the heatmap plot and identified the biomarkers by support vector machine (SVM) classifier and pathway analysis. From our analysis, we got 27 influential (p-value<0.05) metabolites from the plasma sample and 13 influential (p-value<0.05) metabolites from the serum sample. According to the importance plot through the SVM classifier, pathway analysis and correlation network analysis, we declared 4 metabolites (taurine, aspartic acid, glutamine and pyruvic acid) as plasma biomarkers and 3 metabolites (aspartic acid, taurine and inosine) as serum biomarkers.
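    The screening step, flagging metabolites whose intensities differ between groups, can be sketched with a distribution-free permutation test built on the t-statistic; this is a stand-in for the paper's Student's t / Kruskal-Wallis pairing, and the data below are simulated:

```python
import numpy as np

def welch_t(x, y):
    """Welch's t-statistic for two independent samples."""
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1)/len(x) +
                                           y.var(ddof=1)/len(y))

def perm_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value: shuffle the pooled values and count
    statistics at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    obs = abs(welch_t(x, y))
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(welch_t(pooled[:len(x)], pooled[len(x):])) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
shifted = perm_pvalue(rng.normal(0, 1, 30), rng.normal(2, 1, 30))  # differs
null = perm_pvalue(rng.normal(0, 1, 30), rng.normal(0, 1, 30))     # same dist
```

    A metabolite would be flagged as influential when its p-value falls below the 0.05 threshold used in the abstract.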

  20. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    NASA Astrophysics Data System (ADS)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions and accounting for the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis is performed by the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence, they are: i) coefficient of crop yield response to water, ii) average daily gain in weight of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indexes ranging between 0.22 and 1.28. These results indicate high uncertainties in these parameters that can dramatically skew the results of the model, underscoring the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
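    The One-Factor-At-A-Time screening itself is straightforward: perturb each input in turn, hold the rest at baseline, and normalize the output change. The gross-margin model and magnitudes below are invented stand-ins, not the paper's agro-economic model:

```python
def oat_screening(model, baseline, step=0.05):
    """One-Factor-At-A-Time screening: perturb each input by a relative
    step, holding the others at baseline, and report the elasticity-like
    index |dY/Y| / |dX/X| for each parameter."""
    y0 = model(baseline)
    indexes = {}
    for name in baseline:
        p = dict(baseline)
        p[name] = baseline[name] * (1 + step)
        indexes[name] = abs((model(p) - y0) / y0) / step
    return indexes

# toy gross-margin model: revenue = price * max yield * water-limited
# yield fraction, minus costs (all numbers hypothetical)
def gross_margin(p):
    return p["price"] * p["ymax"] * min(1.0, p["water"] / p["need"]) - p["cost"]

base = {"price": 10.0, "ymax": 5.0, "water": 80.0, "need": 100.0, "cost": 20.0}
sens = oat_screening(gross_margin, base)
```

    Ranking the resulting indexes reproduces the kind of ordered influence list reported in the abstract.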

  1. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three- and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring and left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
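    A minimal Python analogue of the simplest model flexsurvreg can fit: under right censoring the exponential model's maximum-likelihood estimate has the closed form lambda_hat = events / total time at risk (flexsurv itself maximizes the full log-likelihood numerically for arbitrary distributions):

```python
import numpy as np

def fit_exponential(times, event):
    """MLE for an exponential survival model under right censoring:
    lambda_hat = (number of observed events) / (total follow-up time).
    `event` is 1 for an observed event, 0 for a censored time."""
    times = np.asarray(times, dtype=float)
    event = np.asarray(event, dtype=bool)
    return event.sum() / times.sum()

def survival_curve(lam, t):
    """Fitted survivor function S(t) = exp(-lambda * t)."""
    return np.exp(-lam * np.asarray(t, dtype=float))

# four subjects, one censored at t=3
lam = fit_exponential([1.0, 2.0, 3.0, 4.0], [1, 1, 0, 1])
```

    Censored subjects contribute time at risk to the denominator but no event to the numerator, which is exactly how the full likelihood treats them in this special case.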

  2. Can blood and semen presepsin levels in males predict pregnancy in couples undergoing intra-cytoplasmic sperm injection?

    PubMed

    Ovayolu, Ali; Arslanbuğa, Cansev Yilmaz; Gun, Ismet; Devranoglu, Belgin; Ozdemir, Arman; Cakar, Sule Eren

    2016-01-01

    To determine whether semen and plasma presepsin values measured in men with normozoospermia and oligoasthenospermia undergoing in vitro fertilization would be helpful in predicting ongoing pregnancy and live birth. Group-I was defined as patients who achieved pregnancy after treatment, and Group-II comprised those with no pregnancy. Semen and blood presepsin values were subsequently compared between the groups. Parametric comparisons were performed using Student's t-test, and non-parametric comparisons were conducted using the Mann-Whitney U test. There were 42 patients in Group-I and 72 in Group-II. In the context of successful pregnancy and live birth, semen presepsin values were statistically significantly higher in Group-I than in Group-II (p= 0.004 and p= 0.037, respectively). The most appropriate semen presepsin cut-off value for predicting both ongoing pregnancy and live birth was calculated as 199 pg/mL. Accordingly, their sensitivity was 64.5% to 59.3%, their specificity was 57.0% to 54.2%, and their positive predictive value was 37.0% to 29.6%, respectively; their negative predictive value was 80.4% in both instances. Semen presepsin values could be a new marker that may enable the prediction of successful pregnancy and/or live birth. Its negative predictive values are especially high.
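    The reported operating characteristics all follow from a 2x2 table at the chosen cutoff. The sketch below uses the abstract's 199 pg/mL threshold but synthetic values, not the study's data:

```python
def diagnostic_metrics(values, outcomes, cutoff):
    """Sensitivity, specificity, PPV and NPV for a 'positive if value >
    cutoff' rule, from the counts of the resulting 2x2 table."""
    tp = sum(1 for v, o in zip(values, outcomes) if v > cutoff and o)
    fp = sum(1 for v, o in zip(values, outcomes) if v > cutoff and not o)
    fn = sum(1 for v, o in zip(values, outcomes) if v <= cutoff and o)
    tn = sum(1 for v, o in zip(values, outcomes) if v <= cutoff and not o)
    return {"sens": tp / (tp + fn), "spec": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# hypothetical presepsin values (pg/mL) and pregnancy outcomes (1 = yes)
metrics = diagnostic_metrics([250, 210, 150, 300, 120, 100],
                             [1, 1, 1, 0, 0, 0], cutoff=199)
```

    Sweeping the cutoff and tracking these four quantities is how an "optimal" threshold such as 199 pg/mL is typically selected.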

  3. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, are difficult to conduct with existing experimental procedures for two reasons. First, existing procedures require a parametric model to serve as a proxy for the latent data structure or data-generating mechanism at the beginning of an experiment; however, in the scenarios of concern, a sound model is often unavailable before the experiment. Second, such scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data-collection cycle, and existing procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new procedure is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials; it performs function estimation, variable selection, reverse prediction and design optimization on each trial.
Directly addressing these challenges, function estimation and variable selection are performed by data-driven modeling methods that generate a predictive model from data collected during the course of an experiment, removing the need for a parametric model at the outset; design optimization selects experimental designs on the fly, based on their usefulness, so that as few designs as possible are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by concepts from active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as a proxy for the latent data structure, whereas existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by requiring fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.

  4. Verification of a three-dimensional resin transfer molding process simulation model

    NASA Technical Reports Server (NTRS)

    Fingerson, John C.; Loos, Alfred C.; Dexter, H. Benson

    1995-01-01

    Experimental evidence was obtained to complete the verification of the parameters needed for input to a three-dimensional finite element model simulating the resin flow and cure through an orthotropic fabric preform. The material characterizations completed include resin kinetics and viscosity models, as well as preform permeability and compaction models. The steady-state and advancing front permeability measurement methods are compared. The results indicate that both methods yield similar permeabilities for a plain weave, bi-axial fiberglass fabric. Also, a method to determine principal directions and permeabilities is discussed and results are shown for a multi-axial warp knit preform. The flow of resin through a blade-stiffened preform was modeled and experiments were completed to verify the results. The predicted inlet pressure was approximately 65% of the measured value. A parametric study was performed to explain differences in measured and predicted flow front advancement and inlet pressures. Furthermore, PR-500 epoxy resin/IM7 8HS carbon fabric flat panels were fabricated by the Resin Transfer Molding process. Tests were completed utilizing both perimeter injection and center-port injection as resin inlet boundary conditions. The mold was instrumented with FDEMS sensors, pressure transducers, and thermocouples to monitor the process conditions. Results include a comparison of predicted and measured inlet pressures and flow front position. For the perimeter injection case, the measured inlet pressure and flow front results compared well to the predicted results. The results of the center-port injection case showed that the predicted inlet pressure was approximately 50% of the measured inlet pressure. Also, measured flow front position data did not agree well with the predicted results. Possible reasons for error include fiber deformation at the resin inlet and a lag in FDEMS sensor wet-out due to low mold pressures.

  5. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  6. Parameterization models for pesticide exposure via crop consumption.

    PubMed

    Fantke, Peter; Wieland, Peter; Juraske, Ronnie; Shaddick, Gavin; Itoiz, Eva Sevigné; Friedrich, Rainer; Jolliet, Olivier

    2012-12-04

    An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties) including their possible correlations using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop-specific models by parametrizing a complex fate and exposure assessment framework. The parametric models thereby reflect the framework's physical and chemical mechanisms and predict pesticide residues in harvest using linear combinations of crop, crop surface, and soil compartments. Parametric model results correspond well with results from the complex framework for 1540 substance-crop combinations, with total deviations between a factor of 4 (potato) and a factor of 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest can finally be combined with reduction factors accounting for food processing to estimate human exposure from crop consumption. All parametric models can be easily implemented into existing assessment frameworks.

  7. The Hubble flow of plateau inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coone, Dries; Roest, Diederik; Vennin, Vincent, E-mail: a.a.coone@rug.nl, E-mail: d.roest@rug.nl, E-mail: vincent.vennin@port.ac.uk

    2015-11-01

    In the absence of CMB precision measurements, a Taylor expansion has often been invoked to parametrize the Hubble flow function during inflation. The standard "horizon flow" procedure implicitly relies on this assumption. However, the recent Planck results indicate a strong preference for plateau inflation, which suggests the use of Padé approximants instead. We propose a novel method that provides analytic solutions of the flow equations for a given parametrization of the Hubble function. This method is illustrated in the Taylor and Padé cases, for low-order expansions. We then present the results of a full numerical treatment scanning larger-order expansions, and compare these parametrizations in terms of convergence, prior dependence, predictivity and compatibility with the data. Finally, we highlight the implications for potential reconstruction methods.
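
The contrast between Taylor and Padé parametrizations is easy to see on a toy plateau. The sketch below (illustrative only; the paper works with the Hubble flow functions, not this function) builds a [1/1] Padé approximant from the first Taylor coefficients and compares it with the quadratic Taylor polynomial:

```python
def pade_1_1(c0, c1, c2):
    # Build the [1/1] Padé approximant (a0 + a1*x) / (1 + b1*x) whose
    # Taylor expansion matches c0 + c1*x + c2*x**2 (assumes c1 != 0).
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# Toy plateau function f(x) = x / (1 + x): rises and saturates at 1,
# qualitatively like a plateau-shaped function of the field.
f = lambda x: x / (1.0 + x)
taylor2 = lambda x: x - x**2              # second-order Taylor expansion at 0
pade = pade_1_1(0.0, 1.0, -1.0)           # same Taylor data, rational form

# Away from the origin the Taylor polynomial turns over and goes negative,
# while the Padé approximant tracks the plateau (exactly, for this toy f).
x = 5.0
print(f(x), taylor2(x), pade(x))
```

This is the qualitative point of the abstract: a polynomial cannot saturate, whereas a rational parametrization can, which suits plateau inflation.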

  8. An experimental and analytical investigation of proprotor whirl flutter

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.; Kohn, J. S.

    1977-01-01

    The results of an experimental parametric investigation of whirl flutter are presented for a model consisting of a windmilling propeller-rotor, or proprotor, having blades with offset flapping hinges mounted on a rigid pylon with flexibility in pitch and yaw. The investigation was motivated by the need to establish a large data base from which to assess the predictability of whirl flutter for a proprotor since some question has been raised as to whether flutter in the forward whirl mode could be predicted with confidence. To provide the necessary data base, the parametric study included variation in the pylon pitch and yaw stiffnesses, flapping hinge offset, and blade kinematic pitch-flap coupling over a large range of advance ratios. Cases of forward whirl flutter and of backward whirl flutter are documented. Measured whirl flutter characteristics were shown to be in good agreement with predictions from two different linear stability analyses which employed simple, two dimensional, quasi-steady aerodynamics for the blade loading. On the basis of these results, it appears that proprotor whirl flutter, both forward and backward, can be predicted.

  9. Multi-Node Thermal System Model for Lithium-Ion Battery Packs: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Ying; Smith, Kandler; Wood, Eric

    Temperature is one of the main factors that controls degradation in lithium-ion batteries. Accurate knowledge and control of cell temperatures in a pack helps the battery management system (BMS) to maximize cell utilization and ensure pack safety and service life. In a pack with arrays of cells, a cell's temperature is affected not only by its own thermal characteristics but also by its neighbors, the cooling system, and the pack configuration, which increase the noise level and the complexity of cell temperature prediction. This work proposes to model the thermal behavior of lithium-ion packs using a multi-node thermal network model, which predicts cell temperatures by zones. The model was parametrized and validated using commercial lithium-ion battery packs.
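
The multi-node idea, each zone exchanging heat with its neighbors and with the coolant, reduces to a small set of lumped ODEs. A minimal sketch for a four-zone pack follows, with invented thermal resistances, capacity, and heat generation (not NREL's parametrization):

```python
def step(T, q_gen, T_air, R_cell, R_air, C, dt):
    # One forward-Euler step of a lumped thermal network.
    # T: node temperatures (degC); q_gen: heat generation per node (W);
    # R_cell: cell-to-cell resistance (K/W); R_air: node-to-coolant
    # resistance (K/W); C: lumped heat capacity per node (J/K).
    n = len(T)
    T_new = []
    for i in range(n):
        q = q_gen[i] + (T_air - T[i]) / R_air        # convection to coolant
        if i > 0:
            q += (T[i - 1] - T[i]) / R_cell          # conduction, left neighbor
        if i < n - 1:
            q += (T[i + 1] - T[i]) / R_cell          # conduction, right neighbor
        T_new.append(T[i] + dt * q / C)
    return T_new

T = [25.0, 25.0, 25.0, 25.0]           # four cell zones, starting at ambient
for _ in range(3600):                   # one hour at 1 s steps
    T = step(T, q_gen=[2.0, 2.0, 2.0, 2.0], T_air=25.0,
             R_cell=2.0, R_air=5.0, C=500.0, dt=1.0)
```

With uniform heating the zones warm identically toward the steady state T_air + q_gen*R_air = 35 °C; asymmetric heating or cooling would make the zone temperatures diverge, which is exactly what a zonal model is meant to capture.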

  10. From experiments to simulations: tracing Na+ distribution around roots under different transpiration rates and salinity levels

    NASA Astrophysics Data System (ADS)

    Perelman, Adi; Jorda, Helena; Vanderborght, Jan; Pohlmeier, Andreas; Lazarovitch, Naftali

    2017-04-01

    When salinity increases beyond a certain threshold, crop yield declines at a fixed rate, according to the Maas and Hoffman model (1976). Predicting salinization and its impact on crops is therefore of great importance. Current models do not consider the impact of environmental conditions on plant salt tolerance, even though these conditions affect plant water uptake and therefore salt accumulation around the roots. Different factors, such as transpiration rate, can influence plant sensitivity to salinity by influencing salt concentrations around the roots. Better parametrization of a model can help improve predictions of the real effects of salinity on crop growth and yield. The aim of this research is to study Na+ distribution around roots at different scales using different non-invasive methods, and to study how this distribution is affected by transpiration rate and plant water uptake. Results from tomato plants growing on Rhizoslides (a capillary paper growth system) show that Na+ concentration is higher at the root-substrate interface than in the bulk. Na+ accumulation around the roots also decreased under a low transpiration rate, supporting our hypothesis. Additionally, Rhizoslides make it possible to study root growth rate and architecture under different salinity levels. Root system architecture was retrieved from photos taken during the experiment, which enabled us to incorporate real root systems into a simulation. To observe the correlation of root system architectures and Na+ distribution in three dimensions, we used magnetic resonance imaging (MRI). MRI provides fine resolution of Na+ accumulation around a single root without disturbing the root system. Over time, Na+ accumulated only where roots were present in the soil, and later around specific roots.
These data are being used for model calibration, which is expected to predict root water uptake in saline soils for different climatic conditions and different soil water availabilities.

  11. Broadly tunable picosecond ir source

    DOEpatents

    Campillo, A.J.; Hyer, R.C.; Shapiro, S.L.

    1980-04-23

    A picosecond traveling-wave parametric device capable of controlled spectral bandwidth and wavelength in the infrared is reported. Intense 1.064 µm picosecond pulses (1) pass through a 4.5 cm long LiNbO₃ optical parametric oscillator crystal (2) set at its degeneracy angle. A broad band emerges, and a simple grating (3) and mirror (4) arrangement is used to inject a selected narrow band into a 2 cm long LiNbO₃ optical parametric amplifier crystal (5) along a second pump line. Typical input energies at 1.064 µm along both pump lines are 6 to 8 mJ for the oscillator and 10 mJ for the amplifier. This yields 1 mJ of tunable output in the range 1.98 to 2.38 µm, which when down-converted in a 1 cm long CdSe crystal mixer (6) gives 2 µJ of tunable radiation over the 14.8 to 18.5 µm region. The bandwidth and wavelength of both the 2 and 16 µm radiation outputs are controlled solely by the diffraction grating.

  12. Broadly tunable picosecond IR source

    DOEpatents

    Campillo, Anthony J.; Hyer, Ronald C.; Shapiro, Stanley J.

    1982-01-01

    A picosecond traveling-wave parametric device capable of controlled spectral bandwidth and wavelength in the infrared is reported. Intense 1.064 µm picosecond pulses (1) pass through a 4.5 cm long LiNbO₃ optical parametric oscillator crystal (2) set at its degeneracy angle. A broad band emerges, and a simple grating (3) and mirror (4) arrangement is used to inject a selected narrow band into a 2 cm long LiNbO₃ optical parametric amplifier crystal (5) along a second pump line. Typical input energies at 1.064 µm along both pump lines are 6-8 mJ for the oscillator and 10 mJ for the amplifier. This yields 1 mJ of tunable output in the range 1.98 to 2.38 µm, which when down-converted in a 1 cm long CdSe crystal mixer (6) gives 2 µJ of tunable radiation over the 14.8 to 18.5 µm region. The bandwidth and wavelength of both the 2 and 16 µm radiation outputs are controlled solely by the diffraction grating.

  13. Influencing agent group behavior by adjusting cultural trait values.

    PubMed

    Tuli, Gaurav; Hexmoor, Henry

    2010-10-01

    Social reasoning and norms among individuals who share cultural traits are largely fashioned by those traits. We have explored predominant sociological and cultural traits, and we offer a methodology for parametrically adjusting relevant traits. This exploratory study heralds a capability to deliberately tune cultural group traits in order to produce a desired group behavior. To validate our methodology, we implemented a prototypical agent-based simulated test bed demonstrating an exemplar intelligence, surveillance, and reconnaissance scenario. A group of simulated agents traverses a hostile territory while a user adjusts their cultural group trait settings. Group and individual utilities are dynamically observed against parametric values for the selected traits. The uncertainty avoidance index and individualism are the cultural traits we examined in depth. Once users have learned the correspondence between cultural trait values and system utilities, they can deliberately produce the desired system utilities by issuing changes to trait values. Specific cultural traits are without meaning outside of their context. Efficacy and timely application of traits in a given context do yield desirable results. This paper heralds a path for the control of large systems via parametric cultural adjustments.

  14. Relative Critical Points

    NASA Astrophysics Data System (ADS)

    Lewis, Debra

    2013-05-01

    Relative equilibria of Lagrangian and Hamiltonian systems with symmetry are critical points of appropriate scalar functions parametrized by the Lie algebra (or its dual) of the symmetry group. Setting aside the structures - symplectic, Poisson, or variational - generating dynamical systems from such functions highlights the common features of their construction and analysis, and supports the construction of analogous functions in non-Hamiltonian settings. If the symmetry group is nonabelian, the functions are invariant only with respect to the isotropy subgroup of the given parameter value. Replacing the parametrized family of functions with a single function on the product manifold and extending the action using the (co)adjoint action on the algebra or its dual yields a fully invariant function. An invariant map can be used to reverse the usual perspective: rather than selecting a parametrized family of functions and finding their critical points, conditions under which functions will be critical on specific orbits, typically distinguished by isotropy class, can be derived. This strategy is illustrated using several well-known mechanical systems - the Lagrange top, the double spherical pendulum, the free rigid body, and the Riemann ellipsoids - and generalizations of these systems.

  15. Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations

    NASA Astrophysics Data System (ADS)

    Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.

    2017-12-01

    Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
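
The Weighted Ensemble idea is to periodically split high-weight trajectories and merge low-weight ones so that rare regions of state space stay populated while total probability is conserved. A generic resampling step might look like this (a sketch of the general algorithm with invented state labels, not the authors' implementation):

```python
import random

def resample_bin(walkers, target, rng):
    # One weighted-ensemble bin: replace the current walkers with `target`
    # equal-weight copies drawn by systematic resampling. Heavy walkers get
    # duplicated, light ones tend to be dropped, and the bin's total
    # probability weight is conserved exactly.
    total = sum(w for _, w in walkers)
    stride = total / target
    u = rng.random() * stride                # single random offset
    new, cum, i = [], walkers[0][1], 0
    for k in range(target):
        p = u + k * stride
        while p >= cum:                      # advance to the walker containing p
            i += 1
            cum += walkers[i][1]
        new.append((walkers[i][0], stride))  # each copy carries weight total/target
    return new

# Three trajectory states with unequal weights; resample to four walkers.
walkers = [("calm", 0.7), ("intensifying", 0.2), ("rapid", 0.1)]
new = resample_bin(walkers, target=4, rng=random.Random(0))
assert abs(sum(w for _, w in new) - 1.0) < 1e-12   # weight conserved
```

Binning trajectories by an intensity coordinate and applying such a step per bin is what lets rare, rapidly intensifying TCs be sampled at a fraction of the direct-simulation cost.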

  16. Brain Signal Variability is Parametrically Modifiable

    PubMed Central

    Garrett, Douglas D.; McIntosh, Anthony R.; Grady, Cheryl L.

    2014-01-01

    Moment-to-moment brain signal variability is a ubiquitous neural characteristic, yet remains poorly understood. Evidence indicates that heightened signal variability can index and aid efficient neural function, but it is not known whether signal variability responds to precise levels of environmental demand, or instead whether variability is relatively static. Using multivariate modeling of functional magnetic resonance imaging-based parametric face processing data, we show here that within-person signal variability level responds to incremental adjustments in task difficulty, in a manner entirely distinct from results produced by examining mean brain signals. Using mixed modeling, we also linked parametric modulations in signal variability with modulations in task performance. We found that difficulty-related reductions in signal variability predicted reduced accuracy and longer reaction times within-person; mean signal changes were not predictive. We further probed the various differences between signal variance and signal means by examining all voxels, subjects, and conditions; this analysis of over 2 million data points failed to reveal any notable relations between voxel variances and means. Our results suggest that brain signal variability provides a systematic task-driven signal of interest from which we can understand the dynamic function of the human brain, and in a way that mean signals cannot capture. PMID:23749875

  17. Kinetic Effects in Parametric Instabilities of Finite Amplitude Alfven Waves in a Drifting Multi-Species Plasma

    NASA Astrophysics Data System (ADS)

    Maneva, Y. G.; Araneda, J. A.; Poedts, S.

    2014-12-01

    We consider parametric instabilities of finite-amplitude large-scale Alfven waves in a low-beta collisionless multi-species plasma, consisting of fluid electrons, kinetic protons and a drifting population of minor ions. Complementing many theoretical studies that rely on a fluid or multi-fluid approach, in this work we present the solutions of the parametric instability dispersion relation, including kinetic effects in the parallel direction, along the ambient magnetic field. This allows us to predict the importance of wave-particle interactions such as Landau damping of the daughter ion-acoustic waves for a given pump wave and plasma conditions. We apply the dispersion relation to plasma parameters typical of the low-beta collisionless solar wind close to the Sun. We compare the analytical solutions to the linear stage of hybrid numerical simulations and discuss the application of the model to the problems of preferential heating and differential acceleration of minor ions in the solar corona and the fast solar wind. The results of this study provide tools for the prediction and interpretation of the magnetic field and particle data expected from the future Solar Orbiter and Solar Probe Plus missions.

  18. Prediction of human gait trajectories during the SSP using a neuromusculoskeletal modeling: A challenge for parametric optimization.

    PubMed

    Seyed, Mohammadali Rahmati; Mostafa, Rostami; Borhan, Beigzadeh

    2018-04-27

    Parametric optimization techniques have been widely employed to predict human gait trajectories; however, their ability to reveal other aspects of gait remains questionable. The aim of this study is to investigate whether or not the gait prediction model is able to reproduce movement trajectories at higher average velocities. A planar, seven-segment model with sixteen muscle groups was used to represent human neuro-musculoskeletal dynamics. First, the joint angles, ground reaction forces (GRFs) and muscle activations were predicted and validated for a normal average velocity (1.55 m/s) in the single support phase (SSP) by minimizing energy expenditure, subject to the non-linear constraints of gait. The unconstrained system dynamics of extended inverse dynamics (USDEID) approach was used to estimate muscle activations. Then, by scaling time and applying the same procedure, the movement trajectories were predicted for higher average velocities (from 2.07 m/s to 4.07 m/s) and compared to the pattern of movement at fast walking speeds. The comparison indicated a high level of compatibility between the experimental and predicted results, except for the vertical position of the center of gravity (COG). It was concluded that the gait prediction model can be effectively used to predict gait trajectories at higher average velocities.

  19. Simultaneous one-dimensional fluorescence lifetime measurements of OH and CO in premixed flames

    NASA Astrophysics Data System (ADS)

    Jonsson, Malin; Ehn, Andreas; Christensen, Moah; Aldén, Marcus; Bood, Joakim

    2014-04-01

    A method for simultaneous measurements of fluorescence lifetimes of two species along a line is described. The experimental setup is based on picosecond laser pulses from two tunable optical parametric generator/optical parametric amplifier systems together with a streak camera. With an appropriate optical time delay between the two laser pulses, whose wavelengths are tuned to excite two different species, laser-induced fluorescence can be both detected temporally and spatially resolved by the streak camera. Hence, our method enables one-dimensional imaging of fluorescence lifetimes of two species in the same streak camera recording. The concept is demonstrated for fluorescence lifetime measurements of CO and OH in a laminar methane/air flame on a Bunsen-type burner. Measurements were taken in flames with four different equivalence ratios, namely ϕ = 0.9, 1.0, 1.15, and 1.25. The measured one-dimensional lifetime profiles generally agree well with lifetimes calculated from quenching cross sections found in the literature and quencher concentrations predicted by the GRI 3.0 mechanism. For OH, there is a systematic deviation of approximately 30% between calculated and measured lifetimes. It is found that this is mainly due to the adiabatic assumption regarding the flame and uncertainty in the H2O quenching cross section. This emphasizes the strength of measuring the quenching rates rather than relying on models. The measurement concept might be useful for single-shot measurements of fluorescence lifetimes of several species pairs of vital importance in combustion processes, hence allowing fluorescence signals to be corrected for quenching and ultimately yield quantitative concentration profiles.

  20. Comparison of parametric duct-burning turbofan and non-afterburning turbojet engines in a Mach 2.7 transport

    NASA Technical Reports Server (NTRS)

    Whitlow, J. B., Jr.

    1975-01-01

    A parametric study was made of duct-burning turbofan and suppressed dry turbojet engines installed in a supersonic transport. A range of fan pressure ratios was considered for the separate-flow-fan engines. The turbofan engines were studied both with and without jet noise suppressors. Single- as well as dual-stream suppression was considered. Attention was concentrated on designs yielding sideline noises of FAR 36 (108 EPNdB) and below. Trades were made between thrust and wing area for a constant takeoff field length. The turbofans produced lower airplane gross weights than the turbojets at FAR 36 and below. The advantage for the turbofans increased as the sideline noise limit was reduced. Jet noise suppression, especially for the duct stream, was very beneficial for the turbofan engines as long as duct burning was permitted during takeoff. The maximum dry unsuppressed takeoff mode, however, yielded better results at extremely low noise levels. Noise levels as low as FAR 36-11 EPNdB were obtained with a turbofan in this takeoff mode, but at a considerable gross weight penalty relative to the best FAR 36 results.

  1. Metabolomic prediction of yield in hybrid rice.

    PubMed

    Xu, Shizhong; Xu, Yang; Gong, Liang; Zhang, Qifa

    2016-10-01

    Rice (Oryza sativa) provides a staple food source for more than 50% of the world's population. An increase in yield can significantly contribute to global food security. Hybrid breeding can potentially help to meet this goal because hybrid rice often shows a considerable increase in yield when compared with pure-bred cultivars. We recently developed a marker-guided prediction method for hybrid yield and showed a substantial increase in yield through genomic hybrid breeding. We now have transcriptomic and metabolomic data as potential resources for prediction. Using six prediction methods, including least absolute shrinkage and selection operator (LASSO), best linear unbiased prediction (BLUP), stochastic search variable selection, partial least squares, and support vector machines using the radial basis function and polynomial kernel function, we found that the predictability of hybrid yield can be further increased using these omic data. LASSO and BLUP are the most efficient methods for yield prediction. For high heritability traits, genomic data remain the most efficient predictors. When metabolomic data are used, the predictability of hybrid yield is almost doubled compared with genomic prediction. Of the 21 945 potential hybrids derived from 210 recombinant inbred lines, selection of the top 10 hybrids predicted from metabolites would lead to a ~30% increase in yield. We hypothesize that each metabolite represents a biologically built-in genetic network for yield; thus, using metabolites for prediction is equivalent to using information integrated from these hidden genetic networks for yield prediction. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.
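
Of the linear predictors compared, BLUP is equivalent to ridge regression with the penalty set by a variance ratio. A minimal ridge fit on a toy predictor matrix (invented numbers, not the rice data; `ridge_fit` is a hypothetical helper) might look like:

```python
def ridge_fit(X, y, lam):
    # Ridge regression: solve (X'X + lam*I) beta = X'y by Gaussian
    # elimination with partial pivoting. For an appropriate lam this is
    # equivalent to BLUP of the feature effects.
    n, p = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) + (lam if i == j else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return beta

# Toy "omic" predictor matrix (rows = hybrids, columns = features) and
# trait values -- illustrative only.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
y = [1.0, 0.5, 1.5, 0.5]
beta = ridge_fit(X, y, lam=0.1)     # shrunken effect estimates
yhat = [sum(xi * bi for xi, bi in zip(row, beta)) for row in X]
```

Ranking the predicted `yhat` of all candidate hybrids and keeping the top few is, in miniature, the selection step the abstract describes for the 21 945 potential hybrids.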

  2. A simplified model of a mechanical cooling tower with both a fill pack and a coil

    NASA Astrophysics Data System (ADS)

    Van Riet, Freek; Steenackers, Gunther; Verhaert, Ivan

    2017-11-01

    Cooling accounts for a large amount of the global primary energy consumption in buildings and industrial processes. A substantial part of this cooling demand is produced by mechanical cooling towers. Simulations benefit the sizing and integration of cooling towers in overall cooling networks. However, for these simulations fast-to-calculate and easy-to-parametrize models are required. In this paper, a new model is developed for a mechanical draught cooling tower with both a cooling coil and a fill pack. The model needs manufacturers' performance data at only three operational states (at varying air and water flow rates) to be parametrized. The model predicts the cooled, outgoing water temperature. These predictions were compared with experimental data for a wide range of operational states. The model was able to predict the temperature with a maximum absolute error of 0.59°C. The relative error of cooling capacity was mostly between ±5%.

  3. Diffusion Coefficients from Molecular Dynamics Simulations in Binary and Ternary Mixtures

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Schnell, Sondre K.; Simon, Jean-Marc; Krüger, Peter; Bedeaux, Dick; Kjelstrup, Signe; Bardow, André; Vlugt, Thijs J. H.

    2013-07-01

    Multicomponent diffusion in liquids is ubiquitous in (bio)chemical processes. It has gained considerable and increasing interest as it is often the rate limiting step in a process. In this paper, we review methods for calculating diffusion coefficients from molecular simulation and predictive engineering models. The main achievements of our research during the past years can be summarized as follows: (1) we introduced a consistent method for computing Fick diffusion coefficients using equilibrium molecular dynamics simulations; (2) we developed a multicomponent Darken equation for the description of the concentration dependence of Maxwell-Stefan diffusivities. In the case of infinite dilution, the multicomponent Darken equation provides an expression for [InlineEquation not available: see fulltext.] which can be used to parametrize the generalized Vignes equation; and (3) a predictive model for self-diffusivities was proposed for the parametrization of the multicomponent Darken equation. This equation accurately describes the concentration dependence of self-diffusivities in weakly associating systems. With these methods, a sound framework for the prediction of mutual diffusion in liquids is achieved.
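
    The multicomponent Darken equation discussed above generalizes the binary Darken relation, in which the Maxwell-Stefan diffusivity is a mole-fraction-weighted sum of the self-diffusivities. A minimal sketch of the binary case (the numerical diffusivities are illustrative):

```python
def darken_binary(x1, d1_self, d2_self):
    """Binary Darken equation: D_MS = x2 * D1_self + x1 * D2_self,
    where x1, x2 are mole fractions and D_i_self are self-diffusivities."""
    x2 = 1.0 - x1
    return x2 * d1_self + x1 * d2_self

# At infinite dilution of component 1 (x1 -> 0), the Maxwell-Stefan
# diffusivity reduces to the self-diffusivity of 1 in pure 2.
print(darken_binary(0.0, 2.0e-9, 1.0e-9))
```
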

  4. Predicting astronaut radiation doses from major solar particle events using artificial intelligence

    NASA Astrophysics Data System (ADS)

    Tehrani, Nazila H.

    1998-06-01

    Space radiation is an important issue for manned space flight. For long missions outside of the Earth's magnetosphere, there are two major sources of exposure: large Solar Particle Events (SPEs), consisting of numerous energetic protons and other heavy ions emitted by the Sun, and Galactic Cosmic Rays (GCRs), which constitute an isotropic radiation field of low flux and high energy. In deep-space missions both SPEs and GCRs can be hazardous to the space crew. SPEs can deliver an acute dose, which is a large dose over a short period of time. The acute dose from a large SPE that could be received by an astronaut with shielding as thick as a spacesuit may be as large as 500 cGy. GCRs will not deliver acute doses, but may increase the lifetime risk of cancer from prolonged exposures in a range of 40-50 cSv/yr. In this research, we are using artificial intelligence to model the dose-time profiles during a major solar particle event. Artificial neural networks are reliable approximators for nonlinear functions. In this study we design a dynamic network. This network has the ability to update its dose predictions as new dose data are received while the event is occurring. To achieve this temporal behavior, we use an innovative Sliding Time-Delay Neural Network (STDNN). By using a STDNN one can predict doses received from large SPEs while the event is happening. The parametric fits and actual calculated doses for the skin, eye, and bone marrow are used. The parametric data set, obtained by fitting Weibull functional forms to the calculated dose points, has been divided into two subsets. The STDNN has been trained using some of these parametric events. The other subset of parametric data and the actual doses are used for testing with the resulting weights and biases of the first set. This is done to show that the network can generalize. Results of this testing indicate that the STDNN is capable of predicting doses from events that it has not seen before.

  5. A general approach for predicting the behavior of the Supreme Court of the United States

    PubMed Central

    Bommarito, Michael J.; Blackman, Josh

    2017-01-01

    Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. To do so, we develop a time-evolving random forest classifier that leverages unique feature engineering to predict more than 240,000 justice votes and 28,000 case outcomes over nearly two centuries (1816-2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on, the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an important advance for the science of quantitative legal prediction and portend a range of other potential applications. PMID:28403140
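
    The forward-in-time, out-of-sample evaluation against a null model described above can be sketched with an off-the-shelf random forest. Everything here (features, labels, split point) is synthetic and assumed, and a plain scikit-learn classifier stands in for the authors' time-evolving model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic case features, ordered in time (sizes are assumptions).
n_cases, n_feat = 2000, 12
X = rng.standard_normal((n_cases, n_feat))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n_cases) > 0).astype(int)

# Train only on "past" cases, evaluate on strictly later ones,
# mimicking an out-of-sample, forward-in-time evaluation.
split = 1500
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:split], y[:split])
acc = clf.score(X[split:], y[split:])

# Null baseline: always predict the majority class.
p = y[split:].mean()
null_acc = max(p, 1 - p)
print(acc > null_acc)
```
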

  6. Two-stage optical parametric chirped-pulse amplifier using sub-nanosecond pump pulse generated by stimulated Brillouin scattering compression

    NASA Astrophysics Data System (ADS)

    Ogino, Jumpei; Miyamoto, Sho; Matsuyama, Takahiro; Sueda, Keiichi; Yoshida, Hidetsugu; Tsubakimoto, Koji; Miyanaga, Noriaki

    2014-12-01

    We demonstrate optical parametric chirped-pulse amplification (OPCPA) based on two-beam pumping, using sub-nanosecond pulses generated by stimulated Brillouin scattering compression. Seed pulse energy, duration, and center wavelength were 5 nJ, 220 ps, and ˜1065 nm, respectively. The 532 nm pulse from a Q-switched Nd:YAG laser was compressed to ˜400 ps in heavy fluorocarbon FC-40 liquid. Stacking of two time-delayed pump pulses reduced the amplifier gain fluctuation. Using a walk-off-compensated two-stage OPCPA at a pump energy of 34 mJ, a total gain of 1.6 × 105 was obtained, yielding an output energy of 0.8 mJ. The amplified chirped pulse was compressed to 97 fs.

  7. X-1 to X-Wings: Developing a Parametric Cost Model

    NASA Technical Reports Server (NTRS)

    Sterk, Steve; McAtee, Aaron

    2015-01-01

    In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough order of magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper takes a look at the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.

  8. Genome-wide analysis of genetic susceptibility to language impairment in an isolated Chilean population

    PubMed Central

    Villanueva, Pia; Newbury, Dianne F; Jara, Lilian; De Barbieri, Zulema; Mirza, Ghazala; Palomino, Hernán M; Fernández, María Angélica; Cazier, Jean-Baptiste; Monaco, Anthony P; Palomino, Hernán

    2011-01-01

    Specific language impairment (SLI) is an unexpected deficit in the acquisition of language skills and affects between 5 and 8% of pre-school children. Despite its prevalence and high heritability, our understanding of the aetiology of this disorder is only emerging. In this paper, we apply genome-wide techniques to investigate an isolated Chilean population who exhibit an increased frequency of SLI. Loss of heterozygosity (LOH) mapping and parametric and non-parametric linkage analyses indicate that complex genetic factors are likely to underlie susceptibility to SLI in this population. Across all analyses performed, the most consistently implicated locus was on chromosome 7q. This locus achieved highly significant linkage under all three non-parametric models (max NPL=6.73, P=4.0 × 10−11). In addition, it yielded a HLOD of 1.24 in the recessive parametric linkage analyses and contained a segment that was homozygous in two affected individuals. Further, investigation of this region identified a two-SNP haplotype that occurs at an increased frequency in language-impaired individuals (P=0.008). We hypothesise that the linkage regions identified here, in particular that on chromosome 7, may contain variants that underlie the high prevalence of SLI observed in this isolated population and may be of relevance to other populations affected by language impairments. PMID:21248734

  9. A parametric model of muscle moment arm as a function of joint angle: application to the dorsiflexor muscle group in mice.

    PubMed

    Miller, S W; Dennis, R G

    1996-12-01

    A parametric model was developed to describe the relationship between muscle moment arm and joint angle. The model was applied to the dorsiflexor muscle group in mice, for which the moment arm was determined as a function of ankle angle. The moment arm was calculated from the torque measured about the ankle upon application of a known force along the line of action of the dorsiflexor muscle group. The dependence of the dorsiflexor moment arm on ankle angle was modeled as r = R sin(a + delta), where r is the moment arm calculated from the measured torque and a is the joint angle. A least-squares curve fit yielded values for R, the maximum moment arm, and delta, the angle at which the maximum moment arm occurs as offset from 90 degrees. Parametric models were developed for two strains of mice, and no differences were found between the moment arms determined for each strain. Values for the maximum moment arm, R, for the two different strains were 0.99 and 1.14 mm, in agreement with the limited data available from the literature. While in some cases moment arm data may be better fitted by a polynomial, use of the parametric model provides a moment arm relationship with meaningful anatomical constants, allowing for the direct comparison of moment arm characteristics between different strains and species.
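
    The least-squares fit of r = R sin(a + delta) described above can be done with ordinary linear least squares, since R sin(a + delta) = (R cos delta) sin a + (R sin delta) cos a is linear in the two products. A sketch on simulated measurements (the R and delta values, angle range, and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated moment-arm measurements (values are illustrative).
R_true, delta_true = 1.05, np.deg2rad(12.0)   # mm, rad
a = np.deg2rad(np.linspace(60, 120, 25))      # ankle angles
r = R_true * np.sin(a + delta_true) + 0.01 * rng.standard_normal(a.size)

# Linearize: r = c1*sin(a) + c2*cos(a), with c1 = R cos(delta),
# c2 = R sin(delta), then recover R and delta from c1, c2.
A = np.column_stack([np.sin(a), np.cos(a)])
(c1, c2), *_ = np.linalg.lstsq(A, r, rcond=None)
R_hat = np.hypot(c1, c2)
delta_hat = np.arctan2(c2, c1)
print(round(float(R_hat), 2), round(float(np.rad2deg(delta_hat)), 1))
```
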

  10. Web buckling behavior under in-plane compression and shear loads for web reinforced composite sandwich core

    NASA Astrophysics Data System (ADS)

    Toubia, Elias Anis

    Sandwich construction is one of the most functional forms of composite structures developed by the composite industry. Due to the increasing demand for web-reinforced core in composite sandwich construction, a research study is needed to investigate web plate instability under shear, compression, and combined loading. If the web, which is an integral part of the three-dimensional web core sandwich structure, happens to be slender with respect to one or two of its spatial dimensions, then buckling phenomena become an issue that must be quantified as part of a comprehensive strength model for a fiber-reinforced core. In order to understand the thresholds of thickness, web weight, and foam type, and whether buckling will occur before material yielding, a thorough investigation needs to be conducted, and buckling design equations need to be developed. Often in conducting a parametric study, a special-purpose analysis is preferred over a general-purpose analysis code, such as a finite element code, due to the cost and effort usually involved in generating a large number of results. A suitable methodology based on an energy method is presented to solve the stability of symmetrical and specially orthotropic laminated plates on an elastic foundation. Design buckling equations were developed for the web modeled as a laminated plate resting on elastic foundations. The proposed equations allow for parametric studies without limitation regarding foam stiffness, geometric dimensions, or mechanical properties. General behavioral trends of orthotropic and symmetrical anisotropic plates show pronounced contributions of the elastic foundation and fiber orientations to the buckling resistance of the plate. The effects of flexural anisotropy on the buckling behavior of long rectangular plates subjected to pure shear loading are well represented in the model. The reliability of the buckling equations as a design tool is confirmed by comparison with experimental results. The experimental plate shear test results fall within 15 to 35 percent of predicted values, depending on the boundary conditions considered. The compression testing yielded conservative results and, as such, can provide a valuable tool for the designer.

  11. Estimation of k-ε parameters using surrogate models and jet-in-crossflow data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan

    2014-11-01

    We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating 3 k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to the structural errors in RANS.

  12. Pick a Color MARIA: Adaptive Sampling Enables the Rapid Identification of Complex Perovskite Nanocrystal Compositions with Defined Emission Characteristics.

    PubMed

    Bezinge, Leonard; Maceiczyk, Richard M; Lignos, Ioannis; Kovalenko, Maksym V; deMello, Andrew J

    2018-06-06

    Recent advances in the development of hybrid organic-inorganic lead halide perovskite (LHP) nanocrystals (NCs) have demonstrated their versatility and potential application in photovoltaics and as light sources through compositional tuning of optical properties. That said, due to their compositional complexity, the targeted synthesis of mixed-cation and/or mixed-halide LHP NCs still represents an immense challenge for traditional batch-scale chemistry. To address this limitation, we herein report the integration of a high-throughput segmented-flow microfluidic reactor and a self-optimizing algorithm for the synthesis of NCs with defined emission properties. The algorithm, named Multiparametric Automated Regression Kriging Interpolation and Adaptive Sampling (MARIA), iteratively computes optimal sampling points at each stage of an experimental sequence to reach a target emission peak wavelength based on spectroscopic measurements. We demonstrate the efficacy of the method through the synthesis of multinary LHP NCs, (Cs/FA)Pb(I/Br)3 (FA = formamidinium) and (Rb/Cs/FA)Pb(I/Br)3 NCs, using MARIA to rapidly identify reagent concentrations that yield user-defined photoluminescence peak wavelengths in the green-red spectral region. The procedure returns a robust model around a target output in far fewer measurements than systematic screening of parametric space and additionally enables the prediction of other spectral properties, such as full-width at half-maximum and intensity, for conditions yielding NCs with similar emission peak wavelength.

  13. Invited review: A commentary on predictive cheese yield formulas.

    PubMed

    Emmons, D B; Modler, H W

    2010-12-01

    Predictive cheese yield formulas have evolved from one based only on casein and fat in 1895. Refinements have included moisture and salt in cheese and whey solids as separate factors, paracasein instead of casein, and exclusion of whey solids from moisture associated with cheese protein. The General, Barbano, and Van Slyke formulas were tested critically using yield and composition of milk, whey, and cheese from 22 vats of Cheddar cheese. The General formula is based on the sum of cheese components: fat, protein, moisture, salt, whey solids free of fat and protein, as well as milk salts associated with paracasein. The testing yielded unexpected revelations. It was startling that the sum of components in cheese was <100%; the mean was 99.51% (N × 6.31). The mean predicted yield was only 99.17% as a percentage of actual yields (PY%AY); PY%AY is a useful term for comparisons of yields among vats. The PY%AY correlated positively with the sum of components (SofC) in cheese. The apparent low estimation of SofC led to the idea of adjusting upwards, for each vat, the 5 measured components in the formula by the observed SofC, as a fraction. The mean of the adjusted predicted yields as percentages of actual yields was 99.99%. The adjusted forms of the General, Barbano, and Van Slyke formulas gave predicted yields equal to the actual yields. It was apparent that unadjusted yield formulas did not accurately predict yield; however, unadjusted PY%AY can be useful as a control tool for analyses of cheese and milk. It was unexpected that total milk protein in the adjusted General formula gave the same predicted yields as casein and paracasein, indicating that casein or paracasein may not always be necessary for successful yield prediction. 
The use of constants for recovery of fat and protein in the adjusted General formula gave adjusted predicted yields equal to actual yields, indicating that analyses of cheese for protein and fat may not always be necessary for yield prediction. Composition of cheese was estimated using a predictive formula; actual yield was needed for estimation of composition. Adjusted formulas are recommended for estimating target yields and cheese yield efficiency. Constants for solute exclusion, protein-associated milk salts, and whey solids could be used and reduced the complexity of the General formula. Normalization of fat recovery increased variability of predicted yields. Copyright © 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
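
    For reference, a common textbook statement of the (unadjusted) Van Slyke formula discussed above can be sketched as below. The constants (0.93 fat recovery, 0.1 casein loss, 1.09 for other solids) follow the classic form and are assumptions here, not the adjusted formulas the authors recommend:

```python
def van_slyke_yield(fat, casein, moisture=0.37):
    """Classic Van Slyke Cheddar yield estimate (kg cheese per 100 kg milk).

    fat, casein: kg per 100 kg milk; moisture: cheese moisture fraction.
    The constants are the textbook values; treat them as assumptions.
    """
    return ((0.93 * fat) + (casein - 0.1)) * 1.09 / (1.0 - moisture)

# Typical milk (~3.6% fat, ~2.5% casein) gives roughly 10 kg per 100 kg milk.
print(round(van_slyke_yield(3.6, 2.5), 2))
```
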

  14. Computation of noise radiation from turbofans: A parametric study

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.

    1995-01-01

    This report presents the results of a parametric study of the turbofan far-field noise radiation using a finite element technique. Several turbofan noise radiation characteristics of both the inlet and the aft ducts have been examined through the finite element solutions. The predicted far-field principal lobe angle variations with duct Mach number and cut-off ratio compare very well with the available analytical results. The solutions also show that the far-field lobe angle is only a function of cut-off ratio, and nearly independent of the mode number. These results indicate that the finite element codes are well suited for the prediction of noise radiation characteristics of a turbofan. The effects of variations in the aft duct geometry are examined. The ability of the codes to handle ducts with acoustic treatments is also demonstrated.

  15. Numerical analysis of the hot-gas-side and coolant-side heat transfer in liquid rocket engine combustors

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Van, Luong

    1992-01-01

    The objectives of this paper are to develop a multidisciplinary computational methodology to predict the hot-gas-side and coolant-side heat transfer and to use it in parametric studies to recommend an optimized design of the coolant channels for a regeneratively cooled liquid rocket engine combustor. An integrated numerical model, which incorporates CFD for the hot-gas thermal environment and thermal analysis for the liner and coolant channels, was developed. This integrated CFD/thermal model was validated by comparing predicted heat fluxes with those of hot-firing tests and industrial design methods for a 40 k calorimeter thrust chamber and the Space Shuttle Main Engine Main Combustion Chamber. Parametric studies were performed for the Advanced Main Combustion Chamber to find a strategy for a proposed combustion chamber coolant channel design.

  16. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphic techniques for calculating scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code, evaluating the RCS of arbitrarily shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.

  17. Summarizing techniques that combine three non-parametric scores to detect disease-associated 2-way SNP-SNP interactions.

    PubMed

    Sengupta Chattopadhyay, Amrita; Hsiao, Ching-Lin; Chang, Chien Ching; Lian, Ie-Bin; Fann, Cathy S J

    2014-01-01

    Identifying susceptibility genes that influence complex diseases is extremely difficult because loci often influence the disease state through genetic interactions. Numerous approaches to detect disease-associated SNP-SNP interactions have been developed, but none consistently generates high-quality results under different disease scenarios. Using summarizing techniques to combine a number of existing methods may provide a solution to this problem. Here we used three popular non-parametric methods, Gini, absolute probability difference (APD), and entropy, to develop two novel summary scores, namely the principal component score (PCS) and the Z-sum score (ZSS), with which to predict disease-associated genetic interactions. We used a simulation study to compare the performance of the non-parametric scores, the summary scores, the scaled-sum score (SSS; used in polymorphism interaction analysis (PIA)), and multifactor dimensionality reduction (MDR). The non-parametric methods achieved high power, but no non-parametric method outperformed all others under a variety of epistatic scenarios. PCS and ZSS, however, outperformed MDR. PCS, ZSS, and SSS displayed controlled type-I errors (<0.05), unlike the individual Gini (GS), APD (APDS), and entropy (ES) scores (>0.05). A real-data study using the Genetic Analysis Workshop 16 (GAW16) rheumatoid arthritis dataset identified a number of interesting SNP-SNP interactions. © 2013 Elsevier B.V. All rights reserved.
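
    The Z-sum summary score described above amounts to standardizing each non-parametric score across SNP pairs and summing, so that no single score's scale dominates. A minimal sketch on synthetic scores (the score values are random stand-ins for the Gini, APD, and entropy scores):

```python
import numpy as np

rng = np.random.default_rng(2)

# Three non-parametric interaction scores for 1000 SNP pairs (synthetic).
gini = rng.random(1000)
apd = rng.random(1000)
entropy = rng.random(1000)

def zsum(*scores):
    """Z-sum score: z-standardize each score across SNP pairs, then sum."""
    return sum((s - s.mean()) / s.std() for s in scores)

zss = zsum(gini, apd, entropy)
top_pair = int(np.argmax(zss))  # candidate interaction to follow up
print(zss.shape, top_pair >= 0)
```
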

  18. Comparison of four approaches to a rock facies classification problem

    USGS Publications Warehouse

    Dubois, M.K.; Bohling, Geoffrey C.; Chakrabarti, S.

    2007-01-01

    In this study, seven classifiers based on four different approaches were tested in a rock facies classification problem: classical parametric methods using Bayes' rule, and non-parametric methods using fuzzy logic, k-nearest neighbor, and a feed-forward, back-propagating artificial neural network. Determining the most effective classifier for geologic facies prediction in wells without cores in the Panoma gas field, in southwest Kansas, was the objective. Study data include 3600 samples with known rock facies class (from core), with each sample having either four or five measured properties (wire-line log curves) and two derived geologic properties (geologic constraining variables). The sample set was divided into two subsets, one for training and one for testing the ability of the trained classifier to correctly assign classes. Artificial neural networks clearly outperformed all other classifiers and are effective tools for this particular classification problem. Classical parametric models were inadequate due to the nature of the predictor variables (high dimensional and not linearly correlated) and the feature space of the classes (overlapping). The other non-parametric methods tested, k-nearest neighbor and fuzzy logic, would need considerable improvement to match the neural network effectiveness, but further work, possibly combining certain aspects of the three non-parametric methods, may be justified. © 2006 Elsevier Ltd. All rights reserved.
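
    Of the non-parametric methods compared above, k-nearest neighbor is simple enough to sketch directly: Euclidean distance, then majority vote among the k closest training samples. The two-facies data below are synthetic, and all sizes are illustrative:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal k-nearest-neighbor classifier: Euclidean distance,
    majority vote among the k closest training samples."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = y_train[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(3)
# Two synthetic "facies", separated in the space of four log curves.
X0 = rng.normal(-1.0, 0.5, size=(50, 4))
X1 = rng.normal(+1.0, 0.5, size=(50, 4))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

pred = knn_predict(X, y, np.array([[-1.0, -1.0, -1.0, -1.0],
                                   [+1.0, +1.0, +1.0, +1.0]]))
print(pred)
```
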

  19. Self-tuning bistable parametric feedback oscillator: Near-optimal amplitude maximization without model information

    NASA Astrophysics Data System (ADS)

    Braun, David J.; Sutas, Andrius; Vijayakumar, Sethu

    2017-01-01

    Theory predicts that parametrically excited oscillators, tuned to operate under resonant conditions, are capable of the large-amplitude oscillation useful in diverse applications, such as signal amplification, communication, and analog computation. However, due to amplitude saturation caused by nonlinearity, lack of robustness to model uncertainty, and limited sensitivity to parameter modulation, these oscillators require fine-tuning and strong modulation to generate robust large-amplitude oscillation. Here we present a principle of self-tuning parametric feedback excitation that alleviates the above-mentioned limitations. This is achieved using a minimalistic control implementation that performs (i) self-tuning (slow parameter adaptation) and (ii) feedback pumping (fast parameter modulation), without sophisticated signal processing of past observations. The proposed approach provides near-optimal amplitude maximization without requiring model-based control computation, previously perceived as inevitable when implementing optimal control principles in practical applications. Experimental implementation of the theory shows that the oscillator tunes itself near the onset of dynamic bifurcation to achieve extreme sensitivity to small resonant parametric perturbations. As a result, it achieves large-amplitude oscillations by capitalizing on the effect of nonlinearity, despite substantial model uncertainties and strong unforeseen external perturbations. We envision the present finding to provide an effective and robust approach to parametric excitation when it comes to real-world application.

  20. Random regression models using different functions to model test-day milk yield of Brazilian Holstein cows.

    PubMed

    Bignardi, A B; El Faro, L; Torres Júnior, R A A; Cardoso, V L; Machado, P F; Albuquerque, L G

    2011-10-31

    We analyzed 152,145 test-day records from 7317 first lactations of Holstein cows recorded from 1995 to 2003. Our objective was to model variations in test-day milk yield during the first lactation of Holstein cows by random regression model (RRM), using various functions in order to obtain adequate and parsimonious models for the estimation of genetic parameters. Test-day milk yields were grouped into weekly classes of days in milk, ranging from 1 to 44 weeks. The contemporary groups were defined as herd-test-day. The analyses were performed using a single-trait RRM, including the direct additive, permanent environmental and residual random effects. In addition, contemporary group and linear and quadratic effects of the age of cow at calving were included as fixed effects. The mean trend of milk yield was modeled with a fourth-order orthogonal Legendre polynomial. The additive genetic and permanent environmental covariance functions were estimated by random regression on two parametric functions, Ali and Schaeffer and Wilmink, and on B-spline functions of days in milk. The covariance components and the genetic parameters were estimated by the restricted maximum likelihood method. Results from RRM parametric and B-spline functions were compared to RRM on Legendre polynomials and with a multi-trait analysis, using the same data set. Heritability estimates presented similar trends during mid-lactation (13 to 31 weeks) and between week 37 and the end of lactation, for all RRM. Heritabilities obtained by multi-trait analysis were of a lower magnitude than those estimated by RRM. The RRMs with a higher number of parameters were more useful to describe the genetic variation of test-day milk yield throughout the lactation. RRM using B-spline and Legendre polynomials as base functions appears to be the most adequate to describe the covariance structure of the data.
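
    The Wilmink function mentioned above, y(t) = a + b t + c exp(-k t), is linear in (a, b, c) once the exponential rate k is fixed, so it can be fitted by ordinary least squares. A sketch on simulated test-day records; the parameter values, and the use of weeks with a fixed rate of 0.05, are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated test-day milk yields over 44 weeks (values are illustrative).
t = np.arange(1, 45, dtype=float)            # weeks in milk
a_true, b_true, c_true = 30.0, -0.25, -12.0  # assumed curve parameters
y = (a_true + b_true * t + c_true * np.exp(-0.05 * t)
     + 0.3 * rng.standard_normal(t.size))

# With the exponential rate fixed, the Wilmink curve is linear in (a, b, c).
A = np.column_stack([np.ones_like(t), t, np.exp(-0.05 * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 1))
```
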

  1. Control of entanglement dynamics in a system of three coupled quantum oscillators.

    PubMed

    Gonzalez-Henao, J C; Pugliese, E; Euzzor, S; Meucci, R; Roversi, J A; Arecchi, F T

    2017-08-30

    Dynamical control of entanglement and its connection with the classical concept of instability is an intriguing matter which deserves accurate investigation for its important role in information processing, cryptography and quantum computing. Here we consider a tripartite quantum system made of three coupled quantum parametric oscillators in equilibrium with a common heat bath. The introduced parametrization consists of a pulse train with adjustable amplitude and duty cycle representing a more general case for the perturbation. From the experimental observation of the instability in the classical system we are able to predict the parameter values for which the entangled states exist. A different amount of entanglement and different onset times emerge when comparing two and three quantum oscillators. The system and the parametrization considered here open new perspectives for manipulating quantum features at high temperatures.

  2. Can blood and semen presepsin levels in males predict pregnancy in couples undergoing intra-cytoplasmic sperm injection?

    PubMed Central

    Ovayolu, Ali; Arslanbuğa, Cansev Yilmaz; Gun, Ismet; Devranoglu, Belgin; Ozdemir, Arman; Cakar, Sule Eren

    2016-01-01

    Objective: To determine whether semen and plasma presepsin values measured in men with normozoospermia and oligoasthenospermia undergoing in vitro fertilization would be helpful in predicting ongoing pregnancy and live birth. Methods: Group-I was defined as patients who had pregnancy after treatment and Group-II comprised those with no pregnancy. Semen and blood presepsin values were subsequently compared between the groups. Parametric comparisons were performed using Student’s t-test, and non-parametric comparisons were conducted using the Mann-Whitney U test. Results: There were 42 patients in Group-I and 72 in Group-II. In the context of successful pregnancy and live birth, semen presepsin values were statistically significantly higher in Group-I than in Group-II (p = 0.004 and p = 0.037, respectively). The most appropriate semen presepsin cut-off value for predicting both ongoing pregnancy and live birth was calculated as 199 pg/mL. At this cut-off, sensitivity was 64.5% and 59.3%, specificity 57.0% and 54.2%, and positive predictive value 37.0% and 29.6%, respectively; the negative predictive value was 80.4% in both instances. Conclusion: Semen presepsin values could be a new marker that may enable the prediction of successful pregnancy and/or live birth. Its negative predictive values are especially high. PMID:27882005

  3. Predicting Great Lakes fish yields: tools and constraints

    USGS Publications Warehouse

    Lewis, C.A.; Schupp, D.H.; Taylor, W.W.; Collins, J.J.; Hatch, Richard W.

    1987-01-01

    Prediction of yield is a critical component of fisheries management. The development of sound yield prediction methodology and the application of the results of yield prediction are central to the evolution of strategies to achieve stated goals for Great Lakes fisheries and to the measurement of progress toward those goals. Despite general availability of species yield models, yield prediction for many Great Lakes fisheries has been poor due to the instability of the fish communities and the inadequacy of available data. A host of biological, institutional, and societal factors constrain both the development of sound predictions and their application to management. Improved predictive capability requires increased stability of Great Lakes fisheries through rehabilitation of well-integrated communities, improvement of data collection, data standardization and information-sharing mechanisms, and further development of the methodology for yield prediction. Most important is the creation of a better-informed public that will in turn establish the political will to do what is required.

  4. Smagorinsky-type diffusion in a high-resolution GCM

    NASA Astrophysics Data System (ADS)

    Schaefer-Rolffs, Urs; Becker, Erich

    2013-04-01

    The parametrization of (horizontal) momentum diffusion is a paramount component of a general circulation model (GCM). Aside from friction in the boundary layer, a relevant fraction of kinetic energy is dissipated in the free atmosphere, and it is known that a linear harmonic turbulence model is not sufficient to obtain a reasonable simulation of the kinetic energy spectrum. Therefore, empirical hyper-diffusion schemes are often employed, despite disadvantages such as the violation of energy conservation and of the second law of thermodynamics. At IAP we have developed an improved parametrization of horizontal diffusion that is based on Smagorinsky's nonlinear, energy-conserving formulation. This approach is extended by the dynamic Smagorinsky model (DSM) of M. Germano. In this new scheme, the mixing length is no longer a prescribed parameter but is calculated dynamically from the resolved flow so as to preserve scale invariance for the horizontal energy cascade. The so-called Germano identity is solved by a tensor-norm ansatz that yields a positive-definite frictional heating. We present results from an investigation using the DSM as a parametrization of horizontal diffusion in a high-resolution version of the Kühlungsborn Mechanistic general Circulation Model (KMCM) with spectral truncation at horizontal wavenumber 330. The DSM calculates the Smagorinsky parameter cS independently of the resolution scale. We find that this method yields an energy spectrum that exhibits a pronounced transition from a synoptic -3 to a mesoscale -5/3 slope at wavenumbers around 50. At the highest wavenumbers, a behaviour similar to that often obtained by tuning the hyper-diffusion is achieved self-consistently. This result is very sensitive to the explicit choice of the test filter in the DSM.
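
    The Smagorinsky closure underlying the scheme can be sketched in a few lines. This is a minimal 2-D illustration of nu_t = (c_S * dx)^2 * |S| on a uniform grid; the fixed c_S = 0.2 is an assumed typical value, whereas the dynamic model described above would instead estimate c_S from the resolved flow.

    ```python
    import numpy as np

    def smagorinsky_viscosity(u, v, dx, c_s=0.2):
        """Smagorinsky eddy viscosity nu_t = (c_s * dx)**2 * |S| for a 2-D
        horizontal velocity field (rows = y, columns = x) on a uniform grid."""
        dudy, dudx = np.gradient(u, dx, dx)
        dvdy, dvdx = np.gradient(v, dx, dx)
        # horizontal strain-rate magnitude |S| = sqrt(2 S_ij S_ij)
        s11, s22 = dudx, dvdy
        s12 = 0.5 * (dudy + dvdx)
        s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
        return (c_s * dx)**2 * s_mag
    ```

    For a pure shear flow u = S*y the result reduces to (c_s * dx)^2 * S everywhere, which is a convenient sanity check.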

  5. Parametric binary dissection

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Crockett, Thomas W.; Nicol, David M.

    1993-01-01

    Binary dissection is widely used to partition non-uniform domains over parallel computers. The algorithm does not consider the perimeter, surface area, or aspect ratio of the regions being generated and can yield decompositions with poor communication-to-computation ratios. Parametric Binary Dissection (PBD) is a new algorithm in which each cut is chosen to minimize load + λ × (shape). In a 2- (or 3-) dimensional problem, load is the amount of computation to be performed in a subregion and shape could refer to the perimeter (respectively, surface area) of that subregion. Shape is a measure of communication overhead, and the parameter λ permits us to trade off load imbalance against communication overhead. When λ is zero, the algorithm reduces to plain binary dissection. The algorithm can be used to partition graphs embedded in 2- or 3-d: load is the number of nodes in a subregion, shape the number of edges that leave that subregion, and λ the ratio of the time to communicate over an edge to the time to compute at a node. An algorithm is presented that finds the depth-d parametric dissection of an embedded graph with n vertices and e edges in O(max(n log n, de)) time, an improvement over the O(dn log n) time of plain binary dissection. Parallel versions of this algorithm are also presented; the best of these requires O((n/p) log³ p) time on a p-processor hypercube, assuming graphs of bounded degree. How PBD is applied to 3-d unstructured meshes and yields partitions better than those obtained by plain dissection is described. Its application to the color image quantization problem is also discussed, in which samples in a high-resolution color space are mapped onto a lower-resolution space in a way that minimizes the color error.
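
    The cut criterion can be illustrated with a toy 2-D workload grid. The sketch below is ours, not the paper's algorithm (which works on embedded graphs and achieves the complexity bounds quoted above): it simply scans cut positions along one axis and scores each half by load plus λ times its perimeter.

    ```python
    import numpy as np

    def pbd_cut(loads, lam):
        """Pick the cut position along axis 0 of a 2-D workload grid that
        minimizes, over the two halves, max(load + lam * perimeter).
        With lam == 0 this reduces to plain binary dissection."""
        ny, nx = loads.shape
        best = None
        for c in range(1, ny):
            halves = (loads[:c], loads[c:])
            cost = max(h.sum() + lam * 2 * (h.shape[0] + nx) for h in halves)
            if best is None or cost < best[0]:
                best = (cost, c)
        return best[1]
    ```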

  6. The Impact of Three-Dimensional Effects on the Simulation of Turbulence Kinetic Energy in a Major Alpine Valley

    NASA Astrophysics Data System (ADS)

    Goger, Brigitta; Rotach, Mathias W.; Gohm, Alexander; Fuhrer, Oliver; Stiperski, Ivana; Holtslag, Albert A. M.

    2018-02-01

    The correct simulation of the atmospheric boundary layer (ABL) is crucial for reliable weather forecasts in truly complex terrain. However, common assumptions for model parametrizations are only valid for horizontally homogeneous and flat terrain. Here, we evaluate the turbulence parametrization of the numerical weather prediction model COSMO with a horizontal grid spacing of Δ x = 1.1 km for the Inn Valley, Austria. The long-term, high-resolution turbulence measurements of the i-Box measurement sites provide a useful data pool of the ABL structure in the valley and on slopes. We focus on days and nights when ABL processes dominate and a thermally-driven circulation is present. Simulations are performed for case studies with both a one-dimensional turbulence parametrization, which only considers the vertical turbulent exchange, and a hybrid turbulence parametrization, also including horizontal shear production and advection in the budget of turbulence kinetic energy (TKE). We find a general underestimation of TKE by the model with the one-dimensional turbulence parametrization. In the simulations with the hybrid turbulence parametrization, the modelled TKE has a more realistic structure, especially in situations when the TKE production is dominated by shear related to the afternoon up-valley flow, and during nights, when a stable ABL is present. The model performance also improves for stations on the slopes. An estimation of the horizontal shear production from the observation network suggests that three-dimensional effects are a relevant part of TKE production in the valley.

  8. Reliable estimates of predictive uncertainty for an Alpine catchment using a non-parametric methodology

    NASA Astrophysics Data System (ADS)

    Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.

    2017-04-01

    Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.

  9. Free response approach in a parametric system

    NASA Astrophysics Data System (ADS)

    Huang, Dishan; Zhang, Yueyue; Shao, Hexi

    2017-07-01

    In this study, a new approach to predicting the free response of a parametric system is investigated. The response is expressed in the special form of a trigonometric series multiplied by an exponentially decaying function of time, based on the concept of frequency splitting. By applying harmonic balance, the parametric vibration equation is transformed into an infinite set of homogeneous linear equations, from which the principal oscillation frequency can be computed and all coefficients of the harmonic components can be obtained. With initial conditions, the arbitrary constants in the general solution can be determined. To analyze computational accuracy and consistency, an approach error function is defined, which is used to assess the computational error of the proposed approach and of a standard numerical approach based on the Runge-Kutta algorithm. Furthermore, an example of a dynamic model of airplane wing flutter on a turbine engine is given to illustrate the applicability of the proposed approach. Numerical solutions show that the proposed approach exhibits high accuracy in mathematical expression, and it is valuable for theoretical research and engineering applications of parametric systems.

  10. Real-time solution of linear computational problems using databases of parametric reduced-order models with arbitrary underlying meshes

    NASA Astrophysics Data System (ADS)

    Amsallem, David; Tezaur, Radek; Farhat, Charbel

    2016-12-01

    A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.

  11. How accurate are the parametrized correlation energies of the uniform electron gas?

    NASA Astrophysics Data System (ADS)

    Bhattarai, Puskar; Patra, Abhirup; Shahi, Chandra; Perdew, John P.

    2018-05-01

    Density functional approximations to the exchange-correlation energy are designed to be exact for an electron gas of uniform density parameter rs and relative spin polarization ζ, requiring a parametrization of the correlation energy per electron ɛc(rs, ζ). We consider three widely used parametrizations [J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981), 10.1103/PhysRevB.23.5048, or PZ81; S. H. Vosko, L. Wilk, and M. Nusair, Can. J. Phys. 58, 1200 (1980), 10.1139/p80-159, or VWN80; and J. P. Perdew and Y. Wang, Phys. Rev. B 45, 13244 (1992), 10.1103/PhysRevB.45.13244, or PW92] that interpolate the quantum Monte Carlo (QMC) correlation energies of Ceperley-Alder [Phys. Rev. Lett. 45, 566 (1980), 10.1103/PhysRevLett.45.566], while extrapolating them to the known high- (rs → 0) and low-density (rs → ∞) limits. For the physically important range 0.5 ≤ rs ≤ 20, they agree closely with one another, with differences of 0.01 eV (0.5%) or less between the latter two. The density parameter interpolation (DPI), designed to predict these energies by interpolation between the known high- and low-density limits with almost no other input (and none for ζ = 0), is also reasonably close, both in its original version and with corrections for ζ ≠ 0. Moreover, the DPI and PW92 at rs = 0.5 are very close to the high-density expansion. The larger discrepancies with the QMC of Spink et al. [Phys. Rev. B 88, 085121 (2013), 10.1103/PhysRevB.88.085121], of order 0.1 eV (5%) at rs = 0.5, are thus surprising, suggesting that the constraint-based PW92 and VWN80 parametrizations are more accurate than the QMC for rs < 2. For rs > 2, however, the QMC of Spink et al. confirms the dependence upon relative spin polarization predicted by the parametrizations.
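
    For reference, the PZ81 parametrization for the unpolarized gas can be written down directly from the published coefficients; the sketch below uses those standard values (in hartree) and is an illustration, not a replacement for a tested DFT library routine.

    ```python
    import numpy as np

    def ec_pz81(rs):
        """Perdew-Zunger (1981) parametrization of the correlation energy per
        electron (hartree) of the unpolarized (zeta = 0) uniform electron gas."""
        rs = np.asarray(rs, dtype=float)
        # high-density (rs < 1) expansion coefficients
        A, B, C, D = 0.0311, -0.048, 0.0020, -0.0116
        # low-density (rs >= 1) fit to the Ceperley-Alder QMC energies
        gamma, beta1, beta2 = -0.1423, 1.0529, 0.3334
        high = A * np.log(rs) + B + C * rs * np.log(rs) + D * rs
        low = gamma / (1.0 + beta1 * np.sqrt(rs) + beta2 * rs)
        return np.where(rs < 1.0, high, low)
    ```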

  12. Mechanochemical Preparation of Organic Nitro Compounds

    DTIC Science & Technology

    selectivity were found to depend on the ratios of the reactants and the catalyst. A parametric study addressed the effects of milling time, temperature ...Aromatic compounds such as toluene are commercially nitrated using a combination of nitric acid with other strong acids. This process relies on the...was synthesized by milling toluene with sodium nitrate and molybdenum trioxide as a catalyst. Several parameters affecting the desired product yield and

  13. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality was first reduced via a high-dimensional (approximate) factor model implemented by the principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
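
    The two-step construction described above (factor extraction by principal components, then a regression on the estimated factors) can be sketched as follows. This is a plain linear version for illustration only; the paper's sufficient forecasting replaces the final linear step with projections onto sufficient predictive indices, and all names here are ours.

    ```python
    import numpy as np

    def factor_forecast(X, y, k=3):
        """Extract k latent factors from a T x p predictor panel X via
        principal components, then fit a linear regression of the target y
        on the estimated factors. Returns the factors and coefficients."""
        Xc = X - X.mean(0)
        # leading right singular vectors give the principal component loadings
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        F = Xc @ vt[:k].T                      # estimated factors (T x k)
        design = np.column_stack([np.ones(len(F)), F])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        return F, beta
    ```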

  14. Numerical investigation and Uncertainty Quantification of the Impact of the geological and geomechanical properties on the seismo-acoustic responses of underground chemical explosions

    NASA Astrophysics Data System (ADS)

    Ezzedine, S. M.; Pitarka, A.; Vorobiev, O.; Glenn, L.; Antoun, T.

    2017-12-01

    We have performed three-dimensional high-resolution simulations of underground chemical explosions conducted recently in a jointed rock outcrop as part of the Source Physics Experiments (SPE) being conducted at the Nevada National Security Site (NNSS). The main goal of the current study is to investigate the effects of the structural and geomechanical properties on the spall phenomena due to underground chemical explosions and their subsequent effect on the seismo-acoustic signature at far distances. Two parametric studies have been undertaken to assess the impact of 1) different conceptual geological models, including single-layer and two-layer models, with and without joints and with and without varying geomechanical properties, and 2) the depths of burst and yields of the chemical explosions. Through these investigations we have explored not only the near-field response of the chemical explosions but also the far-field seismic and acoustic signatures. The near-field simulations were conducted using the Eulerian and Lagrangian codes GEODYN and GEODYN-L, respectively, while the far-field seismic simulations were conducted using the elastic wave propagation code WPP and the acoustic response using the Kirchhoff-Helmholtz-Rayleigh time-dependent approximation code KHR. Through a series of simulations we have recorded the velocity field histories 1) at the ground surface on an acoustic-source patch for the acoustic simulations, and 2) on a seismic-source box for the seismic simulations. We first analyzed the SPE3 experimental data and simulated results, then simulated SPE4-prime, SPE5, and SPE6 to anticipate their seismo-acoustic responses under conditions of uncertainty. SPE experiments were conducted in a granitic formation; we have extended the parametric study to include other geological settings such as dolomite and alluvial formations. These parametric studies enabled us to 1) investigate the geotechnical and geophysical key parameters that impact the seismo-acoustic responses of underground chemical explosions and 2) decipher and rank, through a global sensitivity analysis, the most important key parameters to be characterized on site to minimize uncertainties in prediction and discrimination.

  15. A Bayesian goodness of fit test and semiparametric generalization of logistic regression with measurement data.

    PubMed

    Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E

    2013-06-01

    Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. © 2013, The International Biometric Society.

  16. Suitability of parametric models to describe the hydraulic properties of an unsaturated coarse sand and gravel

    USGS Publications Warehouse

    Mace, Andy; Rudolph, David L.; Kachanoski, R. Gary

    1998-01-01

    The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous-cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, for seven unsaturated and one saturated steady-state flow conditions. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem (m = 1 - 1/n) criterion is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with the residual water content (θr) specified according to independent measurements of θ at values of h where dθ/dh ∼ 0, rather than model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
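
    The van Genuchten retention curve with the Mualem restriction, and the conductivity function predicted from it, can be sketched directly from the published closed forms. This is a hedged illustration with arbitrary example parameters; Mualem's pore-connectivity exponent l = 0.5 is the conventional assumed value.

    ```python
    import numpy as np

    def vg_theta(h, theta_r, theta_s, alpha, n):
        """van Genuchten (1980) water retention with the Mualem restriction
        m = 1 - 1/n; h is suction head (positive when unsaturated)."""
        m = 1.0 - 1.0 / n
        se = (1.0 + (alpha * np.abs(h))**n)**(-m)   # effective saturation
        return theta_r + (theta_s - theta_r) * se

    def mualem_k(h, k_s, alpha, n, l=0.5):
        """Unsaturated conductivity predicted from the retention parameters
        via Mualem (1976), as in van Genuchten's closed-form expression."""
        m = 1.0 - 1.0 / n
        se = (1.0 + (alpha * np.abs(h))**n)**(-m)
        return k_s * se**l * (1.0 - (1.0 - se**(1.0 / m))**m)**2
    ```

    At saturation (h = 0) the functions return θs and Ks; for coarse media, K drops by orders of magnitude over a short suction range, which is why the paper stresses measuring at least one unsaturated K value.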

  17. Conceptual design of reduced energy transports

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.; Harper, M.; Smith, C. L.; Waters, M. H.; Williams, L. J.

    1975-01-01

    This paper reports the results of a conceptual design study of new, near-term fuel-conservative aircraft. A parametric study was made to determine the effects of cruise Mach number and fuel cost on the 'optimum' configuration characteristics and on economic performance. Supercritical wing technology and advanced engine cycles were assumed. For each design, the wing geometry was optimized to give maximum return on investment at a particular fuel cost. Based on the results of the parametric study, a reduced energy configuration was selected. Compared with existing transport designs, the reduced energy design has a higher aspect ratio wing with lower sweep, and cruises at a lower Mach number. It yields about 30% more seat-miles/gal than current wide-body aircraft. At the higher fuel costs anticipated in the future, the reduced energy design has about the same economic performance as existing designs.

  18. Impact Response Comparison Between Parametric Human Models and Postmortem Human Subjects with a Wide Range of Obesity Levels.

    PubMed

    Zhang, Kai; Cao, Libo; Wang, Yulong; Hwang, Eunjoo; Reed, Matthew P; Forman, Jason; Hu, Jingwen

    2017-10-01

    Field data analyses have shown that obesity significantly increases the occupant injury risks in motor vehicle crashes, but the injury assessment tools for people with obesity are largely lacking. The objectives of this study were to use a mesh morphing method to rapidly generate parametric finite element models with a wide range of obesity levels and to evaluate their biofidelity against impact tests using postmortem human subjects (PMHS). Frontal crash tests using three PMHS seated in a vehicle rear seat compartment with body mass index (BMI) from 24 to 40 kg/m² were selected. To develop the human models matching the PMHS geometry, statistical models of external body shape, rib cage, pelvis, and femur were applied to predict the target geometry using age, sex, stature, and BMI. A mesh morphing method based on radial basis functions was used to rapidly morph a baseline human model into the target geometry. The model-predicted body excursions and injury measures were compared to the PMHS tests. Comparisons of occupant kinematics and injury measures between the tests and simulations showed reasonable correlations across the wide range of BMI levels. The parametric human models have the capability to account for the obesity effects on the occupant impact responses and injury risks. © 2017 The Obesity Society.
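
    The radial-basis-function morphing idea can be sketched compactly: solve for RBF weights that carry a set of control (landmark) points to their targets, then apply the interpolated displacement field to every mesh node. This is our minimal Gaussian-kernel sketch, not the authors' pipeline; production morphing typically adds a polynomial term and other kernels.

    ```python
    import numpy as np

    def rbf_morph(ctrl_src, ctrl_dst, nodes, eps=1.0):
        """Morph mesh nodes so control points ctrl_src map (approximately)
        to ctrl_dst, interpolating displacements with Gaussian RBFs."""
        def kernel(a, b):
            d2 = ((a[:, None, :] - b[None, :, :])**2).sum(-1)
            return np.exp(-eps * d2)
        disp = ctrl_dst - ctrl_src
        # small ridge term keeps the landmark system well conditioned
        weights = np.linalg.solve(kernel(ctrl_src, ctrl_src)
                                  + 1e-10 * np.eye(len(ctrl_src)), disp)
        return nodes + kernel(nodes, ctrl_src) @ weights
    ```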

  19. An analytical parametric study of the broadband noise from axial-flow fans

    NASA Technical Reports Server (NTRS)

    Chou, Shau-Tak; George, Albert R.

    1987-01-01

    The rotating dipole analysis of Ffowcs Williams and Hawkings (1969) is used to predict the far field noise radiation due to various rotor broadband noise mechanisms. Consideration is given to inflow turbulence noise, attached boundary layer/trailing-edge interaction noise, tip-vortex formation noise, and trailing-edge thickness noise. The parametric dependence of broadband noise from unducted axial-flow fans on several critical variables is studied theoretically. The angle of attack of the rotor blades, which is related to the rotor performance, is shown to be important to the trailing-edge noise and to the tip-vortex formation noise.

  20. Parametric decay of plasma waves near the upper-hybrid resonance

    DOE PAGES

    Dodin, I. Y.; Arefiev, A. V.

    2017-03-28

    An intense X wave propagating perpendicularly to dc magnetic field is unstable with respect to a parametric decay into an electron Bernstein wave and a lower-hybrid wave. A modified theory of this effect is proposed that extends to the high-intensity regime, where the instability rate γ ceases to be a linear function of the incident-wave amplitude. An explicit formula for γ is derived and expressed in terms of cold-plasma parameters. Here, theory predictions are in reasonable agreement with the results of the particle-in-cell simulations presented in a separate publication.

  1. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. 
As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in percent ratios relative to no prediction for a duty cycle of 80% at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The experiments also confirm that EKF-GPR+ controls the duty cycle with reasonable accuracy.
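
    The variance-based gating rule described above can be sketched as a threshold calibrated to a target duty cycle. This is a minimal illustration only: the helper names and the calibration rule are hypothetical, not the EKF-GPR+ implementation.

    ```python
    def gate_threshold(pred_stds, duty_cycle=0.8):
        """Pick a predictive-std threshold so that roughly `duty_cycle`
        of the breathing points keep the beam on (std at or below it)."""
        s = sorted(pred_stds)
        k = max(int(duty_cycle * len(s)) - 1, 0)
        return s[k]

    def beam_on(pred_std, threshold):
        # The beam stays on only where the GPR predictive uncertainty is
        # low; points with larger expected prediction error pause the beam.
        return pred_std <= threshold
    ```

    The duty cycle here trades beam-on time against targeting error, matching the reported behaviour that gating lowers RMS error at the cost of a reduced duty cycle.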

  2. Parametric Decay Instability and Dissipation of Low-frequency Alfvén Waves in Low-beta Turbulent Plasmas

    NASA Astrophysics Data System (ADS)

    Fu, Xiangrong; Li, Hui; Guo, Fan; Li, Xiaocan; Roytershteyn, Vadim

    2018-03-01

    Evolution of the parametric decay instability (PDI) of a circularly polarized Alfvén wave in a turbulent low-beta plasma background is investigated using 3D hybrid simulations. It is shown that the turbulence reduces the growth rate of PDI as compared to the linear theory predictions, but PDI can still exist. Interestingly, the damping rate of the ion acoustic mode (as the product of PDI) is also reduced as compared to the linear Vlasov predictions. Nonetheless, significant heating of ions in the direction parallel to the background magnetic field is observed due to resonant Landau damping of the ion acoustic waves. In low-beta turbulent plasmas, PDI can provide an important channel for energy dissipation of low-frequency Alfvén waves at a scale much larger than the ion kinetic scales, different from the traditional turbulence dissipation models.

  3. Incremental harmonic balance method for predicting amplitudes of a multi-d.o.f. non-linear wheel shimmy system with combined Coulomb and quadratic damping

    NASA Astrophysics Data System (ADS)

    Zhou, J. X.; Zhang, L.

    2005-01-01

    Incremental harmonic balance (IHB) formulations are derived for general multiple-degree-of-freedom (d.o.f.) non-linear autonomous systems and developed for a four-d.o.f. aircraft wheel shimmy system with combined Coulomb and velocity-squared damping. A multi-harmonic analysis is performed and amplitudes of limit cycles are predicted. Within a large range of parametric variations in aircraft taxi velocity, the IHB method gives results of high accuracy at much lower computational cost than a parametric continuation method. In particular, the IHB method avoids the stiffness problems that arise in direct numerical treatment of the aircraft wheel shimmy system equations. The development is applicable to other vibration control systems that include commonly used dry friction devices or velocity-squared hydraulic dampers.

  4. Two-dimensional finite-element analyses of simulated rotor-fragment impacts against rings and beams compared with experiments

    NASA Technical Reports Server (NTRS)

    Stagliano, T. R.; Witmer, E. A.; Rodal, J. J. A.

    1979-01-01

    Finite element modeling alternatives as well as the utility and limitations of the two-dimensional structural response computer code CIVM-JET 4B for predicting the transient, large-deflection, elastic-plastic structural responses of two-dimensional beam and/or ring structures subjected to rigid-fragment impact were investigated. The applicability of the CIVM-JET 4B analysis and code for predicting steel containment ring response to impact by complex deformable fragments from a tri-hub burst of a T58 turbine rotor was studied. Dimensional analysis considerations were used in a parametric examination of data from engine rotor burst containment experiments and from sphere-beam impact experiments. The use of the CIVM-JET 4B computer code for making parametric structural response studies on both fragment-containment structure and fragment-deflector structure was illustrated. Modifications to the analysis/computation procedure were developed to alleviate restrictions.

  5. Decision tree methods: applications for classification and prediction.

    PubMed

    Song, Yan-Yan; Lu, Ying

    2015-04-25

    Decision tree methodology is a commonly used data mining method for establishing classification systems based on multiple covariates or for developing prediction algorithms for a target variable. This method classifies a population into branch-like segments that construct an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can efficiently deal with large, complicated datasets without imposing a complicated parametric structure. When the sample size is large enough, study data can be divided into training and validation datasets: the training dataset is used to build a decision tree model, and the validation dataset to decide on the appropriate tree size needed to achieve the optimal final model. This paper introduces frequently used algorithms for developing decision trees (including CART, C4.5, CHAID, and QUEST) and describes the SPSS and SAS programs that can be used to visualize tree structure.
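
    The kind of root-node split these algorithms search for can be sketched with a Gini-impurity criterion, as in CART. This is an illustrative sketch with hypothetical helper names, not the CART/C4.5/CHAID/QUEST implementations.

    ```python
    def gini(labels):
        """Gini impurity of a set of class labels."""
        n = len(labels)
        if n == 0:
            return 0.0
        counts = {}
        for y in labels:
            counts[y] = counts.get(y, 0) + 1
        return 1.0 - sum((c / n) ** 2 for c in counts.values())

    def best_split(xs, ys):
        """Return (threshold, weighted_gini) of the best binary split on a
        single numeric covariate, as a CART-style root-node split would."""
        best = (None, float("inf"))
        for t in sorted(set(xs)):
            left = [y for x, y in zip(xs, ys) if x <= t]
            right = [y for x, y in zip(xs, ys) if x > t]
            if not left or not right:
                continue
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
            if w < best[1]:
                best = (t, w)
        return best
    ```

    Growing a full tree repeats this search recursively on each resulting segment, which is why the method needs no parametric structure for the covariates.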

  6. Parametric studies with an atmospheric diffusion model that assesses toxic fuel hazards due to the ground clouds generated by rocket launches

    NASA Technical Reports Server (NTRS)

    Stewart, R. B.; Grose, W. L.

    1975-01-01

    Parametric studies were made with a multilayer atmospheric diffusion model to place quantitative limits on the uncertainty of predicting ground-level toxic rocket-fuel concentrations. Exhaust distributions in the ground cloud, stabilized cloud geometry, atmospheric coefficients, the effects of exhaust-plume afterburning of carbon monoxide (CO), the assumed surface mixing-layer division in the model, and model sensitivity to different meteorological regimes were studied. Large-scale differences in ground-level predictions are quantitatively described. Cloud alongwind growth for several meteorological conditions is shown to be in error because of incorrect application of previous diffusion theory. In addition, rocket-plume calculations indicate that almost all of the rocket-motor carbon monoxide is afterburned to carbon dioxide (CO2), thus reducing the toxic hazard due to CO. The afterburning is also shown to have a significant effect on cloud stabilization height and on ground-level concentrations of exhaust products.

  7. Parametric Study on the Response of Compression-Loaded Composite Shells With Geometric and Material Imperfections

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Starnes, James H., Jr.

    2004-01-01

    The results of a parametric study of the effects of initial imperfections on the buckling and postbuckling response of three unstiffened thin-walled compression-loaded graphite-epoxy cylindrical shells with different orthotropic and quasi-isotropic shell-wall laminates are presented. The imperfections considered include initial geometric shell-wall midsurface imperfections, shell-wall thickness variations, local shell-wall ply-gaps associated with the fabrication process, shell-end geometric imperfections, nonuniform applied end loads, and variations in the boundary conditions including the effects of elastic boundary conditions. A high-fidelity nonlinear shell analysis procedure that accurately accounts for the effects of these imperfections on the nonlinear responses and buckling loads of the shells is described. The analysis procedure includes a nonlinear static analysis that predicts stable response characteristics of the shells and a nonlinear transient analysis that predicts unstable response characteristics.

  8. Remotely sensed rice yield prediction using multi-temporal NDVI data derived from NOAA's-AVHRR.

    PubMed

    Huang, Jingfeng; Wang, Xiuzhen; Li, Xinxing; Tian, Hanqin; Pan, Zhuokun

    2013-01-01

    Grain-yield prediction using remotely sensed data has been intensively studied in wheat and maize, but such information is limited in rice, barley, oats and soybeans. The present study proposes a new framework for rice-yield prediction, which eliminates the influence of technology development, fertilizer application, and management improvement and can be used for the development and implementation of provincial rice-yield predictions. The technique requires the collection of remotely sensed data over an adequate time frame and a corresponding record of the region's crop yields. Longer normalized-difference-vegetation-index (NDVI) time series are preferable to shorter ones for the purposes of rice-yield prediction because the well-contrasted seasons in a longer time series provide the opportunity to build regression models with a wide application range. A regression analysis of the yield versus the year indicated an annual gain in the rice yield of 50 to 128 kg ha−1. Stepwise regression models for the remotely sensed rice-yield predictions have been developed for five typical rice-growing provinces in China. The prediction models for the remotely sensed rice yield indicated that the influences of the NDVIs on the rice yield were always positive. The association between the predicted and observed rice yields was highly significant without obvious outliers from 1982 to 2004. Independent validation found that the overall relative error is approximately 5.82%, and a majority of the relative errors were less than 5% in 2005 and 2006, depending on the study area. The proposed models can be used in an operational context to predict rice yields at the provincial level in China. The methodologies described in the present paper can be applied to any crop for which a sufficient time series of NDVI data and the corresponding historical yield information are available, as long as the historical yield increases significantly.
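
    The two-stage idea (remove the year trend that reflects technology and management gains, then regress the detrended yield on NDVI) can be sketched with ordinary least squares. All numbers below are hypothetical illustrations, not the paper's data or its stepwise-regression models.

    ```python
    def ols(x, y):
        """Least-squares fit y ≈ a + b*x; returns (intercept, slope)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        return my - b * mx, b

    # Step 1: estimate the annual technology/management trend (yield vs. year).
    years = [2000, 2001, 2002, 2003]        # hypothetical
    yields_t = [6.00, 6.10, 6.19, 6.31]     # t/ha, hypothetical
    a, b = ols(years, yields_t)             # b is the annual yield gain
    residuals = [y - (a + b * t) for t, y in zip(years, yields_t)]

    # Step 2: relate the detrended residuals to an NDVI predictor.
    ndvi = [0.61, 0.64, 0.60, 0.66]         # hypothetical
    a2, b2 = ols(ndvi, residuals)
    ```

    Detrending first is what lets the NDVI coefficients capture weather-driven yield variation rather than the steady year-on-year gain.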

  9. Remotely Sensed Rice Yield Prediction Using Multi-Temporal NDVI Data Derived from NOAA's-AVHRR

    PubMed Central

    Huang, Jingfeng; Wang, Xiuzhen; Li, Xinxing; Tian, Hanqin; Pan, Zhuokun

    2013-01-01

    Grain-yield prediction using remotely sensed data has been intensively studied in wheat and maize, but such information is limited in rice, barley, oats and soybeans. The present study proposes a new framework for rice-yield prediction, which eliminates the influence of technology development, fertilizer application, and management improvement and can be used for the development and implementation of provincial rice-yield predictions. The technique requires the collection of remotely sensed data over an adequate time frame and a corresponding record of the region's crop yields. Longer normalized-difference-vegetation-index (NDVI) time series are preferable to shorter ones for the purposes of rice-yield prediction because the well-contrasted seasons in a longer time series provide the opportunity to build regression models with a wide application range. A regression analysis of the yield versus the year indicated an annual gain in the rice yield of 50 to 128 kg ha−1. Stepwise regression models for the remotely sensed rice-yield predictions have been developed for five typical rice-growing provinces in China. The prediction models for the remotely sensed rice yield indicated that the influences of the NDVIs on the rice yield were always positive. The association between the predicted and observed rice yields was highly significant without obvious outliers from 1982 to 2004. Independent validation found that the overall relative error is approximately 5.82%, and a majority of the relative errors were less than 5% in 2005 and 2006, depending on the study area. The proposed models can be used in an operational context to predict rice yields at the provincial level in China. The methodologies described in the present paper can be applied to any crop for which a sufficient time series of NDVI data and the corresponding historical yield information are available, as long as the historical yield increases significantly. PMID:23967112

  10. Toward an Empirically-Based Parametric Explosion Spectral Model

    DTIC Science & Technology

    2010-09-01

    estimated (Richards and Kim, 2009). This archive could potentially provide 200 recordings of explosions at Semipalatinsk Test Site of the former Soviet...estimates of explosion yield, and prior work at the Nevada Test Site (NTS) (e.g., Walter et al., 1995) has found that explosions in weak materials have...2007). Corner frequency scaling of regional seismic phases for underground nuclear explosions at the Nevada Test Site , Bull. Seismol. Soc. Am. 97

  11. Three-photon states in nonlinear crystal superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonosyan, D. A.; Kryuchkyan, G. Yu.; Institute for Physical Researches, National Academy of Sciences Ashtarak-2, 0203 Ashtarak

    2011-04-15

    It has been a longstanding goal in quantum optics to realize controllable sources of joint multiphoton states, particularly photon triplets with arbitrary spectral characteristics. We demonstrate that such sources can be realized via cascaded parametric down-conversion (PDC) in superlattice structures of nonlinear and linear segments. We consider a scheme that involves two parametric processes, ω0 → ω1 + ω2 and ω2 → ω1 + ω1, under a pulsed pump, and investigate the spontaneous creation of a photon triplet as well as the generation of a high-intensity mode in intracavity three-photon splitting. We show the preparation of Greenberger-Horne-Zeilinger polarization-entangled states in cascaded type-II and type-I PDC in the framework of a dual-grid structure that involves two periodically poled crystals. We demonstrate a method of compensating the dispersive effects in the nonlinear segments by appropriately chosen linear dispersive segments of the superlattice, for preparation of heralded joint states of two polarized photons. In the case of intracavity three-photon splitting, we concentrate on the photon-number distributions, the third-order photon-number correlation function, and the Wigner functions. These quantities are examined both for short interaction times and in the over-transient regime, when dissipative effects are essential.

  12. Supratentorial lesions contribute to trigeminal neuralgia in multiple sclerosis.

    PubMed

    Fröhlich, Kilian; Winder, Klemens; Linker, Ralf A; Engelhorn, Tobias; Dörfler, Arnd; Lee, De-Hyung; Hilz, Max J; Schwab, Stefan; Seifert, Frank

    2018-06-01

    Background: It has been proposed that multiple sclerosis lesions afflicting the pontine trigeminal afferents contribute to trigeminal neuralgia in multiple sclerosis. So far, there are no imaging studies that have evaluated interactions between supratentorial lesions and trigeminal neuralgia in multiple sclerosis patients. Methods: We conducted a retrospective study and sought multiple sclerosis patients with trigeminal neuralgia and controls in a local database. Multiple sclerosis lesions were manually outlined and transformed into stereotaxic space. We determined the lesion overlap and performed a voxel-wise subtraction analysis. Secondly, we conducted a voxel-wise non-parametric analysis using the Liebermeister test. Results: From 12,210 multiple sclerosis patient records screened, we identified 41 patients with trigeminal neuralgia. The voxel-wise subtraction analysis yielded associations between trigeminal neuralgia and multiple sclerosis lesions in the pontine trigeminal afferents, as well as larger supratentorial lesion clusters in the contralateral insula and hippocampus. The non-parametric statistical analysis using the Liebermeister test yielded similar areas to be associated with multiple sclerosis-related trigeminal neuralgia. Conclusions: Our study confirms previous data on associations between multiple sclerosis-related trigeminal neuralgia and pontine lesions, and shows for the first time an association with lesions in the insular region, a region involved in pain processing and endogenous pain modulation.

  13. Onset of solid state mantle convection and mixing during magma ocean solidification

    NASA Astrophysics Data System (ADS)

    Maurice, Maxime; Tosi, Nicola; Samuel, Henri; Plesa, Ana-Catalina; Hüttig, Christian; Breuer, Doris

    2017-04-01

    The fractional crystallization of a magma ocean can cause the formation of a compositional layering that can play a fundamental role for the subsequent long-term dynamics of the interior, for the evolution of geochemical reservoirs, and for surface tectonics. In order to assess to what extent primordial compositional heterogeneities generated by magma ocean solidification can be preserved, we investigate the solidification of a whole-mantle Martian magma ocean, and in particular the conditions that allow solid state convection to start mixing the mantle before solidification is completed. To this end, we performed 2-D numerical simulations in a cylindrical geometry. We treat the liquid magma ocean in a parametrized way while we self-consistently solve the conservation equations of thermochemical convection in the growing solid cumulates accounting for pressure-, temperature- and, where it applies, melt-dependent viscosity as well as parametrized yield stress to account for plastic yielding. By testing the effects of different cooling rates and convective vigor, we show that for a lifetime of the liquid magma ocean of 1 Myr or longer, the onset of solid state convection prior to complete mantle crystallization is likely and that a significant part of the compositional heterogeneities generated by fractionation can be erased by efficient mantle mixing.
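
    The parametrized yield stress mentioned above can be sketched as a cap on viscosity. This is a generic plastic-yielding regularization under assumed values; the paper's exact rheology and parameters may differ.

    ```python
    def effective_viscosity(eta_creep, strain_rate_ii, yield_stress):
        """Effective viscosity with plastic yielding: wherever the stress
        2 * eta * e_II would exceed the yield stress sigma_y, the viscosity
        is capped at eta_y = sigma_y / (2 * e_II) so the stress stays at
        yield.  Units: Pa s, 1/s, Pa."""
        eta_yield = yield_stress / (2.0 * strain_rate_ii)
        return min(eta_creep, eta_yield)
    ```

    With this cap, fast-deforming regions weaken, which is what allows an otherwise stagnant lid to fail and the underlying cumulates to participate in convective mixing.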

  14. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    NASA Astrophysics Data System (ADS)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/−1.6)% of the IMF explodes as core-collapse supernovae (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels.
Chempy could be a powerful tool to confront predictions from stellar nucleosynthesis with far more complex abundance data sets and to refine the physical processes governing the chemical evolution of stellar systems.

  15. An Efficient Non-iterative Bulk Parametrization of Surface Fluxes for Stable Atmospheric Conditions Over Polar Sea-Ice

    NASA Astrophysics Data System (ADS)

    Gryanik, Vladimir M.; Lüpkes, Christof

    2018-02-01

    In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid iteration, required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations that are presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as a function of Rib and roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with a good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the new proposed parametrizations in models are discussed.
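
    The stability parameter these non-iterative parametrizations take as input can be sketched directly. This is the standard bulk formula under assumed values; the SHEBA-based transfer-coefficient fits themselves are given in the paper.

    ```python
    def bulk_richardson(theta_sfc, theta_air, wind_speed, height, g=9.81):
        """Bulk Richardson number over a layer of depth `height`:
        Rib = g * z * (theta_air - theta_sfc) / (theta_mean * U**2).
        Positive Rib marks stable stratification (air warmer than the
        surface), the regime the non-iterative coefficients target."""
        theta_mean = 0.5 * (theta_air + theta_sfc)
        return g * height * (theta_air - theta_sfc) / (theta_mean * wind_speed ** 2)
    ```

    Because Rib is built from resolved model-level quantities, transfer coefficients expressed as functions of Rib and the roughness lengths avoid the iteration that solving the MOST equations otherwise requires.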

  16. Parametrization of DFTB3/3OB for Magnesium and Zinc for Chemical and Biological Applications

    PubMed Central

    2015-01-01

    We report the parametrization of the approximate density functional theory, DFTB3, for magnesium and zinc for chemical and biological applications. The parametrization strategy follows that established in previous work that parametrized several key main group elements (O, N, C, H, P, and S). This 3OB set of parameters can thus be used to study many chemical and biochemical systems. The parameters are benchmarked using both gas-phase and condensed-phase systems. The gas-phase results are compared to DFT (mostly B3LYP), ab initio (MP2 and G3B3), and PM6, as well as to a previous DFTB parametrization (MIO). The results indicate that DFTB3/3OB is particularly successful at predicting structures, including rather complex dinuclear metalloenzyme active sites, while being semiquantitative (with a typical mean absolute deviation (MAD) of ∼3–5 kcal/mol) for energetics. Single-point calculations with high-level quantum mechanics (QM) methods generally lead to very satisfying (a typical MAD of ∼1 kcal/mol) energetic properties. DFTB3/MM simulations for solution and two enzyme systems also lead to encouraging structural and energetic properties in comparison to available experimental data. The remaining limitations of DFTB3, such as the treatment of interaction between metal ions and highly charged/polarizable ligands, are also discussed. PMID:25178644

  17. Stress concentration factors at saddle and crown positions on the central brace of two-planar welded CHS DKT-connections

    NASA Astrophysics Data System (ADS)

    Ahmadi, Hamid; Lotfollahi-Yaghin, Mohammad Ali; Aminfar, Mohammad H.

    2012-03-01

    A set of parametric stress analyses was carried out for two-planar tubular DKT-joints under different axial loading conditions. The analysis results were used to present general remarks on the effects of the geometrical parameters on stress concentration factors (SCFs) at the inner saddle, outer saddle, and crown positions on the central brace. Based on the results of finite element (FE) analysis and through nonlinear regression analysis, a new set of SCF parametric equations was established for fatigue design purposes. An assessment study of the equations was conducted against the experimental data and the original SCF database, and the satisfaction of the acceptance criteria proposed by the UK Department of Energy (UK DoE) was checked. Results of the parametric study showed that remarkably large differences exist between the SCF values in a multi-planar DKT-joint and the corresponding SCFs in an equivalent uni-planar KT-joint having the same geometrical properties. It can be clearly concluded from this observation that using the equations proposed for uni-planar KT-connections to compute the SCFs in multi-planar DKT-joints will lead to considerably under-predicted or over-predicted results. Hence, it is necessary to develop SCF formulae specifically designed for multi-planar DKT-joints. Good results of the equation assessment according to the UK DoE acceptance criteria, high values of correlation coefficients, and the satisfactory agreement between the predictions of the proposed equations and the experimental data support the accuracy of the equations. Therefore, the developed equations can be reliably used for fatigue design of offshore structures.

  18. An acoustic method for predicting relative strengths of cohesive sediment deposits

    NASA Astrophysics Data System (ADS)

    Reed, A. H.; Sanders, W. M.

    2017-12-01

    Cohesive sediment dynamics are fundamentally determined by sediment mineralogy, organic matter composition, ionic strength of water, and currents. These factors work to bind the cohesive sediments and to determine depositional rates. Once deposited, the sediments exhibit a nonlinear response to stress and develop increases in shear strength. Shear strength is critically important in resuspension, transport, creep, and failure predictions. Typically, shear strength is determined by point measurements, either indirectly from free-fall penetrometers or directly on cores with a shear vane, and these values are then used to interpolate over larger areas. Remote determination of these properties would provide continuous coverage, yet it has proven difficult with sonar systems. Recently, findings from an acoustic study on cohesive sediments in a laboratory setting suggest that cohesive sediments may be differentiated using parametric acoustics; this method pulses two primary frequencies into the sediment, and the resultant difference frequency is used to determine the degree of acoustic nonlinearity within the sediment. In this study, two marine clay species, kaolinite and montmorillonite, and two biopolymers, guar gum and xanthan gum, were mixed to make nine different samples. The samples were evaluated in a parametric acoustic measurement tank, and the quadratic nonlinearity coefficient (beta) was determined. Beta was correlated with the cation exchange capacity (CEC), an indicator of shear strength. The results indicate that increased acoustic nonlinearity correlates with increased CEC. These laboratory measurements indicate that the correlation may be used to evaluate geotechnical properties of cohesive sediments and may provide a means to predict sediment weakness in subaqueous environments.

  19. Modulation of precipitation by conditional symmetric instability release

    NASA Astrophysics Data System (ADS)

    Glinton, Michael R.; Gray, Suzanne L.; Chagnon, Jeffrey M.; Morcrette, Cyril J.

    2017-03-01

    Although many theoretical and observational studies have investigated the mechanism of conditional symmetric instability (CSI) release and associated it with mesoscale atmospheric phenomena such as frontal precipitation bands, cloud heads in rapidly developing extratropical cyclones, and sting jets, its climatology and contribution to precipitation have not been extensively documented. The aim of this paper is to quantify the contribution of CSI release, yielding slantwise convection, to climatological precipitation accumulations for the North Atlantic and western Europe. Case studies reveal that CSI release could be common along cold fronts of mature extratropical cyclones, and the North Atlantic storm track is found to be a region with large CSI according to two independent CSI metrics. Correlations of CSI with accumulated precipitation are also large in this region, and CSI release is inferred to be occurring about 20% of the total time over depths of over 1 km. We conclude that the inability of current global weather forecast and climate prediction models to represent CSI release (due to insufficient resolution and the lack of subgrid parametrization schemes) may lead to errors in precipitation distributions, particularly in the region of the North Atlantic storm track.

  20. Sediment reallocations due to erosive rainfall events in the Three Gorges Reservoir Area, Central China

    NASA Astrophysics Data System (ADS)

    Stumpf, Felix; Goebes, Philipp; Schmidt, Karsten; Schindewolf, Marcus; Schönbrodt-Stitt, Sarah; Wadoux, Alexandre; Xiang, Wei; Scholten, Thomas

    2017-04-01

    Soil erosion by water poses a major threat to the Three Gorges Reservoir Area in China. A detailed assessment of soil conservation measures requires a tool that spatially identifies sediment reallocations due to rainfall-runoff events in catchments. We applied EROSION 3D as a physically based soil erosion and deposition model in a small mountainous catchment. Generally, we aim to provide a methodological frame that facilitates the model parametrization in a data-scarce environment and to identify sediment sources and deposits. We used digital soil mapping techniques to generate spatially distributed soil property information for parametrization. For model calibration and validation, we continuously monitored rainfall, runoff, and sediment yield in the catchment for a period of 12 months. The model performed well for large events (sediment yield > 1 Mg), with an average individual model error of 7.5%, while small events showed an average error of 36.2%. We focused on the large events to evaluate reallocation patterns. Erosion occurred in 11.1% of the study area, with an average erosion rate of 49.9 Mg ha−1. Erosion mainly occurred on crop rotation areas, with a spatial proportion of 69.2% for 'corn-rapeseed' and 69.1% for 'potato-cabbage'. Deposition occurred on 11.0% of the study area: forested areas (9.7%), infrastructure (41.0%), cropland (corn-rapeseed: 13.6%, potato-cabbage: 11.3%), and grassland (18.4%) were affected. Because the vast majority of annual sediment yields (80.3%) were associated with a few large erosive events, the modelling approach provides a useful tool to spatially assess soil erosion control and conservation measures.

  1. Influence of management and environment on Australian wheat: information for sustainable intensification and closing yield gaps

    NASA Astrophysics Data System (ADS)

    Bryan, B. A.; King, D.; Zhao, G.

    2014-04-01

    In the future, agriculture will need to produce more, from less land, more sustainably. But currently, in many places, actual crop yields are below those attainable. We quantified the ability of agricultural management to increase wheat yields across 179 Mha of potentially arable land in Australia. Using the Agricultural Production Systems Simulator (APSIM), we simulated the impact on wheat yield of 225 fertilization and residue management scenarios at high spatial, temporal, and agronomic resolution from 1900 to 2010. The influence of management and environmental variables on wheat yield was then assessed using Spearman’s non-parametric correlation test with bootstrapping. While residue management showed little correlation, fertilization strongly increased wheat yield up to around 100 kg N ha−1 yr−1. However, this effect was highly dependent on the key environmental variables of rainfall, temperature, and soil water-holding capacity. The influence of fertilization on yield was stronger in cooler, wetter climates and in soils with greater water-holding capacity. We conclude that the effectiveness of management intensification in increasing wheat yield is highly dependent upon local climate and soil conditions. We provide context-specific information on the yield benefits of fertilization to support adaptive agronomic decision-making and contribute to the closure of yield gaps. We also suggest that future assessments consider the economic and environmental sustainability of management intensification for closing yield gaps.
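
    The correlation-with-bootstrapping step can be sketched in plain Python. This is an illustrative sketch only (tie handling in the ranking is omitted, and the study's actual analysis pipeline is not shown in the abstract).

    ```python
    import random

    def rank(v):
        """1-based ranks of a sequence (no tie handling in this sketch)."""
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos + 1.0
        return r

    def spearman(x, y):
        """Spearman's rho: Pearson correlation of the rank vectors."""
        rx, ry = rank(x), rank(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        den = (sum((a - mx) ** 2 for a in rx)
               * sum((b - my) ** 2 for b in ry)) ** 0.5
        return num / den

    def bootstrap_spearman(x, y, n_boot=1000, seed=0):
        """Percentile 95% confidence interval for rho by resampling pairs."""
        rng = random.Random(seed)
        stats = []
        for _ in range(n_boot):
            idx = [rng.randrange(len(x)) for _ in range(len(x))]
            xb, yb = [x[i] for i in idx], [y[i] for i in idx]
            if len(set(xb)) > 1 and len(set(yb)) > 1:
                stats.append(spearman(xb, yb))
        stats.sort()
        return stats[int(0.025 * len(stats))], stats[int(0.975 * len(stats))]
    ```

    The bootstrap interval conveys how robust a management-yield correlation is to the particular scenarios sampled, without assuming any parametric distribution.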

  2. Robust human body model injury prediction in simulated side impact crashes.

    PubMed

    Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D

    2016-01-01

    This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using Toyota RAV4 (bullet vehicle) and Ford Taurus (struck vehicle) FE models and a validated HBM, the Total HUman Model for Safety (THUMS). Three bullet vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age was strongly correlated with rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 best crush-profile-matching simulations and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk for pelvic injury and high risk for thoracic injury, rib fractures and high lung strains, with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.
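
    A Latin hypercube design over the five varied parameters might look like the following sketch. The parameter names, bounds, and units are illustrative assumptions, not values from the study:

```python
# Sketch: a 120-run Latin hypercube design over five crash parameters.
# Bounds are hypothetical; the study does not report these exact ranges.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=5, seed=1)
unit = sampler.random(n=120)          # 120 points on the unit hypercube

# Scale to physical ranges: speed (km/h), impact location (m), impact
# angle (deg), seat position (m), occupant age (yr) -- assumed bounds.
lower = [20, -0.3, 45, 0.0, 20]
upper = [60,  0.3, 90, 0.2, 80]
design = qmc.scale(unit, lower, upper)
print(design.shape)                   # one row per simulated crash
```

    Latin hypercube sampling stratifies each parameter axis, so 120 runs cover the 5-dimensional space far more evenly than 120 independent random draws would.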

  3. Sparkle model for AM1 calculation of lanthanide complexes: improved parameters for europium.

    PubMed

    Rocha, Gerd B; Freire, Ricardo O; Da Costa, Nivan B; De Sá, Gilberto F; Simas, Alfredo M

    2004-04-05

    In the present work, we sought to improve our sparkle model for the calculation of lanthanide complexes, SMLC, in various ways: (i) inclusion of the europium atomic mass, (ii) reparametrization of the model within AM1 from a new response function including all distances of the coordination polyhedron for tris(acetylacetonate)(1,10-phenanthroline) europium(III), (iii) implementation of the model in the software package MOPAC93r2, and (iv) inclusion of spherical Gaussian functions in the expression which computes the core-core repulsion energy. The parametrization results indicate that SMLC II is superior to the previous version of the model because Gaussian functions proved essential for a better description of the geometries of the complexes. In order to validate our parametrization, we carried out calculations on 96 europium(III) complexes, selected from the Cambridge Structural Database 2003, and compared our predicted ground state geometries with the experimental ones. Our results show that this new parametrization of the SMLC model, with the inclusion of spherical Gaussian functions in the core-core repulsion energy, is better capable of predicting the Eu-ligand distances than the previous version. The unsigned mean error for all Eu-L interatomic distances in all 96 complexes, which for the original SMLC is 0.3564 Å, is lowered to 0.1993 Å when the model is parametrized with the inclusion of two Gaussian functions. Our results also indicate that this model is more applicable to europium complexes with beta-diketone ligands. As such, we conclude that this improved model can be considered a powerful tool for the study of lanthanide complexes and their applications, such as the modeling of light conversion molecular devices.

  4. Modeling water yield response to forest cover changes in northern Minnesota

    Treesearch

    S.C. Bernath; E.S. Verry; K.N. Brooks; P.F. Ffolliott

    1982-01-01

    A water yield model (TIMWAT) has been developed to predict changes in water yield following changes in forest cover in northern Minnesota. Two versions of the model exist; one predicts changes in water yield as a function of gross precipitation and time after clearcutting. The second version predicts changes in water yield due to changes in above-ground biomass...

  5. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  6. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  7. Parametric Instability Rates in Periodically Driven Band Systems

    NASA Astrophysics Data System (ADS)

    Lellouch, S.; Bukov, M.; Demler, E.; Goldman, N.

    2017-04-01

    In this work, we analyze the dynamical properties of periodically driven band models. Focusing on the case of Bose-Einstein condensates, and using a mean-field approach to treat interparticle collisions, we identify the origin of dynamical instabilities arising from the interplay between the external drive and interactions. We present a widely applicable generic numerical method to extract instability rates and link parametric instabilities to uncontrolled energy absorption at short times. Based on the existence of parametric resonances, we then develop an analytical approach within Bogoliubov theory, which quantitatively captures the instability rates of the system and provides an intuitive picture of the relevant physical processes, including an understanding of how transverse modes affect the formation of parametric instabilities. Importantly, our calculations demonstrate an agreement between the instability rates determined from numerical simulations and those predicted by theory. To determine the validity regime of the mean-field analysis, we compare the latter to the weakly coupled conserving approximation. The tools developed and the results obtained in this work are directly relevant to present-day ultracold-atom experiments based on shaken optical lattices and are expected to provide insightful guidance in the quest for Floquet engineering.

  8. Parametric study of a passive solar-heated house with special attention on evaluating occupant thermal comfort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emery, A.F.; Heerwage, D.R.; Kippehan, C.J.

    A parametric study has been conducted of passive heating devices that are to be used to provide environmental conditioning for a single-family house. This study was performed using the thermal simulation computer program UWENSOL. Climatic data used in this analysis were for Yokohama, Japan, which has a subtropical humid climate similar to that of Washington, D.C. (in terms of winter air temperatures and useful radiation). Initial studies considered the use of different wall thicknesses, glazing types, and orientations for a Trombe wall and alternate storage quantities for a walk-in greenhouse. Employing a number of comparative parametric studies, an economical and efficient combination of devices was selected. Then, using a computer routine COMFORT, which is based on the Fanger Comfort Equation, another series of parametric analyses was performed to evaluate the degree of thermal comfort for the occupants of the house. The results of these analyses demonstrated that an averaged Predicted Mean Vote of less than 0.3 from a thermally neutral condition could be maintained and that less than 10% of all occupants of such a passively heated house would be thermally uncomfortable.

  9. Parametric Decay Instability of Near-Acoustic Waves in Fluid and Kinetic Regimes

    NASA Astrophysics Data System (ADS)

    Affolter, M.; Anderegg, F.; Driscoll, C. F.; Valentini, F.

    2016-10-01

    We present quantitative measurements of parametric wave-wave coupling rates and decay instabilities in the range 10 meV Δω /2. In contrast, at higher temperatures, the mz = 2 wave is more unstable. The instability threshold is reduced from the cold fluid prediction as the plasma temperature is increased, which is in qualitative agreement with Vlasov simulations, but is not yet understood theoretically. Supported by DOE/HEDLP Grant DE-SC0008693 and DOE Fusion Energy Science Postdoctoral Research Program administered by the Oak Ridge Institute for Science and Education.

  10. Design of a completely model free adaptive control in the presence of parametric, non-parametric uncertainties and random control signal delay.

    PubMed

    Tutsoy, Onder; Barkana, Duygun Erol; Tugal, Harun

    2018-05-01

    In this paper, an adaptive controller is developed for discrete-time linear systems that takes into account parametric uncertainty, internal-external non-parametric random uncertainties, and time-varying control signal delay. Additionally, the proposed adaptive control is designed in such a way that it is completely model free. Even though these properties are studied separately in the literature, they have not been taken into account all together in the adaptive control literature. The Q-function is used to estimate the long-term performance of the proposed adaptive controller. The control policy is generated based on the long-term predicted value, and this policy searches for an optimal stabilizing control signal for uncertain and unstable systems. The derived control law does not require an initial stabilizing control assumption, as do those in the recent literature. Learning error, control signal convergence, the minimized Q-function, and instantaneous reward are analyzed to demonstrate the stability and effectiveness of the proposed adaptive controller in a simulation environment. Finally, key insights on parameter convergence of the learning and control signals are provided. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
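
    A minimal illustration of the model-free Q-function idea is tabular Q-learning on a toy discretized scalar plant. The plant, grids, and learning constants below are invented for illustration and are not the controller proposed in the paper; they only show how a policy can be learned from rewards without ever using the plant model:

```python
# Sketch: model-free Q-function learning on a toy unstable scalar plant
# x' = a*x + b*u + noise. The controller never reads a or b directly.
import numpy as np

rng = np.random.default_rng(2)
states = np.linspace(-2, 2, 21)
actions = np.linspace(-1, 1, 11)
Q = np.zeros((len(states), len(actions)))
alpha, gamma, eps = 0.2, 0.9, 0.2
a_true, b_true = 1.1, 1.0           # plant parameters, hidden from learner

def nearest(grid, v):
    return int(np.argmin(np.abs(grid - v)))

for episode in range(400):
    x = rng.uniform(-2, 2)
    for step in range(30):
        s = nearest(states, x)
        ai = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        u = actions[ai]
        x_next = float(np.clip(a_true * x + b_true * u + rng.normal(0, 0.02), -2, 2))
        r = -(x_next ** 2 + 0.1 * u ** 2)     # instantaneous reward
        s2 = nearest(states, x_next)
        # model-free temporal-difference update of the Q-function
        Q[s, ai] += alpha * (r + gamma * Q[s2].max() - Q[s, ai])
        x = x_next

# The greedy policy from the learned Q-function regulates x toward 0
x = 1.5
for _ in range(30):
    x = float(np.clip(a_true * x + b_true * actions[int(np.argmax(Q[nearest(states, x)]))], -2, 2))
```

    The open-loop plant is unstable (a = 1.1 > 1), yet the greedy policy extracted from Q stabilizes it, which is the essence of searching for a stabilizing control signal through long-term predicted value alone.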

  11. On-line prediction of yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score using the MARC beef carcass image analysis system.

    PubMed

    Shackelford, S D; Wheeler, T L; Koohmaraie, M

    2003-01-01

    The present experiment was conducted to evaluate the ability of the U.S. Meat Animal Research Center's beef carcass image analysis system to predict calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score under commercial beef processing conditions. In two commercial beef-processing facilities, image analysis was conducted on 800 carcasses on the beef-grading chain immediately after the conventional USDA beef quality and yield grades were applied. Carcasses were blocked by plant and observed calculated yield grade. The carcasses were then separated, with 400 carcasses assigned to a calibration data set that was used to develop regression equations, and the remaining 400 carcasses assigned to a prediction data set used to validate the regression equations. Prediction equations, which included image analysis variables and hot carcass weight, accounted for 90, 88, 90, 88, and 76% of the variation in calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score, respectively, in the prediction data set. In comparison, the official USDA yield grade as applied by online graders accounted for 73% of the variation in calculated yield grade. The technology described herein could be used by the beef industry to more accurately determine beef yield grades; however, this system does not provide an accurate enough prediction of marbling score to be used without USDA grader interaction for USDA quality grading.
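
    The calibrate-then-validate workflow (fit on 400 carcasses, report variance explained on the held-out 400) can be sketched as below. The carcass predictors and regression coefficients are simulated stand-ins, not MARC image-analysis data:

```python
# Sketch: fit a yield-grade regression on a calibration half and
# validate R^2 on a prediction half. All data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 800
hcw = rng.normal(350, 40, n)        # hot carcass weight, kg (assumed)
fat = rng.normal(1.2, 0.3, n)       # image-derived fat depth, cm (assumed)
lma = rng.normal(85, 10, n)         # longissimus muscle area, cm^2 (assumed)
yg = 2.0 + 2.5 * fat - 0.02 * lma + 0.004 * hcw + rng.normal(0, 0.25, n)

X = np.column_stack([np.ones(n), hcw, fat, lma])
cal, pred = slice(0, 400), slice(400, 800)

# least-squares fit on the calibration set only
beta, *_ = np.linalg.lstsq(X[cal], yg[cal], rcond=None)

# fraction of variation accounted for on the held-out prediction set
resid = yg[pred] - X[pred] @ beta
r2 = 1 - resid.var() / yg[pred].var()
print(round(r2, 3))
```

    Validating on carcasses that played no part in fitting is what makes the reported 90% figure credible; in-sample R^2 would overstate the system's accuracy.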

  12. An image based method for crop yield prediction using remotely sensed and crop canopy data: the case of Paphos district, western Cyprus

    NASA Astrophysics Data System (ADS)

    Papadavid, G.; Hadjimitsis, D.

    2014-08-01

    Developments in remote sensing techniques have provided the opportunity to optimize yields in agricultural production and, moreover, to predict the forthcoming yield. Yield prediction plays a vital role in agricultural policy and provides useful data to policy makers. In this context, crop and soil parameters, along with the NDVI, which are valuable sources of information, were analysed statistically to test (a) whether Durum wheat yield can be predicted and (b) when the actual time-window is to predict the yield in the district of Paphos, where Durum wheat is the basic cultivation and supports the rural economy of the area. Fifteen plots cultivated with Durum wheat by the Agricultural Research Institute of Cyprus for research purposes in the area of interest were observed for three years to derive the necessary data. Statistical and remote sensing techniques were then applied to derive and map a model that can predict the yield of Durum wheat in this area. Indeed, the semi-empirical model developed for this purpose, with a very high correlation coefficient (R2 = 0.886), has shown in practice that it can predict yields very well. Student's t-test revealed no statistically significant difference between predicted and actual yield values. The developed model can and will be further elaborated with more parameters and applied to other crops in the near future.
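
    The paired comparison of predicted versus observed yields can be sketched with a Student's t-test as below. The 15 plot values are simulated, not the Paphos measurements:

```python
# Sketch: paired t-test checking that model predictions and observed
# yields do not differ systematically. Values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
observed = rng.normal(3.0, 0.5, 15)              # t/ha, 15 synthetic plots
predicted = observed + rng.normal(0, 0.15, 15)   # unbiased model error

t_stat, p_value = stats.ttest_rel(predicted, observed)
print(round(float(p_value), 3))
```

    A large p-value here would support the paper's conclusion of no statistically significant difference; with only 15 plots, though, the test has limited power, so this is a necessary rather than sufficient check.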

  13. Strategy For Yield Control And Enhancement In VLSI Wafer Manufacturing

    NASA Astrophysics Data System (ADS)

    Neilson, B.; Rickey, D.; Bane, R. P.

    1988-01-01

    In most fully utilized integrated circuit (IC) production facilities, profit is very closely linked with yield. In even the most controlled manufacturing environments, defects due to foreign material are still a major contributor to yield loss. Ideally, an IC manufacturer will have ample engineering resources to address any problem that arises. In the real world, staffing limitations require that some tasks be left undone and potential benefits left unrealized. Therefore, it is important to prioritize problems in a manner that will give the maximum benefit to the manufacturer. When offered a smorgasbord of problems to solve, most people (engineers included) will start with what is most interesting or most comfortable to work on. By providing a system that accurately predicts the impact of a wide variety of defect types, a rational method of prioritizing engineering effort can be established. To that effect, a program was developed to determine and rank the major yield detractors in a mixed analog/digital FET manufacturing line. The two classical methods of determining yield detractors are chip failure analysis and defect monitoring on drop-in test die. Both of these methods have shortcomings: 1) chip failure analysis is painstaking and very time consuming, so the sample size is very small; 2) drop-in test die are usually designed for device parametric analysis rather than defect analysis, and providing enough wafer real estate for meaningful defect analysis would render the wafer worthless for production. To avoid these problems, a defect monitor was designed that provided enough area to detect defects at the same rate as, or better than, the NMOS product die whose yield was to be optimized. The defect monitor was comprehensive and electrically testable using such equipment as the Prometrix LM25 and other digital testers. This enabled the quick accumulation of data which could be handled statistically and mapped individually.
By scaling the defect densities found on the monitors to the known sensitivities of the product wafer, the defect types were ranked by defect-limiting yield. (Limiting yield is the resultant product yield if there were no failure mechanisms other than the type being considered.) These results were then compared to the product failure analysis results to verify that the monitor was finding the same types of defects, in the same proportion, that were troubling our product. Finally, the major defect types were isolated and reduced using the short-loop capability of the monitor.
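
    The limiting-yield idea can be illustrated with a simple Poisson yield model, Y = exp(-D*A). The die area and defect densities below are hypothetical, and the paper does not state which yield model it used:

```python
# Sketch: ranking defect types by limiting yield under an assumed
# Poisson model Y = exp(-D * A). Densities and area are invented.
import math

die_area_cm2 = 0.5                   # assumed product die area
defect_density = {                   # defects per cm^2 (hypothetical)
    "metal shorts": 0.8,
    "gate oxide pinholes": 0.3,
    "contact opens": 0.1,
}

# Limiting yield: product yield if this defect type were the only
# failure mechanism present.
limiting_yield = {
    kind: math.exp(-d * die_area_cm2) for kind, d in defect_density.items()
}

# Rank defect types by how much yield each one alone would cost
for kind, y in sorted(limiting_yield.items(), key=lambda kv: kv[1]):
    print(f"{kind}: limiting yield {y:.1%}")
```

    The defect type with the lowest limiting yield is the rational first target for engineering effort, which is exactly the prioritization argument the abstract makes.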

  14. Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.

    2014-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it gives an estimate of the spread or uncertainty in CME arrival-time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival-time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival-time predictions for 14 runs (half). The average arrival-time prediction was computed for each of the 28 ensembles predicting hits; using the actual arrival times, an average absolute error of 10.0 hours (RMSE = 11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival-time predictions include the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations.
Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival-time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
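
    Scoring an ensemble of arrival-time predictions (hit-rate of the observed arrival falling inside the ensemble range, plus MAE and RMSE of the ensemble mean) can be sketched as follows, using synthetic arrival-time offsets rather than actual WSA-ENLIL+Cone output:

```python
# Sketch: verification metrics for ensemble arrival-time forecasts.
# Rows are events, columns are ensemble members; values are predicted
# arrival errors in hours. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_events, n_members = 28, 48
# per-event bias plus per-member spread (both invented)
ensembles = rng.normal(0, 8, (n_events, n_members)) + rng.normal(0, 6, (n_events, 1))
observed = np.zeros(n_events)        # define truth as zero offset

means = ensembles.mean(axis=1)
in_range = ((ensembles.min(axis=1) <= observed) &
            (observed <= ensembles.max(axis=1)))
mae = np.abs(means - observed).mean()
rmse = np.sqrt(((means - observed) ** 2).mean())
print(int(in_range.sum()), round(float(mae), 1), round(float(rmse), 1))
```

    Separating the range-coverage statistic from the mean-error statistics matters: a well-spread ensemble can capture the observation even when its mean prediction is hours off, and vice versa.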

  15. Perceptual reversals during binocular rivalry: ERP components and their concomitant source differences.

    PubMed

    Britz, Juliane; Pitts, Michael A

    2011-11-01

    We used an intermittent stimulus presentation to investigate event-related potential (ERP) components associated with perceptual reversals during binocular rivalry. The combination of spatiotemporal ERP analysis with source imaging and statistical parametric mapping of the concomitant source differences yielded differences in three time windows: reversals showed increased activity in early visual (∼120 ms) and in inferior frontal and anterior temporal areas (∼400-600 ms) and decreased activity in the ventral stream (∼250-350 ms). The combination of source imaging and statistical parametric mapping suggests that these differences were due to differences in generator strength and not generator configuration, unlike the initiation of reversals in right inferior parietal areas. These results are discussed within the context of the extensive network of brain areas that has been implicated in the initiation, implementation, and appraisal of bistable perceptual reversals. Copyright © 2011 Society for Psychophysiological Research.

  16. Fission properties of Po isotopes in different macroscopic-microscopic models

    NASA Astrophysics Data System (ADS)

    Bartel, J.; Pomorski, K.; Nerlo-Pomorska, B.; Schmitt, Ch

    2015-11-01

    Fission-barrier heights of nuclei in the Po isotopic chain are investigated in several macroscopic-microscopic models. Using the Yukawa-folded single-particle potential, the Lublin-Strasbourg drop (LSD) model, the Strutinsky shell-correction method for the shell corrections, and BCS theory for the pairing contributions, fission-barrier heights are calculated and found to be in quite good agreement with the experimental data. This, however, turns out to be the case only when the underlying macroscopic, liquid-drop (LD) type theory is well chosen. Together with the LSD approach, different LD parametrizations proposed by Moretto et al are tested. Four deformation parameters, describing respectively the elongation, neck formation, reflection asymmetry, and non-axiality of the nuclear shape, and thus defining the so-called modified Funny Hills shape parametrization, are used in the calculation. The present study clearly demonstrates that nuclear fission-barrier heights constitute a challenging and selective tool to discern between such different macroscopic approaches.

  17. Parametric study in weld mismatch of longitudinally welded SSME HPFTP inlet

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Spanyer, K. L.; Brunair, R. M.

    1991-01-01

    Welded joints are an essential part of pressure vessels such as the Space Shuttle Main Engine (SSME) turbopumps. Defects produced in the welding process can be detrimental to weld performance. Recently, review of the SSME high pressure fuel turbopump (HPFTP) titanium inlet X-rays revealed several weld discrepancies such as penetrameter density issues, film processing discrepancies, weld width discrepancies, porosity, lack of fusion, and weld offsets. Currently, the sensitivity of welded structures to defects is of concern. From a fatigue standpoint, weld offset may have a serious effect since local yielding, in general, aggravates cyclic stress effects. Therefore, the weld offset issue is considered here. Using the finite element method and mathematical formulations, parametric studies were conducted to determine the influence of weld offsets and a variation of weld widths in longitudinally welded cylindrical structures with equal wall thickness on both sides of the joint. From the study, the finite element results and theoretical solutions are presented.

  18. Comment on “Breakdown of the expansion of finite-size corrections to the hydrogen Lamb shift in moments of charge distribution”

    DOE PAGES

    Arrington, J.

    2016-02-23

    In a recent study, Hagelstein and Pascalutsa [F. Hagelstein and V. Pascalutsa, Phys. Rev. A 91, 040502 (2015)] examine the error associated with an expansion of proton structure corrections to the Lamb shift in terms of moments of the charge distribution. They propose a small modification to a conventional parametrization of the proton's charge form factor and show that this can resolve the proton radius puzzle. However, while the size of the bump they add to the form factor is small, it is large compared to the total proton structure effects in the initial parametrization, yielding a final form factor that is unphysical. Reducing their modification to the point where the resulting form factor is physical does not allow for a resolution of the radius puzzle.

  19. Some exact velocity profiles for granular flow in converging hoppers

    NASA Astrophysics Data System (ADS)

    Cox, Grant M.; Hill, James M.

    2005-01-01

    Gravity flow of granular materials through hoppers occurs in many industrial processes. For an ideal cohesionless granular material, which satisfies the Coulomb-Mohr yield condition, the number of known analytical solutions is limited. However, for the special case of the angle of internal friction δ equal to ninety degrees, there exist exact parametric solutions for the governing coupled ordinary differential equations for both two-dimensional wedges and three-dimensional cones, both of which involve two arbitrary constants of integration. These solutions are the only known analytical solutions of this generality. Here, we utilize the double-shearing theory of granular materials to determine the velocity field corresponding to these exact parametric solutions for the two problems of gravity flow through converging wedge and conical hoppers. An independent numerical solution for other angles of internal friction is shown to coincide with the analytical solution.

  20. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
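
    The Monte Carlo parameter search can be caricatured as below: sample muscle parameter sets, score a simulated activation against the physiologically observed target, and keep the best match. The surrogate "simulator" and parameter ranges are invented; the study used LifeModeler, not this toy function:

```python
# Sketch: Monte Carlo search for muscle parameters whose simulated
# activation matches an observed target. Everything here is a stand-in.
import numpy as np

rng = np.random.default_rng(6)
target_activation = 0.60            # e.g. desired rectus femoris peak

def simulate_activation(max_force, opt_length):
    # toy surrogate for the musculoskeletal solver (not LifeModeler)
    return float(np.clip(0.9 * opt_length / max_force, 0, 1))

best_err, best_params = np.inf, None
for _ in range(1000):
    # sample scaled (max force, optimal length) parameters uniformly
    p = (rng.uniform(0.5, 2.0), rng.uniform(0.5, 2.0))
    err = abs(simulate_activation(*p) - target_activation)
    if err < best_err:
        best_err, best_params = err, p
print(best_err)
```

    In the actual study the scoring would compare whole activation patterns across multiple muscles, and combinatorial reduction would prune the sampled space, but the accept-the-best-match loop is the same idea.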

  1. Empirical yield tables for spruce-fir cut-over lands in the Northeast

    Treesearch

    Marinus Westveld

    1953-01-01

    Predicting future timber yields is an unavoidable task for the forest manager who is interested in growing timber as a long-term investment. He must predict yields as a basis for formulating management plans and policies. And he must predict yields from lands that differ greatly in productivity.

  2. Optimal Operation of a Josephson Parametric Amplifier for Vacuum Squeezing

    NASA Astrophysics Data System (ADS)

    Malnou, M.; Palken, D. A.; Vale, Leila R.; Hilton, Gene C.; Lehnert, K. W.

    2018-04-01

    A Josephson parametric amplifier (JPA) can create squeezed states of microwave light, lowering the noise associated with certain quantum measurements. We experimentally study how the JPA's pump influences the phase-sensitive amplification and deamplification of a coherent tone's amplitude when that amplitude is commensurate with vacuum fluctuations. We predict and demonstrate that, by operating the JPA with a single current pump whose power is greater than the value that maximizes gain, the amplifier distortion is reduced and, consequently, squeezing is improved. Optimizing the singly pumped JPA's operation in this fashion, we directly observe 3.87 ±0.03 dB of vacuum squeezing over a bandwidth of 30 MHz.

  3. A parametric study of harmonic rotor hub loads

    NASA Technical Reports Server (NTRS)

    He, Chengjian

    1993-01-01

    A parametric study of vibratory rotor hub loads in a nonrotating system is presented. The study is based on a CAMRAD/JA model constructed for the GBH (Growth Version of Blackhawk Helicopter) Mach-scaled wind tunnel rotor model with high blade twist (-16 deg). The theoretical hub load predictions are validated by correlation with available measured data. Effects of various blade aeroelastic design changes on the harmonic nonrotating frame hub loads at both low and high forward flight speeds are investigated. The study aims to illustrate some of the physical mechanisms for change in the harmonic rotor hub loads due to blade design variations.

  4. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics published by John Wiley & Sons Ltd.
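
    A two-component TN-LN mixture density of the kind used by the model can be sketched as below. The weight and distribution parameters are invented rather than estimated by scoring-rule optimization:

```python
# Sketch: a weighted mixture of a zero-truncated normal and a log-normal
# as a wind-speed predictive density. Parameters are illustrative only.
import numpy as np
from scipy import stats

w = 0.6                                  # TN component weight (assumed)
mu_tn, sig_tn = 6.0, 2.5                 # TN location/scale, m/s (assumed)
mu_ln, sig_ln = 1.7, 0.4                 # LN parameters of log-speed (assumed)

# truncnorm takes bounds in standard units: left bound at wind speed 0
tn = stats.truncnorm(a=-mu_tn / sig_tn, b=np.inf, loc=mu_tn, scale=sig_tn)
ln = stats.lognorm(s=sig_ln, scale=np.exp(mu_ln))

def mixture_pdf(x):
    return w * tn.pdf(x) + (1 - w) * ln.pdf(x)

# sanity check: the mixture integrates to ~1 over non-negative speeds
grid = np.linspace(0.0, 60.0, 6001)
dx = grid[1] - grid[0]
total = float(np.sum(mixture_pdf(grid)) * dx)
print(round(total, 4))
```

    In the actual model, w, the TN parameters, and the LN parameters would all be linked to the ensemble forecasts and fitted by minimizing a proper scoring rule such as the CRPS over the training period.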

  5. Genomic selection across multiple breeding cycles in applied bread wheat breeding.

    PubMed

    Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann

    2016-06-01

    We evaluated genomic selection across five breeding cycles of bread wheat breeding. Bias of within-cycle cross-validation and methods for improving the prediction accuracy were assessed. The prospect of genomic selection has been frequently shown by cross-validation studies using the same genetic material across multiple environments, but studies investigating genomic selection across multiple breeding cycles in applied bread wheat breeding are lacking. We estimated the prediction accuracy of grain yield, protein content and protein yield of 659 inbred lines across five independent breeding cycles and assessed the bias of within-cycle cross-validation. We investigated the influence of outliers on the prediction accuracy and predicted protein yield by its component traits. A high average heritability was estimated for protein content, followed by grain yield and protein yield. The bias of the prediction accuracy estimated within populations from individual cycles by fivefold cross-validation was accordingly substantial for protein yield (17-712 %) and less pronounced for protein content (8-86 %). Cross-validation using the cycles as folds aimed to avoid this bias and reached a maximum prediction accuracy of r = 0.51 for protein content, r = 0.38 for grain yield and r = 0.16 for protein yield. Dropping outlier cycles increased the prediction accuracy of grain yield to r = 0.41 as estimated by cross-validation, while dropping outlier environments did not have a significant effect on the prediction accuracy. Independent validation suggests, on the other hand, that careful consideration is necessary before an outlier correction is undertaken which removes lines from the training population. Predicting protein yield by multiplying genomic estimated breeding values of grain yield and protein content raised the prediction accuracy to r = 0.19 for this derived trait.
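The "cycles as folds" cross-validation scheme contrasted with within-cycle fivefold CV can be sketched as follows. Everything here is an illustrative assumption, not the study's data or model: the marker counts, cycle sizes, simulated effects, and the closed-form ridge regression standing in for genomic prediction are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 5 "breeding cycles" of lines with marker genotypes.
n_markers = 200
beta = rng.normal(0, 0.1, n_markers)             # simulated marker effects
cycles = []
for c in range(5):
    X = rng.choice([0.0, 1.0, 2.0], size=(120, n_markers))
    y = X @ beta + rng.normal(0, 1.0, 120)       # phenotype = genetic + noise
    cycles.append((X, y))

def ridge_fit(X, y, lam=10.0):
    """Closed-form ridge regression as a simple proxy for genomic prediction."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Cross-validation using whole cycles as folds: train on 4 cycles and
# predict the held-out cycle, instead of splitting within one cycle.
accs = []
for k in range(5):
    X_tr = np.vstack([cycles[c][0] for c in range(5) if c != k])
    y_tr = np.concatenate([cycles[c][1] for c in range(5) if c != k])
    X_te, y_te = cycles[k]
    b_hat = ridge_fit(X_tr, y_tr)
    accs.append(np.corrcoef(X_te @ b_hat, y_te)[0, 1])
mean_acc = float(np.mean(accs))
```

Because each held-out cycle shares no lines with its training set, this estimate avoids the within-cycle optimism the abstract quantifies.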

  6. Random Forests for Global and Regional Crop Yield Predictions.

    PubMed

    Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung

    2016-01-01

    Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato, in comparison with multiple linear regression (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed the MLR benchmarks in all performance statistics that were compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases, whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales owing to its high accuracy and precision, ease of use, and utility in data analysis. RF may, however, lose accuracy when predicting the extreme ends of, or responses beyond the boundaries of, the training data.
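The RF-versus-MLR comparison can be reproduced in miniature on synthetic data. This is only a sketch of the evaluation design, assuming scikit-learn is available; the two "climate covariates" and the nonlinear yield response are invented, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for yield data: two covariates with a nonlinear response.
n = 1000
X = rng.uniform(-2, 2, size=(n, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def rmse(model):
    """Fit on the training split and report test RMSE."""
    model.fit(X_tr, y_tr)
    err = model.predict(X_te) - y_te
    return float(np.sqrt(np.mean(err ** 2)))

rf_rmse = rmse(RandomForestRegressor(n_estimators=200, random_state=0))
mlr_rmse = rmse(LinearRegression())
```

On a response this nonlinear, the forest's piecewise-constant fit should beat the linear benchmark, mirroring (in spirit only) the RMSE gap reported above; a linear model also illustrates the abstract's caveat in reverse, since RF cannot extrapolate beyond the covariate range it was trained on.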

  7. Correlation tracking study for meter-class solar telescope on space shuttle [solar granulation]

    NASA Technical Reports Server (NTRS)

    Smithson, R. C.; Tarbell, T. D.

    1977-01-01

    The theory and expected performance level of correlation trackers used to control the pointing of a solar telescope in space using white light granulation as a target were studied. Three specific trackers were modeled and their performance levels predicted for telescopes of various apertures. The performance of the computer model trackers on computer enhanced granulation photographs was evaluated. Parametric equations for predicting tracker performance are presented.

  8. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm due to water rights, would be exceeded less frequently for the REA ensemble average (45%) than for the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
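The contrast between an equal-weight ensemble mean and an REA-style weighted mean can be sketched with a few lines of NumPy. This is a deliberately simplified stand-in: the member values are invented, and real REA combines a model-performance (bias) criterion with the model-convergence criterion, whereas this sketch keeps only a convergence-style down-weighting of outlying members.

```python
import numpy as np

# Illustrative annual irrigation requirements (mm) from six hypothetical
# model structures; the numbers are invented, not from the study.
members = np.array([310.0, 355.0, 342.0, 398.0, 420.0, 365.0])

# Equal-weight ensemble mean.
eq_mean = float(members.mean())

# REA-style weighting (simplified): iteratively down-weight members that
# lie far from the current weighted mean ("model convergence" criterion).
weights = np.ones_like(members) / len(members)
for _ in range(50):
    center = np.sum(weights * members)
    dist = np.abs(members - center) + 1e-6    # avoid division by zero
    weights = (1.0 / dist) / np.sum(1.0 / dist)
rea_mean = float(np.sum(weights * members))
```

Members near the ensemble consensus receive larger weights, so `rea_mean` is pulled toward the cluster of agreeing models rather than toward outliers, which is the qualitative effect behind the reduced exceedance frequency reported above.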

  9. When the Single Matters more than the Group (II): Addressing the Problem of High False Positive Rates in Single Case Voxel Based Morphometry Using Non-parametric Statistics.

    PubMed

    Scarpazza, Cristina; Nichols, Thomas E; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea

    2016-01-01

    In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that family-wise false positive error rates (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM are much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which do not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM, by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (Modulated, Unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set; and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results were not dependent on the level of smoothing and modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates.
The critical implication of this finding is that VBM can be used to characterize neuroanatomical alterations in individual subjects as long as non-parametric statistics are employed.
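The logic of empirically estimating an FPR by repeated single-case-versus-controls comparisons can be illustrated with scalar data. This is not the VBM pipeline (no images, no smoothing, no family-wise correction across voxels); it only mirrors the counting design: 400 null comparisons of one disease-free "case" against 100 controls, with a rank-based nonparametric p-value.

```python
import numpy as np

rng = np.random.default_rng(1)

n_controls, n_tests, alpha = 100, 400, 0.05
false_positives = 0
for _ in range(n_tests):
    controls = rng.normal(0.0, 1.0, n_controls)
    case = rng.normal(0.0, 1.0)          # drawn from the same (null) population
    # Nonparametric p-value: rank of the case within the combined sample.
    p = (np.sum(controls >= case) + 1) / (n_controls + 1)
    if p < alpha:                        # one-sided test for an "increase"
        false_positives += 1
fpr = false_positives / n_tests
```

Under the null, this rank-based p-value is (discretely) uniform, so the empirical FPR should land near the 5% nominal level, which is the behavior the abstract reports for the non-parametric VBM analyses.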

  10. Parametric instability, inverse cascade and the range of solar-wind turbulence

    NASA Astrophysics Data System (ADS)

    Chandran, Benjamin D. G.

    2018-02-01

    In this paper, weak-turbulence theory is used to investigate the nonlinear evolution of the parametric instability in three-dimensional low-β plasmas at wavelengths much greater than the ion inertial length, under the assumption that slow magnetosonic waves are strongly damped. It is shown analytically that the parametric instability leads to an inverse cascade of Alfvén wave quanta, and several exact solutions to the wave kinetic equations are presented. The main results of the paper concern the parametric decay of Alfvén waves that initially satisfy e+ ≫ e−, where e+ and e− are the frequency (f) spectra of Alfvén waves propagating in opposite directions along the magnetic field lines. If e+ initially has a peak frequency f0 (at which e+ is maximized) and an 'infrared' scaling f^p at smaller f, then e+ acquires an f^-1 scaling throughout a range of frequencies that spreads out in both directions from f0. At the same time, e− acquires an f^-2 scaling within this same frequency range. If the plasma parameters and infrared e+ spectrum are chosen to match conditions in the fast solar wind at a heliocentric distance of 0.3 astronomical units (AU), then the nonlinear evolution of the parametric instability leads to an e+ spectrum that matches fast-wind measurements from the Helios spacecraft at 0.3 AU, including the observed f^-1 scaling at frequencies of order 10^-4 Hz. The results of this paper suggest that the f^-1 spectrum seen by Helios in the fast solar wind at these frequencies is produced in situ by parametric decay, and that the f^-1 range of e+ extends over an increasingly narrow range of frequencies as heliocentric distance decreases below 0.3 AU. This prediction will be tested by measurements from the Parker Solar Probe.

  11. Comparison of Cox's Regression Model and Parametric Models in Evaluating the Prognostic Factors for Survival after Liver Transplantation in Shiraz during 2000-2012.

    PubMed

    Adelian, R; Jamali, J; Zare, N; Ayatollahi, S M T; Pooladfar, G R; Roustaei, N

    2015-01-01

    Identification of the prognostic factors for survival in patients with liver transplantation is challenging. Various methods of survival analysis have provided different, sometimes contradictory, results from the same data. The aim was to compare Cox's regression model with parametric models for determining the independent factors predicting adult and pediatric patients' survival after liver transplantation. This study was conducted on 183 pediatric patients and 346 adults who underwent liver transplantation in Namazi Hospital, Shiraz, southern Iran. The study population included all patients undergoing liver transplantation from 2000 to 2012. The prognostic factors sex, age, Child class, initial diagnosis of the liver disease, PELD/MELD score, and pre-operative laboratory markers were selected for survival analysis. Among the 529 patients, 346 (65.4%) were adults and 183 (34.6%) were pediatric cases. Overall, the lognormal distribution was the best-fitting model for both adult and pediatric patients. Age in adults (HR=1.16, p<0.05), and weight (HR=2.68, p<0.01) and Child class B (HR=2.12, p<0.05) in pediatric patients, were the most important factors for predicting survival after liver transplantation. Adult patients younger than the mean age, and pediatric patients weighing above the mean and in Child class A (compared to those in classes B or C), had better survival. The parametric regression model is a good alternative to Cox's regression model.
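Fitting a parametric (lognormal) survival model of the kind the abstract favors amounts to maximum likelihood with right censoring: observed deaths contribute the log density, censored follow-ups contribute the log survival function. The following is a minimal sketch on simulated data without covariates; the sample size, true parameters, and censoring time are all invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)

# Simulated post-transplant survival times (months): log-normal event times
# with administrative right censoring; all values are illustrative.
n = 400
t_true = np.exp(rng.normal(3.0, 1.0, n))    # true event times
cens = 60.0                                  # fixed follow-up cutoff
time = np.minimum(t_true, cens)
event = (t_true <= cens).astype(float)       # 1 = observed death, 0 = censored

def neg_loglik(params):
    """Negative log-likelihood of a censored lognormal model."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                # keep sigma positive
    z = (np.log(time) - mu) / sigma
    ll_event = norm.logpdf(z) - np.log(sigma * time)   # density term
    ll_cens = norm.logsf(z)                            # survival term
    return -np.sum(event * ll_event + (1 - event) * ll_cens)

res = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = float(res.x[0]), float(np.exp(res.x[1]))
```

With prognostic factors, `mu` would become a linear predictor in the covariates, giving the accelerated-failure-time form that is compared against Cox's semiparametric model in the study.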

  12. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    NASA Astrophysics Data System (ADS)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of the overall performance of a diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while modelling their dependence structure with a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a joint distribution function whose marginals are all uniformly distributed; it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density parametrically by maximum likelihood estimation (MLE). We investigate the performance of the proposed method on data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
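The parametric-copula MLE step can be sketched for one concrete family. The sketch below samples from a Clayton copula and recovers its dependence parameter by maximizing the log-likelihood over a grid; the choice of family, the true parameter, and the grid are illustrative assumptions, and the NPI machinery for the marginals is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def clayton_sample(theta, n, rng):
    """Draw (u, v) pairs from a Clayton copula by conditional inversion."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

def clayton_logpdf(u, v, theta):
    """Log density of the Clayton copula with parameter theta > 0."""
    return (np.log(1.0 + theta)
            - (1.0 + theta) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(u ** (-theta) + v ** (-theta) - 1.0))

u, v = clayton_sample(2.0, 2000, rng)

# Maximum likelihood over a grid of dependence parameters.
grid = np.linspace(0.2, 6.0, 117)
loglik = [float(np.sum(clayton_logpdf(u, v, t))) for t in grid]
theta_hat = float(grid[int(np.argmax(loglik))])
```

In the paper's setting, the uniforms `u, v` would come from the two diagnostic tests' (NPI-based) marginal assessments, and the fitted copula would supply the joint structure used to evaluate the combined test.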

  13. Brain signal variability is parametrically modifiable.

    PubMed

    Garrett, Douglas D; McIntosh, Anthony R; Grady, Cheryl L

    2014-11-01

    Moment-to-moment brain signal variability is a ubiquitous neural characteristic, yet remains poorly understood. Evidence indicates that heightened signal variability can index and aid efficient neural function, but it is not known whether signal variability responds to precise levels of environmental demand, or instead whether variability is relatively static. Using multivariate modeling of functional magnetic resonance imaging-based parametric face processing data, we show here that within-person signal variability level responds to incremental adjustments in task difficulty, in a manner entirely distinct from results produced by examining mean brain signals. Using mixed modeling, we also linked parametric modulations in signal variability with modulations in task performance. We found that difficulty-related reductions in signal variability predicted reduced accuracy and longer reaction times within-person; mean signal changes were not predictive. We further probed the various differences between signal variance and signal means by examining all voxels, subjects, and conditions; this analysis of over 2 million data points failed to reveal any notable relations between voxel variances and means. Our results suggest that brain signal variability provides a systematic task-driven signal of interest from which we can understand the dynamic function of the human brain, and in a way that mean signals cannot capture. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Use of Ensemble Numerical Weather Prediction Data for Inversely Determining Atmospheric Refractivity in Surface Ducting Conditions

    NASA Astrophysics Data System (ADS)

    Greenway, D. P.; Hackett, E.

    2017-12-01

    Under certain atmospheric refractivity conditions, propagating electromagnetic (EM) waves can become trapped between the surface and the bottom of the atmosphere's mixed layer, which is referred to as surface duct propagation. The ability to predict the presence of these surface ducts offers many benefits to users and developers of sensing technologies and communication systems, because ducts significantly influence the performance of these systems. However, directly measuring or modeling a surface ducting layer is challenging due to the high spatial resolution and large spatial coverage needed to make accurate refractivity estimates for EM propagation; thus, inverse methods have become an increasingly popular way of determining atmospheric refractivity. This study uses data from the Coupled Ocean/Atmosphere Mesoscale Prediction System developed by the Naval Research Laboratory and instrumented helicopter (helo) measurements taken during the Wallops Island Field Experiment to evaluate the use of ensemble forecasts in refractivity inversions. Helo measurements and ensemble forecasts are optimized to a parametric refractivity model, and three experiments are performed to evaluate whether incorporation of ensemble forecast data aids in more timely and accurate inverse solutions using genetic algorithms. The results suggest that using optimized ensemble members as an initial population for the genetic algorithms generally enhances the accuracy and speed of the inverse solution; however, using the ensemble data to restrict the parameter search space yields mixed results. Inaccurate results are related to parameterization of the ensemble members' refractivity profiles and the subsequent extraction of the parameter ranges used to limit the search space.
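The benefit of seeding a genetic algorithm's initial population with forecast-derived guesses can be sketched on a toy inverse problem. The parametric "profile", its true parameters, the misfit, and the GA variant (elitist selection plus Gaussian mutation) are all invented stand-ins, not the study's refractivity model or algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy inverse problem: recover 3 parameters of a hypothetical profile
# from noisy "observations"; the functional form and values are invented.
def profile(params, z):
    a, b, c = params
    return a + b * z + c * np.exp(-z)

z = np.linspace(0.0, 2.0, 50)
truth = np.array([330.0, -40.0, 15.0])
obs = profile(truth, z) + rng.normal(0, 0.5, z.size)

def misfit(params):
    return float(np.mean((profile(params, z) - obs) ** 2))

def ga(pop, n_gen=60, sigma=2.0):
    """Minimal genetic algorithm: keep the best half, mutate it."""
    for _ in range(n_gen):
        scores = np.array([misfit(p) for p in pop])
        elite = pop[np.argsort(scores)[: len(pop) // 2]]
        children = elite + rng.normal(0, sigma, elite.shape)
        pop = np.vstack([elite, children])
    scores = np.array([misfit(p) for p in pop])
    return pop[int(np.argmin(scores))], float(scores.min())

# "Ensemble-seeded" initial population (scattered near plausible values)
# versus a broad random initial population.
seeded = truth + rng.normal(0, 5.0, size=(20, 3))
random_pop = rng.uniform(-100, 400, size=(20, 3))
best_seeded, err_seeded = ga(seeded)
best_random, err_random = ga(random_pop)
```

Starting the search near physically plausible parameter values lets the GA spend its generations refining rather than exploring, which is the qualitative effect the study reports for ensemble-seeded inversions.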

  15. From Experiments to Simulations: Downscaling Measurements of Na+ Distribution at the Root-Soil Interface

    NASA Astrophysics Data System (ADS)

    Perelman, A.; Guerra, H. J.; Pohlmeier, A. J.; Vanderborght, J.; Lazarovitch, N.

    2017-12-01

    When salinity increases beyond a certain threshold, crop yield will decrease at a fixed rate, according to the Maas and Hoffman model (1976). Thus, it is highly important to predict salinization and its impact on crops. Current models do not consider the impact of the transpiration rate on plant salt tolerance, although it affects plant water uptake and thus salt accumulation around the roots, consequently influencing the plant's sensitivity to salinity. Better model parametrization can improve the prediction of real salinity effects on crop growth and yield. The aim of this research is to study Na+ distribution around roots at different scales using different non-invasive methods, and to examine how this distribution is affected by the transpiration rate and plant water uptake. Results from tomato plants that were grown on rhizoslides (a capillary paper growth system) showed that the Na+ concentration was higher at the root-substrate interface than in the bulk. Also, Na+ accumulation around the roots decreased under a low transpiration rate, supporting our hypothesis. The rhizoslides enabled the root growth rate and architecture to be studied under different salinity levels. The root system architecture was retrieved from photos taken during the experiment, enabling us to incorporate real root systems into a simulation. Magnetic resonance imaging (MRI) was used to observe correlations between root system architectures and Na+ distribution. The MRI provided fine resolution of the Na+ accumulation around a single root without disturbing the root system. With time, Na+ accumulated only where roots were found in the soil and later around specific roots. Rhizoslides allow the root systems of larger plants to be investigated, but this method is limited by the medium (paper) and the dimension (2D). The MRI can create a 3D image of Na+ accumulation in soil on a microscopic scale. 
These data are being used for model calibration, which is expected to enable the prediction of root water uptake in saline soils for different climatic conditions and different soil water availabilities.

  16. Validation of the Unthinned Loblolly Pine Plantation Yield Model-USLYCOWG

    Treesearch

    V. Clark Baldwin; D.P. Feduccia

    1982-01-01

    Yield and stand structure predictions from an unthinned loblolly pine plantation yield prediction system (USLYCOWG computer program) were compared with observations from 80 unthinned loblolly pine plots. Overall, the predicted estimates were reasonable when compared to observed values, but predictions based on input data at or near the system's limits may be in...

  17. Parametric excitation of tire-wheel assemblies by a stiffness non-uniformity

    NASA Astrophysics Data System (ADS)

    Stutts, D. S.; Krousgrill, C. M.; Soedel, W.

    1995-01-01

    A simple model of the effect of a concentrated radial stiffness non-uniformity in a passenger car tire is presented. The model treats the tread band of the tire as a rigid ring supported on a viscoelastic foundation. The distributed radial stiffness is lumped into equivalent horizontal (fore-and-aft) and vertical stiffnesses. The concentrated radial stiffness non-uniformity is modeled by treating the tread band as fixed, and the stiffness non-uniformity as rotating around it at the nominal angular velocity of the wheel. Due to loading, the center of mass of the tread band ring model is displaced upward with respect to the wheel spindle and, therefore, the rotating stiffness non-uniformity is alternately compressed and stretched through one complete rotation. This stretching and compressing of the stiffness non-uniformity results in force transmission to the wheel spindle at twice the nominal angular velocity in frequency, and therefore, would excite a given resonance at one-half the nominal angular wheel velocity that a mass unbalance would. The forcing produced by the stiffness non-uniformity is parametric in nature, thus creating the possibility of parametric resonance. The basic theory of the parametric resonance is explained, and a parameter study using derived lumped parameters based on a typical passenger car tire is performed. This study revealed that parametric resonance in passenger car tires, although possible, is unlikely at normal highway speeds as predicted by this model unless the tire is partially deflated.

  18. Nonlinear wave interactions in shallow water magnetohydrodynamics of astrophysical plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimachkov, D. A., E-mail: klimachkovdmitry@gmail.com; Petrosyan, A. S., E-mail: apetrosy@iki.rssi.ru

    2016-05-15

    The rotating magnetohydrodynamic flows of a thin layer of astrophysical and space plasmas with a free surface in a vertical external magnetic field are considered in the shallow water approximation. The presence of a vertical external magnetic field changes significantly the dynamics of wave processes in an astrophysical plasma, in contrast to a neutral fluid and a plasma layer in an external toroidal magnetic field. There are three-wave nonlinear interactions in the case under consideration. Using the asymptotic method of multiscale expansions, we have derived nonlinear equations for the interaction of wave packets: three magneto-Poincare waves, three magnetostrophic waves, two magneto-Poincare and one magnetostrophic waves, and two magnetostrophic and one magneto-Poincare waves. The existence of decay instabilities and parametric amplification is predicted. We show that a magneto-Poincare wave decays into two magneto-Poincare waves, a magnetostrophic wave decays into two magnetostrophic waves, a magneto-Poincare wave decays into one magneto-Poincare and one magnetostrophic waves, and a magnetostrophic wave decays into one magnetostrophic and one magneto-Poincare waves. There are the following parametric amplification mechanisms: the parametric amplification of magneto-Poincare waves, the parametric amplification of magnetostrophic waves, the amplification of a magneto-Poincare wave in the field of a magnetostrophic wave, and the amplification of a magnetostrophic wave in the field of a magneto-Poincare wave. The instability growth rates and parametric amplification factors have been found for the corresponding processes.
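Each decay channel listed in this record obeys the generic three-wave resonance conditions (stated here in standard form as background, not in the paper's specific notation or dispersion relations):

```latex
\omega(\mathbf{k}) = \omega_1(\mathbf{k}_1) + \omega_2(\mathbf{k}_2),
\qquad
\mathbf{k} = \mathbf{k}_1 + \mathbf{k}_2 ,
```

where a pump wave of frequency \(\omega\) and wavevector \(\mathbf{k}\) decays into two daughter waves; which branches (magneto-Poincare or magnetostrophic) can satisfy these conditions simultaneously determines the allowed decay and amplification channels enumerated above.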

  19. Model-based approach for design verification and co-optimization of catastrophic and parametric-related defects due to systematic manufacturing variations

    NASA Astrophysics Data System (ADS)

    Perry, Dan; Nakamoto, Mark; Verghese, Nishath; Hurat, Philippe; Rouse, Rich

    2007-03-01

    Model-based hotspot detection and silicon-aware parametric analysis help designers optimize their chips for yield, area and performance without the high cost of applying foundries' recommended design rules. This set of DFM/recommended rules is primarily litho-driven, but cannot guarantee a manufacturable design without imposing overly restrictive design requirements. This rule-based methodology of making design decisions based on idealized polygons that no longer represent what is on silicon needs to be replaced. Using model-based simulation of the lithography, OPC, RET and etch effects, followed by electrical evaluation of the resulting shapes, leads to a more realistic and accurate analysis. This analysis can be used to evaluate intelligent design trade-offs and identify potential failures due to systematic manufacturing defects during the design phase. The successful DFM design methodology consists of three parts: 1. Achieve a more aggressive layout through limited usage of litho-related recommended design rules. A 10% to 15% area reduction is achieved by using more aggressive design rules. DFM/recommended design rules are used only if there is no impact on cell size. 2. Identify and fix hotspots using a model-based layout printability checker. Model-based litho and etch simulation are done at the cell level to identify hotspots. Violations of recommended rules may cause additional hotspots, which are then fixed. The resulting design is ready for step 3. 3. Improve timing accuracy with a process-aware parametric analysis tool for transistors and interconnect. Contours of diffusion, poly and metal layers are used for parametric analysis. In this paper, we show the results of this physical and electrical DFM methodology at Qualcomm. We describe how Qualcomm was able to develop more aggressive cell designs that yielded a 10% to 15% area reduction using this methodology.
Model-based shape simulation was employed during library development to validate architecture choices and to optimize cell layout. At the physical verification stage, the shape simulator was run at full-chip level to identify and fix residual hotspots on interconnect layers, on poly or metal 1 due to interaction between adjacent cells, or on metal 1 due to interaction between routing (via and via cover) and cell geometry. To determine an appropriate electrical DFM solution, Qualcomm developed an experiment to examine various electrical effects. After reporting the silicon results of this experiment, which showed sizeable delay variations due to lithography-related systematic effects, we also explain how contours of diffusion, poly and metal can be used for silicon-aware parametric analysis of transistors and interconnect at the cell-, block- and chip-level.

  20. Unveiling the Shape Evolution and Halide-Ion-Segregation in Blue-Emitting Formamidinium Lead Halide Perovskite Nanocrystals Using an Automated Microfluidic Platform.

    PubMed

    Lignos, Ioannis; Protesescu, Loredana; Emiroglu, Dilara Börte; Maceiczyk, Richard; Schneider, Simon; Kovalenko, Maksym V; deMello, Andrew J

    2018-02-14

    Hybrid organic-inorganic perovskites and in particular formamidinium lead halide (FAPbX3, X = Cl, Br, I) perovskite nanocrystals (NCs) have shown great promise for their implementation in optoelectronic devices. Specifically, the Br and I counterparts have shown unprecedented photoluminescence properties, including precise wavelength tuning (530-790 nm), narrow emission linewidths (<100 meV) and high photoluminescence quantum yields (70-90%). However, the controlled formation of blue-emitting FAPb(Cl1-xBrx)3 NCs lags behind their green and red counterparts and the mechanism of their formation remains unclear. Herein, we report the formation of FAPb(Cl1-xBrx)3 NCs with stable emission between 440 and 520 nm in a fully automated droplet-based microfluidic reactor and subsequent reaction upscaling in conventional laboratory glassware. The thorough parametric screening allows for the elucidation of parametric zones (FA-to-Pb and Br-to-Cl molar ratios, temperature, and excess oleic acid) for the formation of nanoplatelets and/or NCs. In contrast to CsPb(Cl1-xBrx)3 NCs, based on online parametric screening and offline structural characterization, we demonstrate that the controlled synthesis of Cl-rich perovskites (above 60 at% Cl) with stable emission remains a challenge due to fast segregation of halide ions.

  1. Evaluation of Second-Level Inference in fMRI Analysis

    PubMed Central

    Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs

    2016-01-01

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take this variability into account. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with a minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with a minimal cluster size yields the most stable results, followed by the familywise error rate correction. FDR correction yields the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578
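Of the three multiple-testing procedures compared in this record, the FDR step is the easiest to state compactly. The following is a generic Benjamini-Hochberg sketch (not the authors' code); the example p-values are invented.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return a boolean mask of
    hypotheses rejected at false discovery rate q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m       # i*q/m for sorted p-values
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.where(below)[0]))        # largest i with p_(i) <= i*q/m
        reject[order[: k + 1]] = True              # reject all smaller p-values too
    return reject

mask = benjamini_hochberg([0.01, 0.02, 0.03, 0.50])
```

Because the rejection threshold adapts to the observed p-value distribution, the set of FDR-significant voxels can change noticeably between resamplings of the data, which is consistent with the higher variability this study reports for FDR relative to familywise correction.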

  2. Nonparametric Simulation of Signal Transduction Networks with Semi-Synchronized Update

    PubMed Central

    Nassiri, Isar; Masoudi-Nejad, Ali; Jalili, Mahdi; Moeini, Ali

    2012-01-01

    Simulating signal transduction in cellular signaling networks provides predictions of network dynamics by quantifying the changes in concentration and activity level of the individual proteins. Since numerical values of kinetic parameters can be difficult to obtain, it is imperative to develop non-parametric approaches that combine the connectivity of a network with the responses of individual proteins to signals which travel through the network. The activity levels of signaling proteins computed through existing non-parametric modeling tools do not show significant correlations with the values observed in experiments. In this work we developed a non-parametric computational framework to describe the profile of the evolving process and the time course of the proportion of molecules in active form in a signal transduction network. The model is also capable of incorporating perturbations. The model was validated on four signaling networks, showing that it can effectively uncover the activity levels and trends of response during the signal transduction process. PMID:22737250

  3. Atomistic simulation of solid-liquid coexistence for molecular systems: application to triazole and benzene.

    PubMed

    Eike, David M; Maginn, Edward J

    2006-04-28

    A method recently developed to rigorously determine solid-liquid equilibrium using a free-energy-based analysis has been extended to analyze multiatom molecular systems. This method is based on using a pseudosupercritical transformation path to reversibly transform between solid and liquid phases. Integration along this path yields the free energy difference at a single state point, which can then be used to determine the free energy difference as a function of temperature and therefore locate the coexistence temperature at a fixed pressure. The primary extension reported here is the introduction of an external potential field capable of inducing center of mass order along with secondary orientational order for molecules. The method is used to calculate the melting point of 1-H-1,2,4-triazole and benzene. Despite the fact that the triazole model gives accurate bulk densities for the liquid and crystal phases, it is found to do a poor job of reproducing the experimental crystal structure and heat of fusion. Consequently, it yields a melting point that is 100 K lower than the experimental value. On the other hand, the benzene model has been parametrized extensively to match a wide range of properties and yields a melting point that is only 20 K lower than the experimental value. Previous work in which a simple "direct heating" method was used actually found that the melting point of the benzene model was 50 K higher than the experimental value. This demonstrates the importance of using proper free energy methods to compute phase behavior. It also shows that the melting point is a very sensitive measure of force field quality that should be considered in parametrization efforts. The method described here provides a relatively simple approach for computing melting points of molecular systems.

  4. Geostatistical radar-raingauge combination with nonparametric correlograms: methodological considerations and application in Switzerland

    NASA Astrophysics Data System (ADS)

    Schiemann, R.; Erdin, R.; Willi, M.; Frei, C.; Berenguer, M.; Sempere-Torres, D.

    2011-05-01

    Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse realtime network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. Both variants are evaluated for the three test cases as well as an extended evaluation period. It is found that both methods yield merged fields of better quality than the original radar field or fields obtained by OK of gauge data. The newly suggested KED formulation is shown to be beneficial, in particular in mountainous regions where the quality of the Swiss radar composite is comparatively low. An analysis of the Kriging variances shows that none of the methods tested here provides a satisfactory uncertainty estimate. A suitable variable transformation is expected to improve this.
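
    The key relation above — a nonparametric correlogram rho(h), combined with the sample variance s2 of the field, yields the semivariogram estimate gamma(h) = s2 * (1 - rho(h)) — can be sketched on a synthetic one-dimensional field; the field, its exponential covariance, and the chosen lag are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 1500, 20.0
x = np.linspace(0.0, L, n)
# Synthetic Gaussian field with covariance C(h) = exp(-h/2), via Cholesky
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
field = np.linalg.cholesky(C + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

s2 = field.var()                                   # sample variance of the field
lag = 50                                           # lag in grid steps
rho = np.corrcoef(field[:-lag], field[lag:])[0, 1] # nonparametric correlogram value
gamma_np = s2 * (1.0 - rho)                        # correlogram-based estimate
# Classical (Matheron) semivariogram estimator for comparison
gamma_classic = 0.5 * np.mean((field[lag:] - field[:-lag]) ** 2)
```

    On a spatially complete field the two estimates nearly coincide; the practical advantage of the correlogram route is that it needs no parametric model fit and works directly from the complete radar composite.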

  5. Geostatistical radar-raingauge combination with nonparametric correlograms: methodological considerations and application in Switzerland

    NASA Astrophysics Data System (ADS)

    Schiemann, R.; Erdin, R.; Willi, M.; Frei, C.; Berenguer, M.; Sempere-Torres, D.

    2010-09-01

    Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse realtime network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. Both variants are evaluated for the three test cases as well as an extended evaluation period. It is found that both methods yield merged fields of better quality than the original radar field or fields obtained by OK of gauge data. The newly suggested KED formulation is shown to be beneficial, in particular in mountainous regions where the quality of the Swiss radar composite is comparatively low. An analysis of the Kriging variances shows that none of the methods tested here provides a satisfactory uncertainty estimate. A suitable variable transformation is expected to improve this.

  6. Linkage analysis of high myopia susceptibility locus in 26 families.

    PubMed

    Paget, Sandrine; Julia, Sophie; Vitezica, Zulma G; Soler, Vincent; Malecaze, François; Calvas, Patrick

    2008-01-01

    We conducted a linkage analysis in high myopia families to replicate suggestive results from chromosome 7q36 using a model of autosomal dominant inheritance and genetic heterogeneity. We also performed a genome-wide scan to identify novel loci. Twenty-six families, with at least two high-myopic subjects (i.e., a refractive value of -5 diopters in the less affected eye) in each family, were included. Phenotypic examination included standard autorefractometry, ultrasonographic eye length measurement, and clinical confirmation of the non-syndromic character of the refractive disorder. Nine families were collected de novo, including 136 available members of whom 34 were highly myopic subjects. Twenty new subjects were added in 5 of the 17 remaining families. A total of 233 subjects were submitted to a genome scan using the ABI linkage mapping set LMSv2-MD-10; additional markers were used in all regions where preliminary LOD scores exceeded 1.5. Multipoint parametric and non-parametric analyses were conducted with the software packages Genehunter 2.0 and Merlin 1.0.1. Two autosomal recessive, two autosomal dominant, and four autosomal additive models were used in the parametric linkage analyses. No linkage was found using the subset of nine newly collected families. Study of the entire population of 26 families with a parametric model did not yield a significant LOD score (>3), even for the previously suggestive locus on 7q36. A non-parametric model demonstrated significant linkage to chromosome 7p15 in the entire population (Z-NPL=4.07, p=0.00002). The interval is 7.81 centiMorgans (cM) between markers D7S2458 and D7S2515. The significant interval reported here needs confirmation in other cohorts. Among possible susceptibility genes in the interval, certain candidates are likely to be involved in eye growth and development.
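
    The "significant LOD score (>3)" criterion refers to the standard LOD statistic; a minimal two-point textbook sketch for phase-known meioses (not the study's multipoint Genehunter/Merlin analysis), with illustrative counts:

```python
import math

def lod(k, n, theta):
    """Two-point LOD score: log10 of the likelihood ratio of recombination
    fraction theta against free recombination (theta = 0.5), for k
    recombinants among n phase-known informative meioses."""
    like = (theta ** k) * ((1.0 - theta) ** (n - k))
    return math.log10(like / 0.5 ** n)

# Hypothetical example: 2 recombinants in 20 meioses, evaluated at the
# maximum-likelihood estimate theta = k/n = 0.1
score = lod(2, 20, 0.1)
```

    Here the score just clears the conventional significance threshold of 3, i.e., odds of 1000:1 in favor of linkage.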

  7. Supercritical nonlinear parametric dynamics of Timoshenko microbeams

    NASA Astrophysics Data System (ADS)

    Farokhi, Hamed; Ghayesh, Mergen H.

    2018-06-01

    The nonlinear supercritical parametric dynamics of a Timoshenko microbeam subject to an axial harmonic excitation force is examined theoretically, by means of different numerical techniques, and employing a high-dimensional analysis. The time-variant axial load is assumed to consist of a mean value along with harmonic fluctuations. In terms of modelling, a continuous expression for the elastic potential energy of the system is developed based on the modified couple stress theory, taking into account small-size effects; the kinetic energy of the system is also modelled as a continuous function of the displacement field. Hamilton's principle is employed to balance the energies and to obtain the continuous model of the system. Employing the Galerkin scheme along with an assumed-mode technique, the energy terms are reduced, yielding a second-order reduced-order model with a finite number of degrees of freedom. A transformation is carried out to convert the second-order reduced-order model into a double-dimensional first-order one. A bifurcation analysis is performed for the system in the absence of the axial load fluctuations. Moreover, a mean value for the axial load is selected in the supercritical range, and the principal parametric resonant response, due to the time-variant component of the axial load, is obtained; unlike transversely excited systems, a parametrically excited system (such as the one studied here) exhibits nonlinear resonance in the vicinity of twice any natural frequency of the linear system. This is accomplished via the pseudo-arclength continuation technique, direct time integration, eigenvalue analysis, and the Floquet theory for stability. The natural frequencies of the system prior to and beyond buckling are also determined. Moreover, the effect of different system parameters on the nonlinear supercritical parametric dynamics of the system is analysed, with special consideration to the effect of the length-scale parameter.
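
    The statement that parametric resonance occurs near twice a natural frequency can be illustrated with the simplest damped Mathieu-type oscillator (a generic sketch with assumed coefficients, not the Timoshenko microbeam model):

```python
import math

# Damped oscillator with harmonically modulated stiffness (natural frequency 1):
#   x'' + 2*zeta*x' + (1 + eps*cos(Omega*t)) * x = 0
# Principal parametric resonance occurs for Omega near 2 (twice the natural
# frequency); driving at Omega = 1 does not destabilize the system here.
def peak_amplitude(Omega, eps=0.2, zeta=0.02, T=300.0, dt=0.001):
    x, v, t = 1e-3, 0.0, 0.0          # tiny initial disturbance
    peak = abs(x)
    while t < T:
        a = -2.0 * zeta * v - (1.0 + eps * math.cos(Omega * t)) * x
        v += a * dt                   # semi-implicit Euler for stability
        x += v * dt
        t += dt
        peak = max(peak, abs(x))
    return peak

amp_at_2w = peak_amplitude(2.0)       # drive at twice the natural frequency
amp_at_w = peak_amplitude(1.0)        # drive at the natural frequency itself
```

    With eps/4 exceeding the damping ratio, the response at Omega = 2 grows exponentially while the response at Omega = 1 simply decays, mirroring the resonance condition stated in the abstract.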

  8. Comparison of methods for estimating the attributable risk in the context of survival analysis.

    PubMed

    Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M

    2017-01-23

    The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for nonparametric methods at the end of follow-up, especially for a sample size of 1,000. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests using the semiparametric or parametric approach to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.
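
    The "simpler method" based on baseline exposure prevalence and the Cox hazard ratio amounts to a Levin-type formula; a minimal sketch with made-up inputs (not the paper's E3N estimates):

```python
def attributable_risk(p, hr):
    """Levin-type attributable risk from exposure prevalence p and
    a hazard (or risk) ratio hr: AR = p*(hr - 1) / (1 + p*(hr - 1))."""
    excess = p * (hr - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: 30% exposed at baseline, hazard ratio 1.5
ar = attributable_risk(0.30, 1.5)
```

    With these illustrative numbers, about 13% of cases would be attributed to the exposure; the survival-function-based definitions in the paper generalize this to a quantity that varies with follow-up time.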

  9. Can the Direct Medical Cost of Chronic Disease Be Transferred across Different Countries? Using Cost-of-Illness Studies on Type 2 Diabetes, Epilepsy and Schizophrenia as Examples

    PubMed Central

    Gao, Lan; Hu, Hao; Zhao, Fei-Li; Li, Shu-Chuen

    2016-01-01

    Objectives To systematically review cost-of-illness studies for schizophrenia (SC), epilepsy (EP) and type 2 diabetes mellitus (T2DM) and explore the transferability of direct medical costs across countries. Methods A comprehensive literature search was performed to yield studies that estimated direct medical costs. A generalized linear model (GLM) with gamma distribution and log link was utilized to explore the variation in costs accounted for by the included factors. Both parametric (random-effects model) and non-parametric (bootstrapping) meta-analyses were performed to pool the converted raw cost data (expressed as a percentage of GDP/capita of the country where the study was conducted). Results In total, 93 articles were included (40 studies for T2DM, 34 for EP and 19 for SC). Significant variance was detected both within and between disease classes for the direct medical costs. Multivariate analysis identified that GDP/capita (p<0.05) was a significant factor contributing to the large variance in the cost results. Bootstrapping meta-analysis generated more conservative estimates with slightly wider 95% confidence intervals (CI) than the parametric meta-analysis, yielding a mean (95%CI) of 16.43% (11.32, 21.54) for T2DM, 36.17% (22.34, 50.00) for SC and 10.49% (7.86, 13.41) for EP. Conclusions Converting the raw cost data into a percentage of GDP/capita of each country was demonstrated to be a feasible approach to transfer direct medical costs across countries. The approach from our study, obtaining an estimated direct cost value along with the size of the specific disease population in each jurisdiction, could be used for a quick check on the economic burden of a particular disease in countries without such data. PMID:26814959
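
    The nonparametric pooling step can be sketched as a percentile bootstrap of per-study costs expressed as a percentage of GDP per capita; the cost values below are invented placeholders, not data from the review:

```python
import random
import statistics

# Hypothetical per-study direct medical costs, each as % of the study
# country's GDP per capita (illustrative numbers only)
costs = [12.1, 9.8, 22.4, 15.0, 18.3, 8.9, 25.7, 14.2, 11.5, 19.9]

random.seed(42)
B = 5000
boot_means = []
for _ in range(B):
    resample = [random.choice(costs) for _ in costs]   # sample with replacement
    boot_means.append(statistics.mean(resample))
boot_means.sort()
lo, hi = boot_means[int(0.025 * B)], boot_means[int(0.975 * B)]  # percentile CI
point = statistics.mean(costs)
```

    The percentile interval makes no distributional assumption, which is why it tends to be wider (more conservative) than a parametric random-effects interval on skewed cost data.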

  10. Modeling and Prediction of Krueger Device Noise

    NASA Technical Reports Server (NTRS)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far-field noise is modeled using each of the four noise components' respective spectral functions, far-field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.

  11. Soliton motion in a parametrically ac-driven damped Toda lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasmussen, K.O.; Malomed, B.A.; Bishop, A.R.

    We demonstrate that a staggered parametric ac driving term can support stable progressive motion of a soliton in a Toda lattice with friction, while an unstaggered driving force cannot. A physical context of the model is that of a chain of anharmonically coupled particles adsorbed on a solid surface of a finite size. The ac driving force is generated by a standing acoustic wave excited on the surface. Simulations demonstrate that the state left behind the moving soliton, with the particles shifted from their equilibrium positions, gradually relaxes back to the equilibrium state that existed before the passage of the soliton. The perturbation theory predicts that the ac-driven soliton exists if the amplitude of the drive exceeds a certain threshold. The analytical prediction for the threshold is in reasonable agreement with that found numerically. Collisions between two counterpropagating solitons are also simulated, demonstrating that the collisions are, effectively, fully elastic. © 1998 The American Physical Society
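
    The relaxation behaviour described (particles drifting back to equilibrium behind the soliton) can be mimicked with a crude damped Toda-chain integration; the chain size, damping, and initial displacement are assumptions, and no parametric drive is included:

```python
import math

# Damped Toda chain with fixed ends: x_n'' = exp(x_{n-1}-x_n) - exp(x_n-x_{n+1})
#                                            - gamma * x_n'
# Started from a localized displacement, the chain relaxes back to equilibrium.
N, gamma, dt = 16, 0.2, 0.01
x = [0.5 * math.exp(-((n - N / 2) ** 2) / 4.0) for n in range(N)]  # local shift
v = [0.0] * N

def force(x, n):
    left = (x[n - 1] - x[n]) if n > 0 else -x[n]       # wall at 0 on the left
    right = (x[n] - x[n + 1]) if n < N - 1 else x[n]   # wall at 0 on the right
    return math.exp(left) - math.exp(right)

initial_disp = max(abs(q) for q in x)
for _ in range(20000):                                 # integrate to t = 200
    for n in range(N):
        v[n] += dt * (force(x, n) - gamma * v[n])      # semi-implicit Euler
    for n in range(N):
        x[n] += dt * v[n]
final_disp = max(abs(q) for q in x)
```

    The exponential nearest-neighbour interaction is the Toda potential's force law; with friction and no drive, the displaced state decays, which is the baseline against which the staggered ac drive sustains soliton motion in the paper.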

  12. ROI on yield data analysis systems through a business process management strategy

    NASA Astrophysics Data System (ADS)

    Rehani, Manu; Strader, Nathan; Hanson, Jeff

    2005-05-01

    The overriding motivation for yield engineering is profitability, achieved through the application of yield management. The first application is to continually reduce waste in the form of yield loss. New products, new technologies, and the dynamic state of processes and equipment keep introducing new sources of yield loss; in response, yield management efforts must continually produce new solutions to minimize it. The second application of yield engineering is to aid accurate product pricing, achieved by predicting future results of the yield engineering effort. The more accurate the yield prediction, the more accurate the wafer start volume, and the more accurate the wafer pricing. Another aspect of yield prediction is gauging the impact of a yield problem and predicting how long it will last; the ability to predict such impacts again feeds into wafer start calculations and wafer pricing. The question, then, is: if the stakes in yield management are so high, why are most yield management efforts run like science and engineering projects rather than like manufacturing? In the eighties, manufacturing put the theory of constraints into practice and put a premium on stability and predictability in manufacturing activities; why can't the same be done for yield management activities? This line of introspection led us to define and implement a business process to manage yield engineering activities. We analyzed the best known methods (BKM) and deployed a workflow tool to make them the standard operating procedure (SOP) for yield management. We present a case study in deploying a Business Process Management solution for semiconductor yield engineering in a high-mix ASIC environment, including a description of the situation prior to deployment, a window into the development process, and a valuation of the benefits.

  13. Air Brayton Solar Receiver, phase 1

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. K.

    1979-01-01

    A six-month analysis and conceptual design study of an open-cycle Air Brayton Solar Receiver (ABSR) for use on a tracking, parabolic solar concentrator is discussed. The ABSR, which includes a buffer storage system, is designed to provide inlet air to a power conversion unit. Parametric analyses, conceptual design, interface requirements, and production cost estimates are described. The design features were optimized to yield a zero-maintenance, low-cost, high-efficiency concept that will provide a 30-year operational life.

  14. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.

    PubMed

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-16

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
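
    The Richardson-Lucy iteration underlying the proposed filtering steps, in its generic one-dimensional form (a standard sketch of the algorithm, not the authors' 2D-SIM pipeline; the point-source test signal is invented):

```python
import numpy as np

def richardson_lucy(observed, psf, iters=300):
    """Generic Richardson-Lucy deconvolution (1-D, noiseless sketch)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                      # correlation = convolution w/ flip
    est = np.full_like(observed, observed.mean())
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Two point sources blurred by a Gaussian PSF are sharpened back up
truth = np.zeros(64)
truth[20] = 1.0
truth[40] = 1.0
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
observed = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(observed, psf, iters=300)
```

    The multiplicative update preserves non-negativity, which is one reason RL-based filtering tends to suppress the ringing artifacts that a plain Wiener filter with a mistuned apodization can introduce.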

  15. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution

    NASA Astrophysics Data System (ADS)

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-01

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.

  16. A parametric ribcage geometry model accounting for variations among the adult population.

    PubMed

    Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2016-09-06

    The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, sternum, and thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models, a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape, were developed. The results showed that the ribcage size was dominated by the height (p=0.000) and age-sex-interaction (p=0.007) and the ribcage shape was significantly affected by the age (p=0.0005), sex (p=0.0002), height (p=0.0064) and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying effects of human characteristics on thoracic injury risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
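
    The PCA-plus-regression core of such a parametric geometry model can be sketched on synthetic landmark data (the dimensions, covariate, and single shape mode below are invented; the real model additionally involves GPA alignment and a quadratic size model):

```python
import numpy as np

# Synthetic study: 50 "subjects", 30 2-D landmarks each, whose shape varies
# along one mode driven by a height covariate (all values hypothetical).
rng = np.random.default_rng(1)
n_subj, n_landmarks = 50, 30
height = rng.uniform(150, 190, n_subj)                 # covariate, cm
base = rng.standard_normal(n_landmarks * 2)            # mean shape vector
mode = rng.standard_normal(n_landmarks * 2)            # one shape mode
shapes = base + np.outer((height - 170) / 20, mode)    # height drives shape
shapes += 0.01 * rng.standard_normal(shapes.shape)     # small measurement noise

centered = shapes - shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
scores = U * S                                           # PC scores per subject

# Regress the first PC score on the covariate (linear shape model)
X = np.column_stack([np.ones(n_subj), height])
beta, *_ = np.linalg.lstsq(X, scores[:, 0], rcond=None)
pred_pc1 = X @ beta
r = np.corrcoef(pred_pc1, scores[:, 0])[0, 1]
```

    Predicting PC scores from covariates and mapping back through the PC basis is what lets such a model generate a full template mesh for any age/sex/height/BMI combination.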

  17. A review of recent developments in parametric based acoustic emission techniques applied to concrete structures

    NASA Astrophysics Data System (ADS)

    Vidya Sagar, R.; Raghu Prasad, B. K.

    2012-03-01

    This article presents a review of recent developments in parametric based acoustic emission (AE) techniques applied to concrete structures. It recapitulates the significant milestones achieved by previous researchers including various methods and models developed in AE testing of concrete structures. The aim is to provide an overview of the specific features of parametric based AE techniques of concrete structures carried out over the years. Emphasis is given to traditional parameter-based AE techniques applied to concrete structures. A significant amount of research on AE techniques applied to concrete structures has already been published and considerable attention has been given to those publications. Some recent studies such as AE energy analysis and b-value analysis used to assess damage of concrete bridge beams have also been discussed. The formation of fracture process zone and the AE energy released during the fracture process in concrete beam specimens have been summarised. A large body of experimental data on AE characteristics of concrete has accumulated over the last three decades. This review of parametric based AE techniques applied to concrete structures may help researchers and engineers better understand the failure mechanisms of concrete and develop more useful methods and approaches for diagnostic inspection of structural elements and for failure prediction and prevention in concrete structures.
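
    The b-value analysis mentioned is typically computed from AE peak amplitudes with an Aki-style maximum-likelihood estimator after converting amplitudes (in dB) to magnitudes; a sketch with made-up amplitudes:

```python
import math

# Hypothetical AE peak amplitudes in dB from one loading stage
amplitudes_db = [42, 45, 47, 51, 53, 55, 58, 60, 63, 67, 70, 74]
mags = [a / 20.0 for a in amplitudes_db]   # common AE "magnitude" convention
Mc = min(mags)                             # completeness magnitude (smallest event)

# Aki maximum-likelihood estimator: b = log10(e) / (mean(M) - Mc)
b = math.log10(math.e) / (sum(mags) / len(mags) - Mc)
```

    In AE damage studies a falling b-value is read as a shift toward fewer, larger events, i.e., localization of cracking before failure.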

  18. Three-Phased Wake Vortex Decay

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Ahmad, Nashat N.; Switzer, George S.; LimonDuparcmeur, Fanny M.

    2010-01-01

    A detailed parametric study is conducted that examines vortex decay within turbulent and stratified atmospheres. The study uses a large eddy simulation model to simulate the out-of-ground effect behavior of wake vortices due to their interaction with atmospheric turbulence and thermal stratification. This paper presents results from a parametric investigation and suggests improvements for existing fast-time wake prediction models. This paper also describes a three-phased decay for wake vortices. The third phase is characterized by a relatively slow rate of circulation decay, and is associated with the ring-vortex stage that occurs following vortex linking. The three-phased decay is most prevalent for wakes embedded within environments having low turbulence and near-neutral stratification.

  19. The 32nd CDC: System identification using interval dynamic models

    NASA Technical Reports Server (NTRS)

    Keel, L. H.; Lew, J. S.; Bhattacharyya, S. P.

    1992-01-01

    Motivated by the recent explosive development of results in the area of parametric robust control, a new technique to identify a family of uncertain systems is presented. The new technique takes the frequency domain input and output data obtained from experimental test signals and produces an 'interval transfer function' that contains the complete frequency domain behavior with respect to the test signals. This interval transfer function is one of the key concepts in the parametric robust control approach, and identification with such an interval model allows one to predict the worst case performance and stability margins using recent results on interval systems. The algorithm is illustrated by applying it to an 18 bay Mini-Mast truss structure.

  20. Direct solar-pumped iodine laser amplifier

    NASA Technical Reports Server (NTRS)

    Han, K. S.

    1986-01-01

    During this period the parametric studies of the iodine laser oscillator pumped by a Vortek simulator were carried out before amplifier studies. The amplifier studies are postponed to the extended period after completing the parametric studies. In addition, the kinetic modeling of a solar-pumped iodine laser amplifier, and the experimental work for a solar pumped dye laser amplifier are in progress. This report contains three parts: (1) a 10 W CW iodine laser pumped by a Vortek solar simulator; (2) kinetic modeling to predict the time to lasing threshold, lasing time, and energy output of solar-pumped iodine laser; and (3) the study of the dye laser amplifier pumped by a Tamarack solar simulator.

  1. Nonlinear Tides in Close Binary Systems

    NASA Astrophysics Data System (ADS)

    Weinberg, Nevin N.; Arras, Phil; Quataert, Eliot; Burkart, Josh

    2012-06-01

    We study the excitation and damping of tides in close binary systems, accounting for the leading-order nonlinear corrections to linear tidal theory. These nonlinear corrections include two distinct physical effects: three-mode nonlinear interactions, i.e., the redistribution of energy among stellar modes of oscillation, and nonlinear excitation of stellar normal modes by the time-varying gravitational potential of the companion. This paper, the first in a series, presents the formalism for studying nonlinear tides and studies the nonlinear stability of the linear tidal flow. Although the formalism we present is applicable to binaries containing stars, planets, and/or compact objects, we focus on non-rotating solar-type stars with stellar or planetary companions. Our primary results include the following: (1) The linear tidal solution almost universally used in studies of binary evolution is unstable over much of the parameter space in which it is employed. More specifically, resonantly excited internal gravity waves in solar-type stars are nonlinearly unstable to parametric resonance for companion masses M' ≳ 10-100 M⊕ at orbital periods P ≈ 1-10 days. The nearly static "equilibrium" tidal distortion is, however, stable to parametric resonance except for solar binaries with P ≲ 2-5 days. (2) For companion masses larger than a few Jupiter masses, the dynamical tide causes short length scale waves to grow so rapidly that they must be treated as traveling waves, rather than standing waves. (3) We show that the global three-wave treatment of parametric instability typically used in the astrophysics literature does not yield the fastest-growing daughter modes or instability threshold in many cases. We find a form of parametric instability in which a single parent wave excites a very large number of daughter waves (N ≈ 10^3 [P/10 days] for a solar-type star) and drives them as a single coherent unit with growth rates that are a factor of ≈N faster than the standard three-wave parametric instability. These are local instabilities viewed through the lens of global analysis; the coherent global growth rate follows local rates in the regions where the shear is strongest. In solar-type stars, the dynamical tide is unstable to this collective version of the parametric instability for even sub-Jupiter companion masses with P ≲ a month. (4) Independent of the parametric instability, the dynamical and equilibrium tides excite a wide range of stellar p-modes and g-modes by nonlinear inhomogeneous forcing; this coupling appears particularly efficient at draining energy out of the dynamical tide and may be more important than either wave breaking or parametric resonance at determining the nonlinear dissipation of the dynamical tide.

  2. Using string invariants for prediction searching for optimal parameters

    NASA Astrophysics Data System (ADS)

    Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard

    2016-02-01

    We have developed a novel prediction method based on string invariants. The method does not require learning but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single step prediction but the method's performance for multiple step prediction needs to be improved. The method works well for a wide range of parameters.
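    The evolutionary parameter search mentioned above can be illustrated with a minimal (1+λ) evolution strategy. This is only a sketch of the general technique, not the authors' implementation; the objective function and all settings below are hypothetical.

```python
import random

def evolve(fitness, x0, sigma=0.5, offspring=8, generations=60, seed=1):
    """Minimise `fitness` over a real-valued parameter vector with a
    simple (1+lambda) evolution strategy: mutate the incumbent and
    keep the best candidate (elitist selection)."""
    rng = random.Random(seed)
    best, best_f = list(x0), fitness(x0)
    for _ in range(generations):
        for _ in range(offspring):
            cand = [g + rng.gauss(0.0, sigma) for g in best]
            f = fitness(cand)
            if f < best_f:
                best, best_f = cand, f
    return best, best_f

# hypothetical tuning objective with optimum at (1, -2)
opt, val = evolve(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
```

    A real application would replace the toy objective with the prediction error of the string-invariant method on a validation set.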

  3. Flexible parametric survival models built on age-specific antimüllerian hormone percentiles are better predictors of menopause.

    PubMed

    Ramezani Tehrani, Fahimeh; Mansournia, Mohammad Ali; Solaymani-Dodaran, Masoud; Steyerberg, Ewout; Azizi, Fereidoun

    2016-06-01

    This study aimed to improve existing prediction models for age at menopause. We identified all reproductive aged women with regular menstrual cycles who met our eligibility criteria (n = 1,015) in the Tehran Lipid and Glucose Study-an ongoing population-based cohort study initiated in 1998. Participants were examined every 3 years and their reproductive histories were recorded. Blood levels of antimüllerian hormone (AMH) were measured at the time of recruitment. Age at menopause was estimated based on serum concentrations of AMH using flexible parametric survival models. The optimum model was selected according to Akaike Information Criteria and the realness of the range of predicted median menopause age. We followed study participants for a median of 9.8 years during which 277 women reached menopause and found that a spline-based proportional odds model including age-specific AMH percentiles as the covariate performed well in terms of statistical criteria and provided the most clinically relevant and realistic predictions. The range of predicted median age at menopause for this model was 47.1 to 55.9 years. For those who reached menopause, the median of the absolute mean difference between actual and predicted age at menopause was 1.9 years (interquartile range 2.9). The model including the age-specific AMH percentiles as the covariate and using proportional odds as its covariate metrics meets all the statistical criteria for the best model and provides the most clinically relevant and realistic predictions for age at menopause for reproductive-aged women.
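    The model selection step described above, choosing the optimum model by the Akaike Information Criterion, amounts to comparing 2k − 2 ln L across candidates. A minimal sketch with invented log-likelihood values (not the study's data):

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln L; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# hypothetical maximised log-likelihoods for two candidate survival models
simple = aic(log_likelihood=-512.4, n_params=3)    # e.g. a basic parametric model
flexible = aic(log_likelihood=-498.1, n_params=6)  # e.g. a spline-based model
best = "flexible" if flexible < simple else "simple"
```

    A better fit (higher ln L) only wins if it justifies its extra parameters, which is why the spline-based model above is preferred despite being more complex.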

  4. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  5. [Prediction of the side-cut product yield of atmospheric/vacuum distillation unit by NIR crude oil rapid assay].

    PubMed

    Wang, Yan-Bin; Hu, Yu-Zhong; Li, Wen-Le; Zhang, Wei-Song; Zhou, Feng; Luo, Zhi

    2014-10-01

    In the present paper, a method to predict the side-cut yields of an atmospheric/vacuum distillation unit was developed based on the near-infrared (NIR) rapid crude assay technique combined with H/CAMS software. Firstly, an NIR spectroscopy method for rapidly determining the true boiling point of crude oil was developed. A calibration model was established with a commercially available crude oil spectroscopy database and experimental tests from Guangxi Petrochemical Company, using a topological method for the calibration. The model can be employed to predict the true boiling point of crude oil. Secondly, the true boiling point from the NIR rapid assay was converted to the side-cut product yields of the atmospheric/vacuum distillation unit by the H/CAMS software. The predicted and actual yields of the distillation products naphtha, diesel, wax, and residual oil were compared over a 7-month period. The results showed that the NIR rapid crude assay can predict the side-cut product yields accurately. The NIR analytic method for predicting yield has the advantages of fast analysis, reliable results, and easy online operation, and it can provide elementary data for refinery planning optimization and crude oil blending.

  6. High-NOx Photooxidation of n-Dodecane: Temperature Dependence of SOA Formation.

    PubMed

    Lamkaddam, Houssni; Gratien, Aline; Pangui, Edouard; Cazaunau, Mathieu; Picquet-Varrault, Bénédicte; Doussin, Jean-François

    2017-01-03

    The temperature and concentration dependence of secondary organic aerosol (SOA) yields has been investigated for the first time for the photooxidation of n-dodecane (C 12 H 26 ) in the presence of NO x in the CESAM chamber (French acronym for "Chamber for Atmospheric Multiphase Experimental Simulation"). Experiments were performed with and without seed aerosol between 283 and 304.5 K. In order to quantify the SOA yields, a new parametrization is proposed to account for organic vapor loss to the chamber walls. Deposition processes were found to impact the aerosol yields by a factor from 1.3 to 1.8 between the lowest and the highest value. As with other photooxidation systems, experiments performed without seed and at low concentration of oxidant showed a lower SOA yield than other seeded experiments. Temperature did not significantly influence SOA formation in this study. This unforeseen behavior indicates that the SOA is dominated by sufficiently low volatility products for which a change in their partitioning due to temperature would not significantly affect the condensed quantities.

  7. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    NASA Astrophysics Data System (ADS)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
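    At the core of the BAIPU approach is MCMC sampling of the posterior. DREAM(ZS) and the SEP likelihood are well beyond a few lines, but the underlying idea can be sketched with the simplest sampler, random-walk Metropolis, on a toy one-parameter posterior (everything here is illustrative, not the study's setup):

```python
import math, random

def metropolis(log_post, x0, step=1.0, n=5000, seed=7):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio). Returns the sampled chain."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lp_c = log_post(cand)
        if math.log(rng.random()) < lp_c - lp:
            x, lp = cand, lp_c
        chain.append(x)
    return chain

# toy posterior: standard normal log-density (up to a constant)
chain = metropolis(lambda t: -0.5 * t * t, x0=0.0)
posterior = chain[1000:]  # discard burn-in
mean = sum(posterior) / len(posterior)
var = sum((v - mean) ** 2 for v in posterior) / len(posterior)
```

    The retained samples approximate the posterior of the parameter; in a WWQ application `log_post` would wrap a model run and the formal likelihood.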

  8. Multi-parametric variational data assimilation for hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.

    2017-12-01

    Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to enable better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at representing meteorological uncertainty but neglects the uncertainty of the hydrological model as well as of its initial conditions. Complementary approaches use probabilistic data assimilation techniques to obtain a variety of initial states, or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data, as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e. observed data as forcing variables) as well as of deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of the rank histogram and produces a narrower ensemble spread.
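    The CRPS used to score the forecasts above has a standard closed form for a finite ensemble: the mean absolute error of the members against the observation, minus half the mean absolute spread between members. A self-contained sketch (the ensemble values are made up):

```python
def crps(ensemble, obs):
    """Continuous Ranked Probability Score for an ensemble forecast:
    mean |member - obs| minus half the mean pairwise member distance.
    Lower is better; it rewards both accuracy and sharpness."""
    m = len(ensemble)
    term1 = sum(abs(x - obs) for x in ensemble) / m
    term2 = sum(abs(a - b) for a in ensemble for b in ensemble) / (2 * m * m)
    return term1 - term2

# a sharp, well-centred ensemble scores better than a biased one
good = crps([9.8, 10.1, 10.0, 9.9, 10.2], 10.0)
bad = crps([12.0, 12.3, 11.8, 12.1, 12.4], 10.0)
```

    Averaging this score over many forecast/observation pairs gives the CRPS performance that the 23% improvement above refers to.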

  9. Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters

    NASA Astrophysics Data System (ADS)

    Kim, T.; Kim, Y. S.

    2017-12-01

    The frequency analysis of hydrometeorological data is one of the most important inputs for responding to natural disaster damage and for setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data typically assumes that the observations are statistically stationary, and a parametric method based on the parameters of a probability distribution is applied. A parametric method requires a sufficiently long record of reliable data; in Korea, however, the snowfall record is insufficient, because the number of days with snowfall observations and the mean maximum daily snowfall depth have been decreasing under climate change. In this study, we conducted a frequency analysis of snowfall using the bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For 58 meteorological stations distributed evenly across Korea, probabilistic snowfall depths were estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth decreases at most stations, and the rates of change at most stations were consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods can support frequency analysis of snowfall depth when observed samples are insufficient, and the approach can be applied to the interpretation of other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
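    The bootstrap idea used above, resampling the observed maxima with replacement to estimate a design quantile and its uncertainty without fitting a parametric distribution, fits in a few lines. A minimal sketch with invented snowfall data (not the study's observations):

```python
import random

def bootstrap_quantile(sample, q=0.9, n_boot=2000, seed=3):
    """Non-parametric bootstrap: resample with replacement and collect
    the empirical q-quantile of each resample, yielding a point
    estimate and a simple 90% percentile interval for the design
    value, with no distributional assumption."""
    rng = random.Random(seed)
    m = len(sample)

    def quantile(xs, q):
        xs = sorted(xs)
        return xs[min(int(q * len(xs)), len(xs) - 1)]

    estimates = sorted(
        quantile([rng.choice(sample) for _ in range(m)], q)
        for _ in range(n_boot)
    )
    return (quantile(sample, q),
            estimates[int(0.05 * n_boot)],
            estimates[int(0.95 * n_boot)])

# hypothetical annual maximum daily snowfall depths (cm)
depths = [12, 35, 8, 22, 41, 17, 29, 55, 9, 26, 33, 14, 48, 20, 38]
point, lo, hi = bootstrap_quantile(depths, q=0.9)
```

    The width of the interval (lo, hi) directly reflects how little a short record constrains the design value, which is the motivation for the resampling approach.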

  10. Markers of preparatory attention predict visual short-term memory performance.

    PubMed

    Murray, Alexandra M; Nobre, Anna C; Stokes, Mark G

    2011-05-01

    Visual short-term memory (VSTM) is limited in capacity. Therefore, it is important to encode only visual information that is most likely to be relevant to behaviour. Here we asked which aspects of selective biasing of VSTM encoding predict subsequent memory-based performance. We measured EEG during a selective VSTM encoding task, in which we parametrically varied the memory load and the precision of recall required to compare a remembered item to a subsequent probe item. On half the trials, a spatial cue indicated that participants only needed to encode items from one hemifield. We observed a typical sequence of markers of anticipatory spatial attention: early attention directing negativity (EDAN), anterior attention directing negativity (ADAN), and late directing attention positivity (LDAP); as well as a marker of VSTM maintenance: contralateral delay activity (CDA). We found that individual differences in preparatory brain activity (EDAN/ADAN) predicted cue-related changes in recall accuracy, indexed by memory-probe discrimination sensitivity (d'). Importantly, our parametric manipulation of memory-probe similarity also allowed us to model the behavioural data for each participant, providing estimates for the quality of the memory representation and the probability that an item could be retrieved. We found that selective encoding primarily increased the probability of accurate memory recall, and that ERP markers of preparatory attention predicted the cue-related changes in recall probability. Copyright © 2011. Published by Elsevier Ltd.
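    The discrimination sensitivity d′ used above as the memory-probe index is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch with hypothetical trial counts; the log-linear correction applied here is one common convention, not necessarily the one used in the paper:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm
    rate). A log-linear correction keeps the rates away from 0 and 1
    so the inverse-normal transform stays finite."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

# a participant who discriminates well vs. one responding at chance
sensitive = d_prime(45, 5, 8, 42)
chance = d_prime(25, 25, 25, 25)
```

    A d′ of 0 means probe-present and probe-absent trials are indistinguishable; larger values mean sharper memory-probe discrimination.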

  11. Understanding and predicting profile structure and parametric scaling of intrinsic rotation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, W. X.; Grierson, B. A.; Ethier, S.

    2017-08-10

    This study reports on a recent advance in developing physical understanding and a first-principles-based model for predicting intrinsic rotation profiles in magnetic fusion experiments. It is shown for the first time that turbulent fluctuation-driven residual stress (a non-diffusive component of momentum flux) along with diffusive momentum flux can account for both the shape and magnitude of the observed intrinsic toroidal rotation profile. Both the turbulence intensity gradient and zonal flow E×B shear are identified as major contributors to the generation of the k∥-asymmetry needed for the residual stress generation. The model predictions of core rotation based on global gyrokinetic simulations agree well with the experimental measurements of main ion toroidal rotation for a set of DIII-D ECH discharges. The validated model is further used to investigate the characteristic dependence of residual stress and intrinsic rotation profile structure on the multi-dimensional parametric space covering the turbulence type, q-profile structure, and up-down asymmetry in magnetic geometry with the goal of developing the physics understanding needed for rotation profile control and optimization. It is shown that in the flat-q profile regime, intrinsic rotations driven by ITG and TEM turbulence are in the opposite direction (i.e., intrinsic rotation reverses). The predictive model also produces reversed intrinsic rotation for plasmas with weak and normal shear q-profiles.

  12. Sequential causal inference: Application to randomized trials of adaptive treatment strategies

    PubMed Central

    Dawson, Ree; Lavori, Philip W.

    2009-01-01

    Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714

  13. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that the structural uncertainty among the reference ET models is far more important than the parametric uncertainty introduced by the crop coefficients. These crop coefficients are used to estimate the irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm set by water rights, would be exceeded less frequently with the REA ensemble average (45%) than with the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
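    The contrast drawn above between an equally weighted ensemble mean and a reliability-weighted (REA-style) average reduces to a weighted sum over model outputs. A minimal sketch; the model outputs and weights below are invented, and real REA derives its weights from model performance and convergence criteria rather than assigning them by hand:

```python
def weighted_ensemble(predictions, weights):
    """Weighted ensemble average across models: normalise the weights,
    then combine the models' predictions element-wise. Equal weights
    recover the plain multi-model mean."""
    s = sum(weights)
    norm = [w / s for w in weights]
    return [sum(w * p for w, p in zip(norm, col))
            for col in zip(*predictions)]

# three hypothetical model series of irrigation water requirement (mm)
models = [[380, 410, 395], [420, 450, 430], [300, 330, 310]]
equal = weighted_ensemble(models, [1, 1, 1])
# down-weight the outlying model, as a reliability scheme might
weighted = weighted_ensemble(models, [1.0, 0.8, 0.2])
```

    Exceedance probabilities like those quoted above are then read off the distribution of the (weighted) ensemble predictions against the 400 mm threshold.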

  14. Comparison of Cox’s Regression Model and Parametric Models in Evaluating the Prognostic Factors for Survival after Liver Transplantation in Shiraz during 2000–2012

    PubMed Central

    Adelian, R.; Jamali, J.; Zare, N.; Ayatollahi, S. M. T.; Pooladfar, G. R.; Roustaei, N.

    2015-01-01

    Background: Identification of the prognostic factors for survival in patients with liver transplantation is challenging. Various methods of survival analysis have provided different, sometimes contradictory, results from the same data. Objective: To compare Cox’s regression model with parametric models for determining the independent factors for predicting adults’ and pediatric patients’ survival after liver transplantation. Method: This study was conducted on 183 pediatric patients and 346 adults who underwent liver transplantation in Namazi Hospital, Shiraz, southern Iran. The study population included all patients undergoing liver transplantation from 2000 to 2012. The prognostic factors sex, age, Child class, initial diagnosis of the liver disease, PELD/MELD score, and pre-operative laboratory markers were selected for survival analysis. Result: Among 529 patients, 346 (64.5%) were adult and 183 (34.6%) were pediatric cases. Overall, the lognormal distribution was the best-fitting model for adult and pediatric patients. Age in adults (HR=1.16, p<0.05) and weight (HR=2.68, p<0.01) and Child class B (HR=2.12, p<0.05) in pediatric patients were the most important factors for prediction of survival after liver transplantation. Adult patients younger than the mean age and pediatric patients weighing above the mean and Child class A (compared to those with classes B or C) had better survival. Conclusion: The parametric regression model is a good alternative to Cox’s regression model. PMID:26306158

  15. Parametric Study of Biconic Re-Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Steele, Bryan; Banks, Daniel W.; Whitmore, Stephen A.

    2007-01-01

    An optimization based on hypersonic aerodynamic performance and volumetric efficiency was accomplished for a range of biconic configurations. Both axisymmetric and quasi-axisymmetric geometries (bent and flattened) were analyzed. The aerodynamic optimization was based on hypersonic simple incidence angle analysis tools. The range of configurations included those suitable for a lunar return trajectory with a lifting aerocapture at Earth and an overall volume that could support a nominal crew. The results yielded five configurations that had acceptable aerodynamic performance and met overall geometry and size limitations.

  16. Bayesian Unimodal Density Regression for Causal Inference

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2011-01-01

    Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…

  17. Parametric response mapping cut-off values that predict survival of hepatocellular carcinoma patients after TACE.

    PubMed

    Nörthen, Aventinus; Asendorf, Thomas; Shin, Hoen-Oh; Hinrichs, Jan B; Werncke, Thomas; Vogel, Arndt; Kirstein, Martha M; Wacker, Frank K; Rodt, Thomas

    2018-04-21

    Parametric response mapping (PRM) is a novel image-analysis technique applicable to assess tumor viability and predict intrahepatic recurrence of hepatocellular carcinoma (HCC) patients treated with transarterial chemoembolization (TACE). However, to date, the prognostic value of PRM for prediction of overall survival in HCC patients undergoing TACE is unclear. The objective of this explorative, single-center study was to identify cut-off values for voxel-specific PRM parameters that predict the post TACE overall survival in HCC patients. PRM was applied to biphasic CT data obtained at baseline and following 3 TACE treatments of 20 patients with HCC tumors ≥ 2 cm. The individual portal venous phases were registered to the arterial phases followed by segmentation of the largest lesion, i.e., the region of interest (ROI). Segmented voxels with their respective arterial and portal venous phase density values were displayed as a scatter plot. Voxel-specific PRM parameters were calculated and compared to patients' survival at 1, 2, and 3 years post treatment to identify the maximal predictive parameters. The hypervascularized tissue portion of the ROI was found to represent an independent predictor of the post TACE overall survival. For this parameter, cut-off values of 3650, 2057, and 2057 voxels, respectively, were determined to be optimal to predict overall survival at 1, 2, and 3 years after TACE. Using these cut points, patients were correctly classified as having died with a sensitivity of 80, 92, and 86% and as still being alive with a specificity of 60, 75, and 83%, respectively. The prognostic accuracy measured by area under the curve (AUC) values ranged from 0.73 to 0.87. PRM may have prognostic value to predict post TACE overall survival in HCC patients.
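    Dichotomising a continuous imaging parameter at a cut-off and reporting sensitivity, specificity, and AUC, as done above, can be sketched as follows. The voxel counts and outcomes below are invented for illustration, not the study's data:

```python
def sens_spec(values, died, cutoff):
    """Sensitivity/specificity of a 'value >= cutoff predicts death'
    rule, as used when dichotomising a PRM parameter."""
    tp = sum(1 for v, d in zip(values, died) if d and v >= cutoff)
    fn = sum(1 for v, d in zip(values, died) if d and v < cutoff)
    tn = sum(1 for v, d in zip(values, died) if not d and v < cutoff)
    fp = sum(1 for v, d in zip(values, died) if not d and v >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def auc(values, died):
    """Rank-based AUC: probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties count 0.5)."""
    pos = [v for v, d in zip(values, died) if d]
    neg = [v for v, d in zip(values, died) if not d]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical hypervascularised-voxel counts and 1-year outcomes
voxels = [5200, 4100, 3900, 3700, 3600, 2900, 2500, 1800, 1200, 900]
death = [True, True, True, True, False, True, False, False, False, False]
se, sp = sens_spec(voxels, death, cutoff=3650)
```

    An optimal cut-off is typically the one maximising some trade-off (e.g. Youden's index, se + sp − 1) across candidate thresholds.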

  18. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.

    2011-03-01

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  19. Least-squares reverse time migration in elastic media

    NASA Astrophysics Data System (ADS)

    Ren, Zhiming; Liu, Yang; Sen, Mrinal K.

    2017-02-01

    Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging the multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. Besides, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parametrizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its antinoise ability. Imaging results determine that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.

  20. Guided-mode resonance nanophotonics in materially sparse architectures

    NASA Astrophysics Data System (ADS)

    Magnusson, Robert; Niraula, Manoj; Yoon, Jae W.; Ko, Yeong H.; Lee, Kyu J.

    2016-03-01

    The guided-mode resonance (GMR) concept refers to lateral quasi-guided waveguide modes induced in periodic layers. Whereas these effects have been known for a long time, new attributes and innovations continue to appear. Here, we review some recent progress in this field with emphasis on sparse, or minimal, device embodiments. We discuss properties of wideband resonant reflectors designed with gratings in which the grating ridges are matched to an identical material to eliminate local reflections and phase changes. This critical interface therefore possesses zero refractive-index contrast; hence we call them "zero-contrast gratings." Applying this architecture, we present single-layer, wideband reflectors that are robust under experimentally realistic parametric variations. We introduce a new class of reflectors and polarizers fashioned with dielectric nanowire grids that are mostly empty space. Computed results predict high reflection and attendant polarization extinction for these sparse lattices. Experimental verification with Si nanowire grids yields ~200-nm-wide band of high reflection for one polarization state and free transmission of the orthogonal state. Finally, we present bandpass filters using all-dielectric resonant gratings. We design, fabricate, and test nanostructured single layer filters exhibiting high efficiency and sub-nanometer-wide passbands surrounded by 100-nm-wide stopbands.

  1. Aeroelastic effects in multirotor vehicles. Part 2: Methods of solution and results illustrating coupled rotor/body aeromechanical stability

    NASA Technical Reports Server (NTRS)

    Venkatesan, C.; Friedmann, P. P.

    1987-01-01

    This report is a sequel to the earlier report titled, Aeroelastic Effects in Multi-Rotor Vehicles with Application to Hybrid Heavy Lift System, Part 1: Formulation of Equations of Motion (NASA CR-3822). The trim and stability equations are presented for a twin rotor system with a buoyant envelope and an underslung load attached to a flexible supporting structure. These equations are specialized for the case of hovering flight. A stability analysis, for such a vehicle with 31 degrees of freedom, yields a total of 62 eigenvalues. A careful parametric study is performed to identify the various blade and vehicle modes, as well as the coupling between various modes. Finally, it is shown that the coupled rotor/vehicle stability analysis provides information on both the aeroelastic stability as well as complete vehicle dynamic stability. Also presented are the results of an analytical study aimed at predicting the aeromechanical stability of a single rotor helicopter in ground resonance. The theoretical results are found to be in good agreement with the experimental results, thereby validating the analytical model for the dynamics of the coupled rotor/support system.

  2. Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part I: numerical scheme

    NASA Astrophysics Data System (ADS)

    Rõõm, Rein; Männik, Aarne; Luhamaa, Andres

    2007-10-01

    A two-time-level, semi-implicit, semi-Lagrangian (SISL) scheme is applied to the non-hydrostatic pressure-coordinate equations, constituting a modified Miller-Pearce-White model, in a hybrid-coordinate framework. The neutral background is subtracted in the initial continuous dynamics, yielding modified equations for the geopotential, temperature, and logarithmic surface pressure fluctuation. Implicit Lagrangian marching formulae for a single time step are derived. A disclosure scheme is presented, which results in an uncoupled diagnostic system consisting of a 3-D Poisson equation for the omega velocity and a 2-D Helmholtz equation for the logarithmic pressure fluctuation. The model is discretized to create a non-hydrostatic extension to the numerical weather prediction model HIRLAM. The discretization schemes, trajectory computation algorithms, and interpolation routines, as well as the physical parametrization package, are maintained from the parent hydrostatic HIRLAM. For the stability investigation, the derived SISL model is linearized with respect to the initial, thermally non-equilibrium resting state. Explicit residuals of the linear model prove to be sensitive to the relative departures of temperature and static stability from the reference state. Based on the stability study, the semi-implicit term in the vertical momentum equation is replaced by an implicit term, which increases the stability of the model.

  3. Texture-based characterization of subskin features by specified laser speckle effects at λ = 650 nm region for more accurate parametric 'skin age' modelling.

    PubMed

    Orun, A B; Seker, H; Uslan, V; Goodyer, E; Smith, G

    2017-06-01

    The textural structure of 'skin age'-related subskin components enables us to identify and analyse their unique characteristics, thus making substantial progress towards establishing an accurate skin age model. This is achieved by a two-stage process. First, textural analysis is applied using laser speckle imaging, which is sensitive to textural effects within the λ = 650 nm spectral band region. In the second stage, a Bayesian inference method is used to select attributes from which a predictive model is built. This technique enables us to contrast different skin age models, such as the laser speckle effect against the more widely used normal light (LED) imaging method, whereby it is shown that our laser speckle-based technique yields better results. The method introduced here is non-invasive, low cost, and capable of operating in real time, and it has the potential to compete against high-cost instrumentation such as confocal microscopy or similar imaging devices used for skin age identification purposes. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  4. SOLAR MODULATION OF THE LOCAL INTERSTELLAR SPECTRUM WITH VOYAGER 1, AMS-02, PAMELA, AND BESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corti, C.; Bindi, V.; Consolandi, C.

    In recent years, the increasing precision of direct cosmic ray measurements opened the door to high-sensitivity indirect searches for dark matter and to more accurate predictions of the radiation doses received by astronauts and electronics in space. The key ingredients in the study of these phenomena are the knowledge of the local interstellar spectrum (LIS) of galactic cosmic rays and the understanding of how solar modulation affects the LIS inside the heliosphere. Voyager 1, AMS-02, PAMELA, and BESS measurements of proton and helium fluxes provide valuable information, allowing us to shed light on the shape of the LIS and the details of the solar modulation during solar cycles 22-24. A new parametrization of the LIS is presented, based on the latest data from Voyager 1 and AMS-02. Using the framework of the force-field approximation, the solar modulation parameter is extracted from the time-dependent fluxes measured by PAMELA and BESS. A modified version of the force-field approximation with a rigidity-dependent modulation parameter is introduced, yielding better fits than the standard force-field approximation. The results are compared with the modulation parameter inferred from neutron monitors.
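
    The force-field approximation mentioned above reduces solar modulation to a single parameter φ: the flux at Earth at kinetic energy E is the LIS flux evaluated at E + φ, rescaled by a ratio of relativistic momentum terms. A minimal numerical sketch of this shift-and-scale rule; the power-law LIS below is a hypothetical stand-in, not the paper's fitted parametrization:

```python
import numpy as np

M = 0.938  # proton rest energy [GeV]

def lis_flux(E):
    """Hypothetical power-law stand-in for the local interstellar proton
    spectrum J_LIS(E); the paper fits its own parametrization to
    Voyager 1 and AMS-02 data."""
    return 2.7e3 * E ** -2.7 / (1.0 + (0.5 / E) ** 2)

def force_field(E, phi):
    """Force-field approximation: the modulated flux at kinetic energy E
    is the LIS flux at E + phi, scaled by the ratio of E(E + 2M) terms
    (phi is the modulation parameter, in GV for protons)."""
    E_is = E + phi
    return lis_flux(E_is) * E * (E + 2 * M) / (E_is * (E_is + 2 * M))

E = np.logspace(-1, 2, 50)          # kinetic energy, 0.1-100 GeV
quiet = force_field(E, 0.4)         # solar minimum-like modulation
active = force_field(E, 1.2)        # solar maximum-like modulation

# Modulation suppresses the low-energy flux strongly and leaves the
# high-energy flux nearly untouched.
assert active[0] < quiet[0]
assert abs(active[-1] / quiet[-1] - 1.0) < 0.05
```

    A rigidity-dependent φ, as introduced in the paper, would replace the constant shift with a function of particle rigidity.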

  5. Yield performance and stability of CMS-based triticale hybrids.

    PubMed

    Mühleisen, Jonathan; Piepho, Hans-Peter; Maurer, Hans Peter; Reif, Jochen Christoph

    2015-02-01

    CMS-based triticale hybrids showed only marginal midparent heterosis for grain yield and lower dynamic yield stability compared to inbred lines. Hybrids of triticale (×Triticosecale Wittmack) are expected to possess outstanding yield performance and increased dynamic yield stability. The objectives of the present study were to (1) examine the optimum choice of the biometrical model to compare yield stability of hybrids versus lines, (2) investigate whether hybrids exhibit a more pronounced grain yield performance and yield stability, and (3) study optimal strategies to predict yield stability of hybrids. Thirteen female and seven male parental lines and their 91 factorial hybrids as well as 30 commercial lines were evaluated for grain yield in up to 20 environments. Hybrids were produced using a cytoplasmic male sterility (CMS)-inducing cytoplasm that originated from Triticum timopheevii Zhuk. We found that the choice of the biometrical model can cause contrasting results and concluded that a group-by-environment interaction term should be added to the model when estimating the stability variance of hybrids and lines. Midparent heterosis for grain yield was on average 3 %, with a range from -15.0 to 11.5 %. No hybrid outperformed the best inbred line. Hybrids had, on average, lower dynamic yield stability compared to the inbred lines. Grain yield performance of hybrids could be predicted based on midparent values and general combining ability (GCA)-predicted values. In contrast, the stability variance of hybrids could be predicted only based on GCA-predicted values. We speculate that negative effects of the CMS cytoplasm used might be the reason for the low performance and yield stability of the hybrids. A detailed study of the reasons for the drawbacks of the currently existing CMS system in triticale is therefore urgently required, including a search for potential alternative hybridization systems.

  6. Self-consistent projection operator theory in nonlinear quantum optical systems: A case study on degenerate optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Degenfeld-Schonburg, Peter; Navarrete-Benlloch, Carlos; Hartmann, Michael J.

    2015-05-01

    Nonlinear quantum optical systems are of paramount relevance for modern quantum technologies, as well as for the study of dissipative phase transitions. Their nonlinear nature makes their theoretical study very challenging and hence they have always served as great motivation to develop new techniques for the analysis of open quantum systems. We apply the recently developed self-consistent projection operator theory to the degenerate optical parametric oscillator to exemplify its general applicability to quantum optical systems. We show that this theory provides an efficient method to calculate the full quantum state of each mode with a high degree of accuracy, even at the critical point. It is equally successful in describing both the stationary limit and the dynamics, including regions of the parameter space where the numerical integration of the full problem is significantly less efficient. We further develop a Gaussian approach consistent with our theory, which yields appreciably better results than the previous Gaussian methods developed for this system, most notably standard linearization techniques.

  7. A parametric study of single-wall carbon nanotube growth by laser ablation

    NASA Technical Reports Server (NTRS)

    Arepalli, Sivaram; Holmes, William A.; Nikolaev, Pavel; Hadjiev, Victor G.; Scott, Carl D.

    2004-01-01

    Results of a parametric study of carbon nanotube production by the double-pulse laser oven process are presented. The effect of various operating parameters on the production of single-wall carbon nanotubes (SWCNTs) is estimated by characterizing the nanotube material using analytical techniques, including scanning electron microscopy, transmission electron microscopy, thermogravimetric analysis, and Raman spectroscopy. The study included changing the sequence of the laser pulses, laser energy, pulse separation, type of buffer gas used, operating pressure, flow rate, inner tube diameter and material, and oven temperature. It was found that the material quality and quantity improve with deviation from the normal operating parameters, such as laser energy density higher than 1.5 J/cm2, pressure lower than 67 kPa, and flow rates higher than 100 sccm. Use of helium produced mainly small-diameter tubes and a lower yield. The diameter of SWCNTs decreases with decreasing oven temperature and lower flow rates.

  8. What generates Callisto's atmosphere? - Indications from calculations of ionospheric electron densities and airglow

    NASA Astrophysics Data System (ADS)

    Hartkorn, O. A.; Saur, J.; Strobel, D. F.

    2016-12-01

    Callisto's atmosphere has been probed by the Galileo spacecraft and the Hubble Space Telescope (HST) and is expected to be composed of O2 with minor components CO2 and H2O. We use an ionosphere model coupled with a parametrized atmosphere model to calculate ionospheric electron densities and airglow. By varying a prescribed neutral atmosphere and comparing the model results to Galileo radio occultation and HST-Cosmic Origins Spectrograph observations, we find that Callisto's atmosphere likely possesses a day/night asymmetry driven by solar illumination. We see two possible explanations for this asymmetry: 1) If sublimation dominates the atmosphere formation, a day/night asymmetry will be generated since the sublimation production rate is naturally much stronger on the day side than on the night side. 2) If surface sputtering dominates the atmosphere formation, a day/night asymmetry is likely generated as well, since the sputtering yield increases with increasing surface temperature and, therefore, with decreasing solar zenith angle. The main difference between the two processes is that surface sputtering, in contrast to sublimation, is also a function of Callisto's orbital position, since sputtering projectiles predominantly co-rotate with the Jovian magnetosphere. On this basis, we develop a method that can discriminate between the two explanations by comparing airglow observations at different orbital positions with airglow predictions. Our predictions are based on our ionosphere model and an orbital-position-dependent atmosphere model originally developed for the O2 atmosphere of Europa by Plainaki et al. (2013).

  9. Negative impacts of climate change on cereal yields: statistical evidence from France

    NASA Astrophysics Data System (ADS)

    Gammans, Matthew; Mérel, Pierre; Ortiz-Bobea, Ariel

    2017-05-01

    In several world regions, climate change is predicted to negatively affect crop productivity. The recent statistical yield literature emphasizes the importance of flexibly accounting for the distribution of growing-season temperature to better represent the effects of warming on crop yields. We estimate a flexible statistical yield model using a long panel from France to investigate the impacts of temperature and precipitation changes on wheat and barley yields. Winter varieties appear sensitive to extreme cold after planting. All yields respond negatively to an increase in spring-summer temperatures and are a decreasing function of precipitation around historical precipitation levels. Crop yields are predicted to be negatively affected by climate change under a wide range of climate models and emissions scenarios. Under warming scenario RCP8.5 and holding growing areas and technology constant, our model ensemble predicts a 21.0% decline in winter wheat yield, a 17.3% decline in winter barley yield, and a 33.6% decline in spring barley yield by the end of the century. Uncertainty from climate projections dominates uncertainty from the statistical model. Finally, our model predicts that continuing technology trends would counterbalance most of the effects of climate change.
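
    "Flexibly accounting for the distribution of growing-season temperature" in this literature typically means regressing (log) yield on the time spent in each temperature bin, alongside a quadratic in precipitation. A sketch on synthetic data; all coefficients, bin edges, and ranges below are invented for illustration, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: growing-season hours of exposure in three temperature
# bins, plus seasonal precipitation, for each observation.
n = 500
exposure = rng.uniform(0, 300, size=(n, 3))   # hours in <10, 10-30, >30 C bins
precip = rng.uniform(300, 900, size=n)        # seasonal precipitation [mm]

# Assumed "true" marginal effects: moderate warmth helps, extreme heat
# hurts, and precipitation has an interior optimum (inverted U).
beta_true = np.array([-0.0002, 0.0004, -0.0015])
log_yield = (exposure @ beta_true
             + 0.004 * precip - 3.3e-6 * precip ** 2
             + rng.normal(0, 0.02, n))

# Flexible specification: one coefficient per temperature bin plus a
# quadratic precipitation response, estimated by least squares.
X = np.column_stack([exposure, precip, precip ** 2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, log_yield, rcond=None)

# Recovered signs match the data-generating process: extreme-heat hours
# reduce yield, and the fitted precipitation response is concave.
assert coef[2] < 0
assert coef[4] < 0
```

    Climate-change impacts are then projected by feeding the fitted coefficients the shifted exposure distribution implied by a warming scenario.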

  10. How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?

    NASA Astrophysics Data System (ADS)

    Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.

    2013-12-01

    From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain the model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using the example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include the fractions of the total surface site density belonging to the two sites and the surface complex formation constants of the three reactions. A total of seven experiments were conducted under different geochemical conditions to estimate these parameters. The experiments with a low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because those experiments cannot distinguish the relative adsorption affinities of the strong and weak sites. Therefore, experiments with a high initial concentration of U(VI) are needed, because in these experiments the strong site is nearly saturated and the weak site can be determined. The experiments with high initial concentration of U(VI) are thus a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The data impacts on uncertainty quantification and reduction are quantified using probability density functions of model parameters obtained from Markov chain Monte Carlo simulation using the DREAM algorithm. This study provides insights into model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
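
    The way experimental design shows up as posterior width can be illustrated with a toy one-parameter sorption model and a random-walk Metropolis sampler. The study itself uses a multi-parameter SCM and the DREAM algorithm, and there it is the high-concentration experiments that are informative (they saturate the strong site); the Langmuir form and all numbers below are illustrative assumptions that demonstrate the identifiability logic, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(logK, c0):
    """Toy Langmuir-style sorption curve (a stand-in for the paper's
    surface complexation model): fraction of solute adsorbed at initial
    concentration c0 when the formation constant is 10**logK."""
    K = 10.0 ** logK
    return K * c0 / (1.0 + K * c0)

def posterior_sd(c0, n_obs=20, sigma=0.02, n_steps=20000):
    """Posterior spread of logK from random-walk Metropolis, given
    synthetic replicate observations at initial concentration c0."""
    data = model(2.0, c0) + rng.normal(0, sigma, n_obs)   # truth: logK = 2
    def loglik(logK):
        return -0.5 * np.sum((data - model(logK, c0)) ** 2) / sigma ** 2
    x = 1.0
    lx = loglik(x)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.normal(0, 0.3)
        lp = loglik(prop)
        if np.log(rng.random()) < lp - lx:   # Metropolis accept/reject
            x, lx = prop, lp
        chain.append(x)
    return float(np.std(chain[5000:]))       # discard burn-in

# Data from the sensitive part of the sorption curve (K*c0 ~ 1) constrain
# logK tightly; data from a saturated regime leave it diffuse.
sd_sensitive = posterior_sd(c0=1e-2)
sd_saturated = posterior_sd(c0=10.0)
assert sd_sensitive < sd_saturated
```

    The posterior standard deviation is the quantity the study tracks to judge whether a given experiment is a "curse" or a "blessing" for a given parameter.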

  11. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.

  12. Predicting fluorescence quantum yield for anisole at elevated temperatures and pressures

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Tran, K. H.; Morin, C.; Bonnety, J.; Legros, G.; Guibert, P.

    2017-07-01

    Aromatic molecules are promising candidates for using as a fluorescent tracer for gas-phase scalar parameter diagnostics in a drastic environment like engines. Along with anisole turning out an excellent temperature tracer by Planar Laser-Induced Fluorescence (PLIF) diagnostics in Rapid Compression Machine (RCM), its fluorescence signal evolution versus pressure and temperature variation in a high-pressure and high-temperature cell have been reported in our recent paper on Applied Phys. B by Tran et al. Parallel to this experimental study, a photophysical model to determine anisole Fluorescence Quantum Yield (FQY) is delivered in this paper. The key to development of the model is the identification of pressure, temperature, and ambient gases, where the FQY is dominated by certain processes of the model (quenching effect, vibrational relaxation, etc.). In addition to optimization of the vibrational relaxation energy cascade coefficient and the collision probability with oxygen, the non-radiative pathways are mainly discussed. The common non-radiative rate (intersystem crossing and internal conversion) is simulated in parametric form as a function of excess vibrational energy, derived from the data acquired at different pressures and temperatures from the literature. A new non-radiative rate, namely, the equivalent Intramolecular Vibrational Redistribution or Randomization (IVR) rate, is proposed to characterize anisole deactivated processes. The new model exhibits satisfactory results which are validated against experimental measurements of fluorescence signal induced at a wavelength of 266 nm in a cell with different bath gases (N2, CO2, Ar and O2), a pressure range from 0.2 to 4 MPa, and a temperature range from 473 to 873 K.

  13. High-energy neutrino fluxes from AGN populations inferred from X-ray surveys

    NASA Astrophysics Data System (ADS)

    Jacobsen, Idunn B.; Wu, Kinwah; On, Alvina Y. L.; Saxton, Curtis J.

    2015-08-01

    High-energy neutrinos and photons are complementary messengers, probing violent astrophysical processes and structural evolution of the Universe. X-ray and neutrino observations jointly constrain conditions in active galactic nuclei (AGN) jets: their baryonic and leptonic contents, and particle production efficiency. Testing two standard neutrino production models for local source Cen A (Koers & Tinyakov and Becker & Biermann), we calculate the high-energy neutrino spectra of single AGN sources and derive the flux of high-energy neutrinos expected for the current epoch. Assuming that accretion determines both X-rays and particle creation, our parametric scaling relations predict neutrino yield in various AGN classes. We derive redshift-dependent number densities of each class, from Chandra and Swift/BAT X-ray luminosity functions (Silverman et al. and Ajello et al.). We integrate the neutrino spectrum expected from the cumulative history of AGN (correcting for cosmological and source effects, e.g. jet orientation and beaming). Both emission scenarios yield neutrino fluxes well above limits set by IceCube (by ˜4-106 × at 1 PeV, depending on the assumed jet models for neutrino production). This implies that: (i) Cen A might not be a typical neutrino source as commonly assumed; (ii) both neutrino production models overestimate the efficiency; (iii) neutrino luminosity scales with accretion power differently among AGN classes and hence does not follow X-ray luminosity universally; (iv) some AGN are neutrino-quiet (e.g. below a power threshold for neutrino production); (v) neutrino and X-ray emission have different duty cycles (e.g. jets alternate between baryonic and leptonic flows); or (vi) some combination of the above.

  14. Applying complex models to poultry production in the future--economics and biology.

    PubMed

    Talpaz, H; Cohen, M; Fancher, B; Halley, J

    2013-09-01

    The ability to determine the optimal broiler feed nutrient density that maximizes margin over feeding cost (MOFC) has obvious economic value. To determine optimal feed nutrient density, one must consider ingredient prices, meat values, the product mix being marketed, and the projected biological performance. A series of 8 feeding trials was conducted to estimate biological responses to changes in ME and amino acid (AA) density. Eight different genotypes of sex-separate reared broilers were fed diets varying in ME (2,723-3,386 kcal of ME/kg) and AA (0.89-1.65% digestible lysine with all essential AA acids being indexed to lysine) levels. Broilers were processed to determine carcass component yield at many different BW (1.09-4.70 kg). Trial data generated were used in model constructed to discover the dietary levels of ME and AA that maximize MOFC on a per broiler or per broiler annualized basis (bird × number of cycles/year). The model was designed to estimate the effects of dietary nutrient concentration on broiler live weight, feed conversion, mortality, and carcass component yield. Estimated coefficients from the step-wise regression process are subsequently used to predict the optimal ME and AA concentrations that maximize MOFC. The effects of changing feed or meat prices across a wide spectrum on optimal ME and AA levels can be evaluated via parametric analysis. The model can rapidly compare both biological and economic implications of changing from current practice to the simulated optimal solution. The model can be exploited to enhance decision making under volatile market conditions.
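
    The optimization the model performs can be sketched as a grid search over fitted response surfaces. All response-surface coefficients and prices below are invented stand-ins for the paper's step-wise regression estimates; only the ME and digestible-lysine ranges come from the trials:

```python
import numpy as np

# Hypothetical fitted response surfaces: live weight and feed conversion
# ratio as quadratics in dietary ME (Mcal/kg) and digestible lysine (%).
def live_weight(me, lys):
    return 2.0 + 0.8 * (me - 2.7) + 1.2 * (lys - 0.9) - 0.9 * (lys - 0.9) ** 2

def fcr(me, lys):   # kg feed per kg live weight
    return 2.1 - 0.5 * (me - 2.7) - 0.4 * (lys - 0.9) + 0.8 * (lys - 0.9) ** 2

def feed_price(me, lys):   # $/kg; nutrient-denser diets cost more
    return 0.25 + 0.10 * (me - 2.7) + 0.30 * (lys - 0.9)

def optimal_diet(meat_price):
    """Grid search for the (ME, lysine) diet maximizing margin over
    feeding cost, MOFC = meat revenue - feed cost, per broiler."""
    me, lys = np.meshgrid(np.linspace(2.723, 3.386, 80),   # trial ME range
                          np.linspace(0.89, 1.65, 80))     # trial Lys range
    w = live_weight(me, lys)
    mofc = meat_price * w - feed_price(me, lys) * fcr(me, lys) * w
    i, j = np.unravel_index(np.argmax(mofc), mofc.shape)
    return me[i, j], lys[i, j]

# Parametric analysis: higher meat prices justify denser (costlier) diets.
me_lo, lys_lo = optimal_diet(1.10)
me_hi, lys_hi = optimal_diet(2.00)
assert lys_hi > lys_lo
```

    Repeating the search over a range of ingredient and meat prices reproduces the kind of parametric analysis the abstract describes.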

  15. Logistic Stick-Breaking Process

    PubMed Central

    Ren, Lu; Du, Lan; Carin, Lawrence; Dunson, David B.

    2013-01-01

    A logistic stick-breaking process (LSBP) is proposed for non-parametric clustering of general spatially- or temporally-dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries. PMID:25258593
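
    The core construction can be sketched directly: each stick breaks off a covariate-dependent fraction of the remaining probability via a logistic function, so component weights vary smoothly in space and steep logistics produce sharp segment boundaries. The linear predictors below are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lsbp_weights(x, coefs):
    """Stick-breaking weights at locations x from K-1 logistic sticks.
    Stick k breaks off a fraction sigmoid(a_k + b_k * x) of whatever
    probability remains; the last component absorbs the leftover."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    remaining = np.ones_like(x)
    weights = []
    for a, b in coefs:                   # hypothetical linear predictors
        v = sigmoid(a + b * x)
        weights.append(remaining * v)
        remaining = remaining * (1.0 - v)
    weights.append(remaining)
    return np.stack(weights, axis=-1)    # shape (len(x), K)

# Two sticks -> three components; steep logistics give spatially
# contiguous segments with sharp boundaries.
x = np.linspace(-3, 3, 7)
W = lsbp_weights(x, [(-8.0, 8.0), (4.0, 4.0)])
assert np.allclose(W.sum(axis=-1), 1.0)     # weights form a distribution
assert W[0].argmax() != W[-1].argmax()      # different components dominate
```

    In the full LSBP the linear predictors are themselves given shrinkage priors and learned by variational inference; this sketch only evaluates the weight construction.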

  16. Cumulative toxicity of neonicotinoid insecticide mixtures to Chironomus dilutus under acute exposure scenarios.

    PubMed

    Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten

    2017-11-01

    Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. 
Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.
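
    The concentration-addition reference model the mixtures were tested against can be stated in a few lines: scale each component concentration by its LC50 into "toxic units" and predict 50% mortality where the toxic units sum to one (the LC50 values below are those reported in the abstract):

```python
# Concentration addition: with a shared mode of action, component doses
# are treated as interchangeable after scaling by potency, so a mixture
# is predicted to reach 50% mortality when its toxic units sum to 1.
lc50 = {"imidacloprid": 4.63, "clothianidin": 5.93, "thiamethoxam": 55.34}  # ug/L

def toxic_units(conc):
    """Sum of toxic units: conc_i / LC50_i over the mixture components."""
    return sum(conc[k] / lc50[k] for k in conc)

# An equitoxic binary mixture at half of each component's LC50:
mix = {"imidacloprid": 4.63 / 2, "clothianidin": 5.93 / 2}
tu = toxic_units(mix)
assert abs(tu - 1.0) < 1e-9   # predicted to sit exactly at the mixture LC50
```

    The study's finding of synergism means the observed mixtures produced 50% mortality at toxic-unit sums below 1, i.e., the concentration-addition prediction underestimated their joint toxicity.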

  17. Cognitive control over learning: Creating, clustering and generalizing task-set structure

    PubMed Central

    Collins, Anne G.E.; Frank, Michael J.

    2013-01-01

    Executive functions and learning share common neural substrates essential for their expression, notably in prefrontal cortex and basal ganglia. Understanding how they interact requires studying how cognitive control facilitates learning, but also how learning provides the (potentially hidden) structure, such as abstract rules or task-sets, needed for cognitive control. We investigate this question from three complementary angles. First, we develop a new computational “C-TS” (context-task-set) model inspired by non-parametric Bayesian methods, specifying how the learner might infer hidden structure and decide whether to re-use that structure in new situations, or to create new structure. Second, we develop a neurobiologically explicit model to assess potential mechanisms of such interactive structured learning in multiple circuits linking frontal cortex and basal ganglia. We systematically explore the link between these levels of modeling across multiple task demands. We find that the network provides an approximate implementation of high level C-TS computations, where manipulations of specific neural mechanisms are well captured by variations in distinct C-TS parameters. Third, this synergism across models yields strong predictions about the nature of human optimal and suboptimal choices and response times during learning. In particular, the models suggest that participants spontaneously build task-set structure into a learning problem when not cued to do so, which predicts positive and negative transfer in subsequent generalization tests. We provide evidence for these predictions in two experiments and show that the C-TS model provides a good quantitative fit to human sequences of choices in this task. These findings implicate a strong tendency to interactively engage cognitive control and learning, resulting in structured abstract representations that afford generalization opportunities, and thus potentially long-term rather than short-term optimality. 
PMID:23356780

  18. Yield of undamaged slash pine stands in South Florida

    Treesearch

    O. Gordon Langdon

    1961-01-01

    Predictions of future timber yields are necessary for formulating management plans and for comparing timber growing with alternative land uses. One useful tool for making these predictions is a set of yield tables.

  19. Stimulated Parametric Decay of Large Amplitude Alfvén waves in the Large Plasma Device (LaPD)

    NASA Astrophysics Data System (ADS)

    Dorfman, S. E.; Carter, T.; Pribyl, P.; Tripathi, S.; Van Compernolle, B.; Vincena, S. T.

    2012-12-01

    Alfvén waves, a fundamental mode of magnetized plasmas, are ubiquitous in the lab and in space. While the linear behaviour of these waves has been extensively studied [1], non-linear effects are important in many real systems, including the solar wind and solar corona. In particular, a parametric decay process in which a large amplitude Alfvén wave decays into an ion acoustic wave and a backward-propagating Alfvén wave may be key to the spectrum of solar wind turbulence. Ion acoustic waves have been observed in the heliosphere, but their origin and role have not yet been determined [2]. Such waves produced by parametric decay in the corona could contribute to coronal heating [3]. Parametric decay has also been suggested as an intermediate instability mediating the observed turbulent cascade of Alfvén waves to small spatial scales [4]. The present laboratory experiments aim to stimulate the parametric decay process by launching counter-propagating Alfvén waves from antennas placed at either end of the Large Plasma Device (LaPD). The resulting beat response has a dispersion relation consistent with an ion acoustic wave. Also consistent with a stimulated decay process: 1) The beat amplitude peaks when the frequency difference between the two Alfvén waves is near the value predicted by Alfvén-ion acoustic wave coupling. 2) This peak beat frequency scales with antenna and plasma parameters as predicted by three-wave matching. 3) The beat amplitude peaks at the same location as the magnetic field from the Alfvén waves. 4) The beat wave is carried by the ions and propagates in the direction of the higher-frequency Alfvén wave. Strong damping observed after the pump Alfvén waves are turned off and observed heating of the plasma by the Alfvén waves are under investigation. [1] W. Gekelman, J. Geophys. Res., 104:14417-14436, July 1999. [2] A. Mangeney et al., Annales Geophysicae, Volume 17, Number 3 (1999). [3] F. Pruneti and M. Velli, ESA Spec. Pub. 404, 623 (1997). [4] P. Yoon and T. Fang, Plasma Phys. Control. Fusion 50 (2008). This work was performed at UCLA's Basic Plasma Science Facility, which is jointly supported by the U.S. DoE and NSF.

  20. Parametric Instability, Inverse Cascade, and the 1/f Range of Solar-Wind Turbulence.

    PubMed

    Chandran, Benjamin D G

    2018-02-01

    In this paper, weak turbulence theory is used to investigate the nonlinear evolution of the parametric instability in 3D low-β plasmas at wavelengths much greater than the ion inertial length under the assumption that slow magnetosonic waves are strongly damped. It is shown analytically that the parametric instability leads to an inverse cascade of Alfvén wave quanta, and several exact solutions to the wave kinetic equations are presented. The main results of the paper concern the parametric decay of Alfvén waves that initially satisfy e^+ ≫ e^-, where e^+ and e^- are the frequency (f) spectra of Alfvén waves propagating in opposite directions along the magnetic field lines. If e^+ initially has a peak frequency f_0 (at which f e^+ is maximized) and an "infrared" scaling f^p at smaller f with -1 < p < 1, then e^+ acquires an f^-1 scaling throughout a range of frequencies that spreads out in both directions from f_0. At the same time, e^- acquires an f^-2 scaling within this same frequency range. If the plasma parameters and infrared e^+ spectrum are chosen to match conditions in the fast solar wind at a heliocentric distance of 0.3 astronomical units (AU), then the nonlinear evolution of the parametric instability leads to an e^+ spectrum that matches fast-wind measurements from the Helios spacecraft at 0.3 AU, including the observed f^-1 scaling at f ≳ 3 × 10^-4 Hz. The results of this paper suggest that the f^-1 spectrum seen by Helios in the fast solar wind at f ≳ 3 × 10^-4 Hz is produced in situ by parametric decay and that the f^-1 range of e^+ extends over an increasingly narrow range of frequencies as r decreases below 0.3 AU. This prediction will be tested by measurements from the Parker Solar Probe.

  1. Differential diagnosis of normal pressure hydrocephalus by MRI mean diffusivity histogram analysis.

    PubMed

    Ivkovic, M; Liu, B; Ahmed, F; Moore, D; Huang, C; Raj, A; Kovanlikaya, I; Heier, L; Relkin, N

    2013-01-01

    Accurate diagnosis of normal pressure hydrocephalus is challenging because the clinical symptoms and radiographic appearance of NPH often overlap those of other conditions, including age-related neurodegenerative disorders such as Alzheimer and Parkinson diseases. We hypothesized that radiologic differences between NPH and AD/PD can be characterized by a robust and objective MR imaging DTI technique that does not require intersubject image registration or operator-defined regions of interest, thus avoiding many pitfalls common in DTI methods. We collected 3T DTI data from 15 patients with probable NPH and 25 controls with AD, PD, or dementia with Lewy bodies. We developed a parametric model for the shape of intracranial mean diffusivity histograms that separates brain and ventricular components from a third component composed mostly of partial volume voxels. To accurately fit the shape of the third component, we constructed a parametric function named the generalized Voss-Dyke function. We then examined the use of the fitting parameters for the differential diagnosis of NPH from AD, PD, and DLB. Using parameters for the MD histogram shape, we distinguished clinically probable NPH from the 3 other disorders with 86% sensitivity and 96% specificity. The technique yielded 86% sensitivity and 88% specificity when differentiating NPH from AD only. An adequate parametric model for the shape of intracranial MD histograms can distinguish NPH from AD, PD, or DLB with high sensitivity and specificity.

  2. Exploiting the spatial locality of electron correlation within the parametric two-electron reduced-density-matrix method

    NASA Astrophysics Data System (ADS)

    DePrince, A. Eugene; Mazziotti, David A.

    2010-01-01

    The parametric variational two-electron reduced-density-matrix (2-RDM) method is applied to computing electronic correlation energies of medium-to-large molecular systems by exploiting the spatial locality of electron correlation within the framework of the cluster-in-molecule (CIM) approximation [S. Li et al., J. Comput. Chem. 23, 238 (2002); J. Chem. Phys. 125, 074109 (2006)]. The 2-RDMs of individual molecular fragments within a molecule are determined, and selected portions of these 2-RDMs are recombined to yield an accurate approximation to the correlation energy of the entire molecule. In addition to extending CIM to the parametric 2-RDM method, we (i) suggest a more systematic selection of atomic-orbital domains than that presented in previous CIM studies and (ii) generalize the CIM method for open-shell quantum systems. The resulting method is tested with a series of polyacetylene molecules, water clusters, and diazobenzene derivatives in minimal and nonminimal basis sets. Calculations show that the computational cost of the method scales linearly with system size. We also compute hydrogen-abstraction energies for a series of hydroxyurea derivatives. Abstraction of hydrogen from hydroxyurea is thought to be a key step in its treatment of sickle cell anemia; the design of hydroxyurea derivatives that oxidize more rapidly is one approach to devising more effective treatments.

  3. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    PubMed

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis, and its optimal cut-off to obtain a DP was assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; donors were men on 271 (89.7%) occasions and women on 31 (10.3%). Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Mean software-derived values differed significantly from actual PLT yields, 4.72 × 10¹¹ vs. 6.12 × 10¹¹, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical platelet yield). The ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10¹¹ to obtain a DP, with a sensitivity of 82.2%, specificity of 93.3%, and an area under the curve (AUC) of 0.909. Trima Accel v6.0 software consistently underestimated PLT yields. A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
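    The correction equation and DP cut-off reported above can be applied directly; a minimal sketch (with platelet yields in units of 10¹¹ PLT, and the 4.72 example value taken from the abstract):

```python
# Sketch of the reported correction (actual = 0.221 + 1.254 * predicted)
# and the ROC-derived double-product (DP) screening cutoff of 4.65e11.
# Yields below are expressed in units of 1e11 platelets.

def correct_yield(predicted):
    """Adjust the device software prediction using the published regression."""
    return 0.221 + 1.254 * predicted

def classify_dp(predicted, cutoff=4.65):
    """Flag a collection as a likely double product at the ROC cutoff."""
    return predicted >= cutoff

# A software prediction of 4.72e11 maps to a corrected estimate of about
# 6.14e11, close to the mean actual yield reported in the study.
corrected = correct_yield(4.72)
is_dp = classify_dp(4.72)
```

The correction only rescales the software output; the cut-off is applied to the raw prediction, as in the study's ROC analysis.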

  4. Crop status evaluations and yield predictions

    NASA Technical Reports Server (NTRS)

    Haun, J. R.

    1976-01-01

    One phase of the large area crop inventory project is presented. Wheat yield models based on the input of environmental variables potentially obtainable through the use of space remote sensing were developed and demonstrated. By the use of a unique method for visually quantifying daily plant development and subsequent multifactor computer analyses, it was possible to develop practical models for predicting crop development and yield. Development of the wheat yield prediction models was based on the finding that morphological changes in plants can be detected and quantified on a daily basis, and that this change during a portion of the season was proportional to yield.

  5. e-Cow: an animal model that predicts herbage intake, milk yield and live weight change in dairy cows grazing temperate pastures, with and without supplementary feeding.

    PubMed

    Baudracco, J; Lopez-Villalobos, N; Holmes, C W; Comeron, E A; Macdonald, K A; Barry, T N; Friggens, N C

    2012-06-01

    This animal simulation model, named e-Cow, represents a single dairy cow at grazing. The model integrates algorithms from three previously published models: a model that predicts herbage dry matter (DM) intake by grazing dairy cows, a mammary gland model that predicts potential milk yield and a body lipid model that predicts genetically driven live weight (LW) and body condition score (BCS). Both nutritional and genetic drives are accounted for in the prediction of energy intake and its partitioning. The main inputs are herbage allowance (HA; kg DM offered/cow per day), metabolisable energy and NDF concentrations in herbage and supplements, supplements offered (kg DM/cow per day), type of pasture (ryegrass or lucerne), days in milk, days pregnant, lactation number, BCS and LW at calving, breed or strain of cow and genetic merit, that is, potential yields of milk, fat and protein. Separate equations are used to predict herbage intake, depending on the cutting heights at which HA is expressed. The e-Cow model is written in Visual Basic programming language within Microsoft Excel®. The model predicts whole-lactation performance of dairy cows on a daily basis, and the main outputs are the daily and annual DM intake, milk yield and changes in BCS and LW. In the e-Cow model, neither herbage DM intake nor milk yield or LW change are needed as inputs; instead, they are predicted by the e-Cow model. The e-Cow model was validated against experimental data for Holstein-Friesian cows with both North American (NA) and New Zealand (NZ) genetics grazing ryegrass-based pastures, with or without supplementary feeding and for three complete lactations, divided into weekly periods. The model was able to predict animal performance with satisfactory accuracy, with concordance correlation coefficients of 0.81, 0.76 and 0.62 for herbage DM intake, milk yield and LW change, respectively. 
Simulations performed with the model showed that it is sensitive to genotype by feeding environment interactions. The e-Cow model tended to overestimate the milk yield of NA genotype cows at low milk yields, while it underestimated the milk yield of NZ genotype cows at high milk yields. The approach used to define the potential milk yield of the cow and equations used to predict herbage DM intake make the model applicable for predictions in countries with temperate pastures.
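    The validation statistic quoted above (0.81, 0.76 and 0.62) is the concordance correlation coefficient, which penalizes both poor correlation and systematic bias. A minimal numpy sketch of Lin's CCC, with toy predicted/observed intake vectors (not the e-Cow validation data):

```python
# Lin's concordance correlation coefficient (CCC): measures agreement
# between observed and predicted values; 1 means perfect concordance.
import numpy as np

def lin_ccc(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mo, mp = obs.mean(), pred.mean()
    vo, vp = obs.var(), pred.var()
    cov = ((obs - mo) * (pred - mp)).mean()
    # 2*cov / (var_obs + var_pred + squared mean difference)
    return 2 * cov / (vo + vp + (mo - mp) ** 2)

# toy daily DM intake values (kg/cow per day), purely illustrative
obs = [15.2, 16.1, 14.8, 17.0, 15.5]
pred = [15.0, 16.4, 14.5, 16.8, 15.9]
ccc = lin_ccc(obs, pred)   # a value near 1 indicates close agreement
```

Unlike Pearson's r, the CCC drops if predictions are shifted or rescaled relative to observations, which is why it is a common choice for model validation.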

  6. Reliability and Maintainability model (RAM) user and maintenance manual. Part 2

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles E.

    1995-01-01

    This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.

  7. Parameter Estimation with Entangled Photons Produced by Parametric Down-Conversion

    NASA Technical Reports Server (NTRS)

    Cable, Hugo; Durkin, Gabriel A.

    2010-01-01

    We explore the advantages offered by twin light beams produced in parametric down-conversion for precision measurement. The symmetry of these bipartite quantum states, even under losses, suggests that monitoring correlations between the divergent beams permits a high-precision inference of any symmetry-breaking effect, e.g., fiber birefringence. We show that the quantity of entanglement is not the key feature for such an instrument. In a lossless setting, scaling of precision at the ultimate "Heisenberg" limit is possible with photon counting alone. Even as photon losses approach 100% the precision is shot-noise limited, and we identify the crossover point between quantum and classical precision as a function of detected flux. The predicted hypersensitivity is demonstrated with a Bayesian simulation.

  8. Parameter estimation with entangled photons produced by parametric down-conversion.

    PubMed

    Cable, Hugo; Durkin, Gabriel A

    2010-07-02

    We explore the advantages offered by twin light beams produced in parametric down-conversion for precision measurement. The symmetry of these bipartite quantum states, even under losses, suggests that monitoring correlations between the divergent beams permits a high-precision inference of any symmetry-breaking effect, e.g., fiber birefringence. We show that the quantity of entanglement is not the key feature for such an instrument. In a lossless setting, scaling of precision at the ultimate "Heisenberg" limit is possible with photon counting alone. Even as photon losses approach 100% the precision is shot-noise limited, and we identify the crossover point between quantum and classical precision as a function of detected flux. The predicted hypersensitivity is demonstrated with a Bayesian simulation.

  9. Distribution of polarization-entangled photon-pairs produced via spontaneous parametric down-conversion within a local-area fiber network: theoretical model and experiment.

    PubMed

    Lim, Han Chuen; Yoshizawa, Akio; Tsuchida, Hidemi; Kikuchi, Kazuro

    2008-09-15

    We present a theoretical model for the distribution of polarization-entangled photon-pairs produced via spontaneous parametric down-conversion within a local-area fiber network. This model allows an entanglement distributor who plays the role of a service provider to determine the photon-pair generation rate giving highest two-photon interference fringe visibility for any pair of users, when given user-specific parameters. Usefulness of this model is illustrated in an example and confirmed in an experiment, where polarization-entangled photon-pairs are distributed over 82 km and 132 km of dispersion-managed optical fiber. Experimentally observed visibilities and entanglement fidelities are in good agreement with theoretically predicted values.

  10. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghamousa, Amir; Shafieloo, Arman; Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.

  11. Parametric Investigation of Liquid Jets in Low Gravity

    NASA Technical Reports Server (NTRS)

    Chato, David J.

    2005-01-01

    An axisymmetric phase field model is developed and used to model surface tension forces on liquid jets in microgravity. The previous work in this area is reviewed and a baseline drop tower experiment selected for model comparison. This paper uses the model to parametrically investigate the influence of key parameters on the geysers formed by jets in microgravity. Investigation of the contact angle showed the expected trend of increasing contact angle increasing geyser height. Investigation of the tank radius showed some interesting effects and demonstrated the zone of free surface deformation is quite large. Variation of the surface tension with a laminar jet showed clearly the evolution of free surface shape with Weber number. It predicted a breakthrough Weber number of 1.

  12. On kinetic modelling for solar redox thermochemical H2O and CO2 splitting over NiFe2O4 for H2, CO and syngas production.

    PubMed

    Dimitrakis, Dimitrios A; Syrigou, Maria; Lorentzou, Souzana; Kostoglou, Margaritis; Konstandopoulos, Athanasios G

    2017-10-11

    This study aims at developing a kinetic model that can adequately describe solar thermochemical water and carbon dioxide splitting with nickel ferrite powder as the active redox material. The kinetic parameters of water splitting of a previous study are revised to include transition times and new kinetic parameters for carbon dioxide splitting are developed. The computational results show a satisfactory agreement with experimental data and continuous multicycle operation under varying operating conditions is simulated. Different test cases are explored in order to improve the product yield. At first a parametric analysis is conducted, investigating the appropriate duration of the oxidation and the thermal reduction step that maximizes the hydrogen yield. Subsequently, a non-isothermal oxidation step is simulated and proven as an interesting option for increasing the hydrogen production. The kinetic model is adapted to simulate the production yields in structured solar reactor components, i.e. extruded monolithic structures, as well.

  13. Assessment of bioethanol yield by S. cerevisiae grown on oil palm residues: Monte Carlo simulation and sensitivity analysis.

    PubMed

    Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah

    2015-01-01

    Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with addition of palm oil mill effluent (POME) as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based medium. However, OPT sap and POME are heterogeneous in properties, and fermentation performance might change if it is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was then assessed using Monte Carlo simulation (stochastic variables) to determine probability distributions due to fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. Sensitivity analysis was also done to evaluate the impact of each kinetic parameter on fermentation performance. It was found that bioethanol fermentation depends strongly on the growth of the tested yeast. Copyright © 2014 Elsevier Ltd. All rights reserved.
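    The Monte Carlo idea is to perturb the kinetic parameters within assumed distributions and propagate each draw through the model to get a yield distribution. A hypothetical sketch with a toy yield expression (the mapping, parameter means and spreads below are illustrative assumptions, not the paper's kinetic model):

```python
# Monte Carlo propagation of parametric uncertainty: sample kinetic
# parameters, push each sample through a yield model, and summarize
# the resulting distribution of Y_P/S.
import random

random.seed(1)

def simulate_yield(mu_max, yxs):
    # toy mapping from kinetic parameters to product yield, scaled so
    # that the nominal parameters reproduce the reported 0.464 g/g
    return 0.464 * (mu_max / 0.30) * (yxs / 0.50)

samples = []
for _ in range(10000):
    mu_max = random.gauss(0.30, 0.01)   # 1/h; assumed mean and spread
    yxs = random.gauss(0.50, 0.02)      # g biomass/g glucose; assumed
    samples.append(simulate_yield(mu_max, yxs))

samples.sort()
lo, hi = samples[250], samples[-250]    # approximate central 95% interval
```

The study's reported 0.423-0.501 g/g range is exactly this kind of output: an interval on the yield induced by uncertainty in the fitted kinetic parameters.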

  14. Nonlinear dynamic analysis of cantilevered piezoelectric energy harvesters under simultaneous parametric and external excitations

    NASA Astrophysics Data System (ADS)

    Fang, Fei; Xia, Guanghui; Wang, Jianguo

    2018-02-01

    The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and assumed as an Euler-Bernoulli model with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter model for predicting the performance of the energy harvesting system. Only in the case of parametric excitation, we demonstrate that the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect the initiation excitation threshold.

  15. Theory of parametrically amplified electron-phonon superconductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babadi, Mehrtash; Knap, Michael; Martin, Ivar

    2017-07-01

    Ultrafast optical manipulation of ordered phases in strongly correlated materials is a topic of significant theoretical, experimental, and technological interest. Inspired by a recent experiment on light-induced superconductivity in fullerenes [M. Mitrano et al., Nature (London) 530, 461 (2016)], we develop a comprehensive theory of light-induced superconductivity in driven electron-phonon systems with lattice nonlinearities. In analogy with the operation of parametric amplifiers, we show how the interplay between the external drive and lattice nonlinearities leads to significantly enhanced effective electron-phonon couplings. We provide a detailed and unbiased study of the nonequilibrium dynamics of the driven system using the real-time Green's function technique. To this end, we develop a Floquet generalization of the Migdal-Eliashberg theory and derive a numerically tractable set of quantum Floquet-Boltzmann kinetic equations for the coupled electron-phonon system. We study the role of parametric phonon generation and electronic heating in destroying the transient superconducting state. Finally, we predict the transient formation of electronic Floquet bands in time- and angle-resolved photoemission spectroscopy experiments as a consequence of the proposed mechanism.

  16. MEASURING DARK MATTER PROFILES NON-PARAMETRICALLY IN DWARF SPHEROIDALS: AN APPLICATION TO DRACO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.

    2013-02-15

    We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 ≤ r ≤ 700 pc. The profile for r ≥ 20 pc is well fit by a power law with slope α = -1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.

  17. Nonlinear dynamic analysis of cantilevered piezoelectric energy harvesters under simultaneous parametric and external excitations

    NASA Astrophysics Data System (ADS)

    Fang, Fei; Xia, Guanghui; Wang, Jianguo

    2018-06-01

    The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and assumed as an Euler-Bernoulli model with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter model for predicting the performance of the energy harvesting system. Only in the case of parametric excitation, we demonstrate that the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect the initiation excitation threshold.

  18. Modeling and Simulation of a Parametrically Resonant Micromirror With Duty-Cycled Excitation.

    PubMed

    Shahid, Wajiha; Qiu, Zhen; Duan, Xiyu; Li, Haijun; Wang, Thomas D; Oldham, Kenn R

    2014-12-01

    High-frequency, large-scanning-angle, electrostatically actuated microelectromechanical systems (MEMS) mirrors are used in a variety of applications involving fast optical scanning. A 1-D parametrically resonant torsional micromirror for use in biomedical imaging is analyzed here with respect to operation by duty-cycled square waves. Duty-cycled square wave excitation can have significant advantages for practical mirror regulation and/or control. The mirror's nonlinear dynamics under such excitation is analyzed in a Hill's equation form. This form is used to predict stability regions (the voltage-frequency relationship) of parametric resonance behavior over large scanning angles, using iterative approximations for the nonlinear capacitance behavior of the mirror. Numerical simulations are also performed to obtain the mirror's frequency response over several voltages for various duty cycles. Frequency sweeps, stability results, and duty cycle trends from both analytical and simulation methods are compared with experimental results. Both analytical models and simulations show good agreement with experimental results over the range of duty-cycled excitations tested. This paper discusses the implications of changing amplitude and phase with duty cycle for robust open-loop operation and future closed-loop operating strategies.
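    The stability analysis above casts the mirror dynamics in Hill's-equation form; the classic special case is the Mathieu equation x'' + (a - 2q cos 2t) x = 0, whose instability tongues are the parametric-resonance regions. A toy integration (illustrative parameters, not the micromirror's) shows amplitude growth inside the first tongue near a ≈ 1:

```python
# Integrate the Mathieu equation and compare the peak amplitude inside
# the first parametric instability tongue (a = 1) with a detuned,
# stable case (a = 3). Inside the tongue the response grows
# exponentially from a tiny initial displacement.
import math

def mathieu_amplitude(a, q, t_end=50.0, dt=1e-3):
    x, v, t = 1e-3, 0.0, 0.0          # small initial displacement
    peak = abs(x)
    while t < t_end:
        # velocity-Verlet step for x'' = -(a - 2 q cos 2t) x
        acc = -(a - 2*q*math.cos(2*t)) * x
        x += v*dt + 0.5*acc*dt*dt
        acc_new = -(a - 2*q*math.cos(2*t + 2*dt)) * x
        v += 0.5*(acc + acc_new)*dt
        t += dt
        peak = max(peak, abs(x))
    return peak

unstable = mathieu_amplitude(a=1.0, q=0.2)   # inside the tongue: growth
stable = mathieu_amplitude(a=3.0, q=0.2)     # detuned: bounded oscillation
```

Mapping out which (a, q) pairs grow versus stay bounded reproduces the voltage-frequency stability regions referred to in the abstract, with drive amplitude playing the role of q.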

  19. On Parametric Sensitivity of Reynolds-Averaged Navier-Stokes SST Turbulence Model: 2D Hypersonic Shock-Wave Boundary Layer Interactions

    NASA Technical Reports Server (NTRS)

    Brown, James L.

    2014-01-01

    Examined is sensitivity of separation extent, wall pressure and heating to variation of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and Axisymmetric Hypersonic Shock Wave Turbulent Boundary Layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments extensively used in a prior related uncertainty analysis provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations as to future turbulence approach.

  20. Parametric laws to model urban pollutant dispersion with a street network approach

    NASA Astrophysics Data System (ADS)

    Soulhac, L.; Salizzoni, P.; Mejean, P.; Perkins, R. J.

    2013-03-01

    This study discusses the reliability of the street network approach for pollutant dispersion modelling in urban areas. This is essentially based on a box model, with parametric relations that explicitly model the main phenomena that contribute to the street canyon ventilation: the mass exchanges between the street and the atmosphere, the pollutant advection along the street axes and the pollutant transfer at street intersections. In the first part of the paper the focus is on the development of a model for the bulk transfer street/atmosphere, which represents the main ventilation mechanisms for wind direction that are almost perpendicular to the axis of the street. We then discuss the role of the advective transfer along the street axis on its ventilation, depending on the length of the street and the direction of the external wind. Finally we evaluate the performances of a box model integrating parametric exchange laws for these transfer phenomena. To that purpose we compare the prediction of the model to wind tunnel experiments of pollutant dispersion within a street canyon placed in an idealised urban district.
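    The box-model core of the street-network approach can be sketched as a steady-state mass balance: emissions into the canyon are balanced by roof-level exchange with the atmosphere plus advection out along the street axis. All parameter values below are illustrative assumptions, not the paper's fitted laws:

```python
# Steady-state street-canyon box model: concentration from a balance of
# emission rate against roof-level turbulent exchange and along-street
# advection. Units: Q in g/s, geometry in m, velocities in m/s,
# concentration in g/m^3.
def canyon_concentration(Q, L, W, H, u_exchange, u_along, C_background=0.0):
    """Q: emission rate in the street; L, W, H: street length/width/height;
    u_exchange: roof-level transfer velocity; u_along: mean along-street wind."""
    # outflow = exchange through the roof opening (area W*L)
    #         + advection out of the downwind end (cross-section W*H)
    outflow = u_exchange * W * L + u_along * W * H
    return C_background + Q / outflow

c = canyon_concentration(Q=0.5, L=100, W=20, H=15,
                         u_exchange=0.1, u_along=1.0)   # 0.001 g/m^3
```

The parametric laws discussed in the paper supply `u_exchange` (as a function of external wind direction and street geometry) and the intersection transfer terms; chaining such boxes along the street network gives the district-scale dispersion model.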

  1. Gravitational wave production from preheating: parameter dependence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es

    Parametric resonance is among the most efficient phenomena generating gravitational waves (GWs) in the early Universe. The dynamics of parametric resonance, and hence of the GWs, depend exclusively on the resonance parameter q. The latter is determined by the properties of each scenario: the initial amplitude and potential curvature of the oscillating field, and its coupling to other species. Previous works have only studied the GW production for fixed value(s) of q. We present an analytical derivation of the GW amplitude dependence on q, valid for any scenario, which we confront against numerical results. By running lattice simulations in an expanding grid, we study, for a wide range of q values, the production of GWs in post-inflationary preheating scenarios driven by parametric resonance. We present simple fits for the final amplitude and position of the local maxima in the GW spectrum. Our parametrization allows one to predict the location and amplitude of the GW background today for an arbitrary q. The GW signal can be rather large, as h²Ω_GW(f_p) ≲ 10⁻¹¹, but it is always peaked at high frequencies f_p ≳ 10⁷ Hz. We also discuss the case of spectator-field scenarios, where the oscillatory field can be, e.g., a curvaton or the Standard Model Higgs.

  2. Parametric models to compute tryptophan fluorescence wavelengths from classical protein simulations.

    PubMed

    Lopez, Alvaro J; Martínez, Leandro

    2018-02-26

    Fluorescence spectroscopy is an important method to study protein conformational dynamics and solvation structures. Tryptophan (Trp) residues are the most important and practical intrinsic probes for protein fluorescence due to the variability of their fluorescence wavelengths: Trp residues emit at wavelengths ranging from 308 to 360 nm depending on the local molecular environment. Fluorescence involves electronic transitions, thus its computational modeling is a challenging task. We show that it is possible to predict the wavelength of emission of a Trp residue from classical molecular dynamics simulations by computing the solvent-accessible surface area or the electrostatic interaction between the indole group and the rest of the system. Linear parametric models are obtained to predict the maximum emission wavelengths with standard errors of the order of 5 nm. In a set of 19 proteins with emission wavelengths ranging from 308 to 352 nm, the best model predicts the maximum wavelength of emission with a standard error of 4.89 nm and a quadratic Pearson correlation coefficient of 0.81. These models can be used for the interpretation of fluorescence spectra of proteins with multiple Trp residues, or for which local Trp environmental variability exists and can be probed by classical molecular dynamics simulations. © 2018 Wiley Periodicals, Inc.
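    A linear parametric model of this kind is a one-descriptor least-squares fit. A sketch with synthetic data (the descriptor values and wavelengths below are invented for illustration, not the paper's 19-protein set):

```python
# Least-squares fit of maximum emission wavelength against a single
# solvent-exposure descriptor (e.g. normalized SASA of the indole
# group), mimicking the structure of the paper's linear models.
import numpy as np

# synthetic descriptor values and wavelengths in the 308-352 nm range
sasa = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.80])
lam = np.array([309.0, 315.0, 324.0, 331.0, 341.0, 350.0])

# design matrix [descriptor, 1] for slope + intercept
A = np.vstack([sasa, np.ones_like(sasa)]).T
slope, intercept = np.linalg.lstsq(A, lam, rcond=None)[0]

pred = A @ np.array([slope, intercept])
rmse = float(np.sqrt(np.mean((pred - lam) ** 2)))
# for this toy data the residual error is on the order of 1 nm; the
# paper reports ~5 nm standard errors on real proteins
```

In practice the descriptor would be averaged over MD snapshots, and the fit calibrated against measured emission maxima before being used predictively.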

  3. Atomic Oxygen Erosion Yield Predictive Tool for Spacecraft Polymers in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; de Groh, Kim K.; Backus, Jane A.

    2008-01-01

    A predictive tool was developed to estimate the low Earth orbit (LEO) atomic oxygen erosion yield of polymers based on the results of the Polymer Erosion and Contamination Experiment (PEACE) Polymers experiment flown as part of the Materials International Space Station Experiment 2 (MISSE 2). The MISSE 2 PEACE experiment accurately measured the erosion yield of a wide variety of polymers and pyrolytic graphite. The 40 different materials tested were selected specifically to represent a variety of polymers used in space as well as a wide variety of polymer chemical structures. The resulting erosion yield data were used to develop a predictive tool which utilizes chemical structure and physical properties of polymers that can be measured in ground laboratory testing to predict the in-space atomic oxygen erosion yield of a polymer. The properties include chemical structure, bonding information, density and ash content. The resulting predictive tool has a correlation coefficient of 0.914 when compared with actual MISSE 2 space data for 38 polymers and pyrolytic graphite. The intent of the predictive tool is to be able to make estimates of atomic oxygen erosion yields for new polymers without requiring expensive and time-consuming in-space testing.

  4. An evaluation of the lamb vision system as a predictor of lamb carcass red meat yield percentage.

    PubMed

    Brady, A S; Belk, K E; LeValley, S B; Dalsted, N L; Scanga, J A; Tatum, J D; Smith, G C

    2003-06-01

    An objective method for predicting red meat yield in lamb carcasses is needed to accurately assess true carcass value. This study was performed to evaluate the ability of the lamb vision system (LVS; Research Management Systems USA, Fort Collins, CO) to predict fabrication yields of lamb carcasses. Lamb carcasses (n = 246) were evaluated using LVS and hot carcass weight (HCW), as well as by USDA expert and on-line graders, before fabrication of carcass sides to either bone-in or boneless cuts. On-line whole number, expert whole-number, and expert nearest-tenth USDA yield grades and LVS + HCW estimates accounted for 53, 52, 58, and 60%, respectively, of the observed variability in boneless, saleable meat yields, and accounted for 56, 57, 62, and 62%, respectively, of the variation in bone-in, saleable meat yields. The LVS + HCW system predicted 77, 65, 70, and 87% of the variation in weights of boneless shoulders, racks, loins, and legs, respectively, and 85, 72, 75, and 86% of the variation in weights of bone-in shoulders, racks, loins, and legs, respectively. Addition of longissimus muscle area (REA), adjusted fat thickness (AFT), or both REA and AFT to LVS + HCW models resulted in improved prediction of boneless saleable meat yields by 5, 3, and 5 percentage points, respectively. Bone-in, saleable meat yield estimations were improved in predictive accuracy by 7.7, 6.6, and 10.1 percentage points, and in precision, when REA alone, AFT alone, or both REA and AFT, respectively, were added to the LVS + HCW output models. Use of LVS + HCW to predict boneless red meat yields of lamb carcasses was more accurate than use of current on-line whole-number, expert whole-number, or expert nearest-tenth USDA yield grades. Thus, LVS + HCW output, when used alone or in combination with AFT and/or REA, improved on-line estimation of boneless cut yields from lamb carcasses. 
The ability of LVS + HCW to predict yields of wholesale cuts suggests that LVS could be used as an objective means for pricing carcasses in a value-based marketing system.
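    The regression comparison described above can be sketched as follows; the carcass data, coefficients, and variable scales are all synthetic assumptions, intended only to illustrate how adding REA and AFT to an HCW-based model raises the explained variance (R²):

    ```python
    import numpy as np

    def r_squared(X, y):
        """In-sample R^2 of an ordinary least-squares fit with intercept."""
        X1 = np.column_stack([X, np.ones(len(y))])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()

    rng = np.random.default_rng(0)
    n = 60
    hcw = rng.normal(30.0, 4.0, n)    # hot carcass weight, kg (synthetic)
    rea = rng.normal(15.0, 2.0, n)    # ribeye area, sq cm (synthetic)
    aft = rng.normal(0.6, 0.15, n)    # adjusted fat thickness, cm (synthetic)
    # Synthetic saleable meat yield percentage with measurement noise
    yield_pct = 50.0 + 0.3 * hcw + 0.8 * rea - 6.0 * aft + rng.normal(0.0, 1.5, n)

    r2_base = r_squared(np.column_stack([hcw]), yield_pct)            # HCW alone
    r2_full = r_squared(np.column_stack([hcw, rea, aft]), yield_pct)  # + REA, AFT
    ```

    The increase from `r2_base` to `r2_full` mirrors the percentage-point improvements reported when REA and AFT were added to the LVS + HCW models.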

  5. Advanced Subsonic Technology (AST) 22-Inch Low Noise Research Fan Rig Preliminary Design of ADP-Type Fan 3

    NASA Technical Reports Server (NTRS)

    Jeracki, Robert J. (Technical Monitor); Topol, David A.; Ingram, Clint L.; Larkin, Michael J.; Roche, Charles H.; Thulin, Robert D.

    2004-01-01

    This report presents results of the work completed on the preliminary design of Fan 3 of NASA's 22-inch Fan Low Noise Research project. Fan 3 was intended to build on the experience gained from Fans 1 and 2 by demonstrating noise reduction technology that surpasses 1992 levels by 6 dB. The work was performed as part of NASA's Advanced Subsonic Technology (AST) program. Work on this task was conducted in the areas of CFD code validation, acoustic prediction and validation, rotor parametric studies, and fan exit guide vane (FEGV) studies up to the time when a NASA decision was made to cancel the design, fabrication and testing phases of the work. The scope of the program changed accordingly to concentrate on two subtasks: (1) rig data analysis and CFD code validation and (2) fan and FEGV optimization studies. The results of the CFD code validation work showed that this tool predicts 3D flowfield features well from the blade trailing edge to about a chord downstream. The CFD tool loses accuracy as the distance from the trailing edge increases beyond a blade chord. The comparisons of noise predictions to rig test data showed that both the tone noise tool and the broadband noise tool demonstrated reasonable agreement with the data, to the degree that these tools can reliably be used for design work. The section on rig airflow and inlet separation analysis describes the method used to determine total fan airflow, shows the good agreement of predicted boundary layer profiles with measured profiles, and shows separation angles of attack ranging from 29.5° to 27° for the range of airflows tested. The results of the rotor parametric studies were significant in leading to the decision not to pursue a new rotor design for Fan 3 and resulted in recommendations to concentrate efforts on FEGV stator designs. The ensuing parametric study on FEGV designs showed the potential for 8 to 10 EPNdB noise reduction relative to the baseline.

  6. Dynamic considerations for composite metal-rubber laminate acoustic power coupling bellows with application to thermoacoustic refrigeration

    NASA Astrophysics Data System (ADS)

    Smith, Robert William

    Many electrically driven thermoacoustic refrigerators have employed corrugated metal bellows to couple work from an electro-mechanical transducer to the working fluid. An alternative bellows structure to mediate this power transfer is proposed: a laminated hollow cylinder comprised of alternating layers of rubber and metal, a 'hoop-stack'. Fatigue and viscoelastic power dissipation in the rubber are critical considerations; strain energy density plays a role in both. Optimal aspect ratios for a rectangular cross-section in the rubber, for given values of bellows axial strain and oscillatory pressure loads, are discussed. Comparisons of tearing energies estimated from known load cases and those obtained by finite element analysis for candidate dimensions are presented. The metal layers of the bellows are subject to an out-of-plane buckling instability for the case of external pressure loading; failure of this type was experimentally observed. The proposed structure also exhibits column instability when subject to internal pressure, as do metal bellows. For hoop-stack bellows, shear deflection cannot be ignored, and this leads to column instability for both internal and external pressures, the latter being analogous to the case of tension buckling of a beam. During prototype bellows testing, transverse modes of vibration are believed to have been excited parametrically as a consequence of the oscillatory pressures. Some operating frequencies of interest in this study lie above the cut-on frequency at which Timoshenko beam theory (TBT) predicts multiple phase speeds; it is shown that TBT fails to accurately predict both mode shapes and resonance frequencies in this regime. TBT is also shown to predict multiple phase speeds in the presence of axial tension, or external pressures, at magnitudes of interest in this study, over the entire frequency spectrum. 
For modes below cut-on, absent a pressure differential (or equivalently, axial load), TBT predicts decreasing resonance frequencies for both internal and external static pressure, and converges on known, valid static buckling solutions. Parametric stability in the presence of oscillatory pressure is discussed for such modes; periodic solutions to the Whittaker-Hill equation are pursued to illustrate the shape of the parametric instability regions, and contrasted with results of the more well-known Mathieu equation.

  7. Machine Learning Based Evaluation of Reading and Writing Difficulties.

    PubMed

    Iwabuchi, Mamoru; Hirabayashi, Rumi; Nakamura, Kenryu; Dim, Nem Khan

    2017-01-01

    The possibility of automatic evaluation of reading and writing difficulties was investigated using a non-parametric machine learning (ML) regression technique on URAWSS (Understanding Reading and Writing Skills of Schoolchildren) [1] test data from 168 children in grades 1-9. The results showed that the ML approach yielded better predictions than the ordinary rule-based decision.

  8. Quality Quandaries: Predicting a Population of Curves

    DOE PAGES

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    2017-12-19

    We present a random-effects regression model based on splines that provides an integrated approach for analyzing functional data, i.e., curves, when the shape of the curves is not parametrically specified. An analysis using this model is presented that makes inferences about a population of curves as well as features of the curves.

  9. Quality Quandaries: Predicting a Population of Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    We present a random-effects regression model based on splines that provides an integrated approach for analyzing functional data, i.e., curves, when the shape of the curves is not parametrically specified. An analysis using this model is presented that makes inferences about a population of curves as well as features of the curves.

  10. Force Project Technology Presentation to the NRCC

    DTIC Science & Technology

    2014-02-04

    Functional Bridge components Smart Odometer Adv Pretreatment Smart Bridge Multi-functional Gap Crossing Fuel Automated Tracking System Adv...comprehensive matrix of candidate composite material systems and textile reinforcement architectures via modeling/analyses and testing. Product(s...Validated Dynamic Modeling tool based on parametric study using material models to reliably predict the textile mechanics of the hose

  11. Using the Functional Prerequisites to Communication Rules as a Structure for Rule-Behavior Research.

    ERIC Educational Resources Information Center

    Fairhurst, Gail Theus

    This paper points out that the available research on communication rules tends to be descriptive (or humanistic) in nature and characterized by a conspicuous absence of prediction along with experimental methods and parametric interpretations of social behavior. The paper first argues that current scientific methodology is consistent with a…

  12. COSMO-PAFOG: Three-dimensional fog forecasting with the high-resolution COSMO-model

    NASA Astrophysics Data System (ADS)

    Hacker, Maike; Bott, Andreas

    2017-04-01

    The presence of fog can have critical impact on shipping, aviation and road traffic increasing the risk of serious accidents. Besides these negative impacts of fog, in arid regions fog is explored as a supplementary source of water for human settlements. Thus the improvement of fog forecasts holds immense operational value. The aim of this study is the development of an efficient three-dimensional numerical fog forecast model based on a mesoscale weather prediction model for the application in the Namib region. The microphysical parametrization of the one-dimensional fog forecast model PAFOG (PArameterized FOG) is implemented in the three-dimensional nonhydrostatic mesoscale weather prediction model COSMO (COnsortium for Small-scale MOdeling) developed and maintained by the German Meteorological Service. Cloud water droplets are introduced in COSMO as prognostic variables, thus allowing a detailed description of droplet sedimentation. Furthermore, a visibility parametrization depending on the liquid water content and the droplet number concentration is implemented. The resulting fog forecast model COSMO-PAFOG is run with kilometer-scale horizontal resolution. In vertical direction, we use logarithmically equidistant layers with 45 of 80 layers in total located below 2000 m. Model results are compared to satellite observations and synoptic observations of the German Meteorological Service for a domain in the west of Germany, before the model is adapted to the geographical and climatological conditions in the Namib desert. COSMO-PAFOG is able to represent the horizontal structure of fog patches reasonably well. Especially small fog patches typical of radiation fog can be simulated in agreement with observations. Ground observations of temperature are also reproduced. Simulations without the PAFOG microphysics yield unrealistically high liquid water contents. This in turn reduces the radiative cooling of the ground, thus inhibiting nocturnal temperature decrease. 
The simulated visibility agrees with observations; however, fog tends to dissipate earlier in the model than observed. Based on the investigated fog events, it is concluded that the three-dimensional fog forecast model COSMO-PAFOG is able to simulate such events in accordance with observations. After the successful application of COSMO-PAFOG to fog events in the west of Germany, model simulations will be performed for coastal desert fog in the Namib region.

  13. Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives.

    PubMed

    Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan

    2014-01-01

    The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., a set of arm-movement trajectories and the associated object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context.

  14. Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives

    PubMed Central

    Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan

    2014-01-01

    The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., a set of arm-movement trajectories and the associated object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context. PMID:24550798

  15. Adsorption of metal atoms at a buckled graphene grain boundary using model potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helgee, Edit E.; Isacsson, Andreas

    Two model potentials have been evaluated with regard to their ability to model adsorption of single metal atoms on a buckled graphene grain boundary. One of the potentials is a Lennard-Jones potential parametrized for gold and carbon, while the other is a bond-order potential parametrized for the interaction between carbon and platinum. Metals are expected to adsorb more strongly to grain boundaries than to pristine graphene due to their enhanced adsorption at point defects resembling those that constitute the grain boundary. Of the two potentials considered here, only the bond-order potential reproduces this behavior and predicts the energy of the adsorbate to be about 0.8 eV lower at the grain boundary than on pristine graphene. The Lennard-Jones potential predicts no significant difference in energy between adsorbates at the boundary and on pristine graphene. These results indicate that the Lennard-Jones potential is not suitable for studies of metal adsorption on defects in graphene, and that bond-order potentials are preferable.

  16. Semi-empirical studies of atomic structure. Progress report, 1 July 1982-1 February 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, L.J.

    1983-01-01

    A program of studies of the properties of the heavy and highly ionized atomic systems which often occur as contaminants in controlled fusion devices is continuing. The project combines experimental measurements by fast-ion-beam excitation with semi-empirical data parametrizations to identify and exploit regularities in the properties of these very heavy and very highly ionized systems. The increasing use of spectroscopic line intensities as diagnostics for determining thermonuclear plasma temperatures and densities requires laboratory observation and analysis of such spectra, often to accuracies that exceed the capabilities of ab initio theoretical methods for these highly relativistic many electron systems. Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences are providing predictions for large classes of quantities, with a precision that is sharpened by subsequent measurements.

  17. Semiempirical studies of atomic structure. Progress report, 1 July 1983-1 June 1984

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, L.J.

    1984-01-01

    A program of studies of the properties of the heavy and highly ionized atomic systems which often occur as contaminants in controlled fusion devices is continuing. The project combines experimental measurements by fast ion beam excitation with semiempirical data parametrizations to identify and exploit regularities in the properties of these very heavy and very highly ionized systems. The increasing use of spectroscopic line intensities as diagnostics for determining thermonuclear plasma temperatures and densities requires laboratory observation and analysis of such spectra, often to accuracies that exceed the capabilities of ab initio theoretical methods for these highly relativistic many electron systems. Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences are providing predictions for large classes of quantities, with a precision that is sharpened by subsequent measurements.

  18. A parametric study of fracture toughness of fibrous composite materials

    NASA Technical Reports Server (NTRS)

    Poe, C. C., Jr.

    1987-01-01

    Impacts to fibrous composite laminates by objects with low velocities can break fibers, producing crack-like damage. The damage may not extend completely through a thick laminate. The tension strength of these damaged laminates is reduced much like that of cracked metals. The fracture toughness depends on fiber and matrix properties, fiber orientations, and stacking sequence. Accordingly, a parametric study was made to determine how fiber and matrix properties and fiber orientations affect fracture toughness and notch sensitivity. The values of fracture toughness were predicted from the elastic constants of the laminate and the failing strain of the fibers using a general fracture toughness parameter developed previously. For a variety of laminates, values of fracture toughness from tests of center-cracked specimens and values of residual strength from tests of thick laminates with surface cracks were compared to the predictions to give credibility to the study. In contrast to the usual behavior of metals, it is shown that both ultimate tensile strength and fracture toughness of composites can be increased without increasing notch sensitivity.

  19. Program Predicts Performance of Optical Parametric Oscillators

    NASA Technical Reports Server (NTRS)

    Cross, Patricia L.; Bowers, Mark

    2006-01-01

    A computer program predicts the performances of solid-state lasers that operate at wavelengths from ultraviolet through mid-infrared and that comprise various combinations of stable and unstable resonators, optical parametric oscillators (OPOs), and sum-frequency generators (SFGs), including second-harmonic generators (SHGs). The input to the program describes the signal, idler, and pump beams; the SFG and OPO crystals; and the laser geometry. The program calculates the electric fields of the idler, pump, and output beams at three locations (inside the laser resonator, just outside the input mirror, and just outside the output mirror) as functions of time for the duration of the pump beam. For each beam, the electric field is used to calculate the fluence at the output mirror, plus summary parameters that include the centroid location, the radius of curvature of the wavefront leaving through the output mirror, the location and size of the beam waist, and a quantity known, variously, as a propagation constant or beam-quality factor. The program provides a typical Windows interface for entering data and selecting files. The program can include as many as six plot windows, each containing four graphs.

  20. Evaluation of an urban land surface scheme over a tropical suburban neighborhood

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj; Roth, Matthias; Velasco, Erik; Demuzere, Matthias

    2017-07-01

    The present study evaluates the performance of the SURFEX (TEB/ISBA) urban land surface parametrization scheme in offline mode over a suburban area of Singapore. Model performance (diurnal and seasonal characteristics) is investigated using measurements of energy balance fluxes, surface temperatures of individual urban facets, and canyon air temperature collected during an 11-month period. Model performance is best for predicting net radiation and sensible heat fluxes (both are slightly overpredicted during daytime), but weaker for latent heat (underpredicted during daytime) and storage heat fluxes (significantly underpredicted daytime peaks and nighttime storage). Daytime surface temperatures are generally overpredicted, particularly those of horizontal surfaces such as roofs and roads. This result, together with those for the storage heat flux, points to the need for a better characterization of the thermal and radiative characteristics of individual urban surface facets in the model. Significant variation exists in model behavior between dry and wet seasons, the latter generally being better predicted. The simple vegetation parametrization used is inadequate to represent seasonal moisture dynamics, sometimes producing unrealistically dry conditions.

  1. Data in support of energy performance of double-glazed windows.

    PubMed

    Shakouri, Mahmoud; Banihashemi, Saeed

    2016-06-01

    This paper provides the data used in a research project to propose a new simplified windows rating system based on saved annual energy ("Developing an empirical predictive energy-rating model for windows by using Artificial Neural Network" (Shakouri Hassanabadi and Banihashemi Namini, 2012) [1], "Climatic, parametric and non-parametric analysis of energy performance of double-glazed windows in different climates" (Banihashemi et al., 2015) [2]). A full factorial simulation study was conducted to evaluate the performance of 26 different types of windows in a four-story residential building. In order to generalize the results, the selected windows were tested in four climates of cold, tropical, temperate, and hot and arid; and four different main orientations of North, West, South and East. The accompanying datasets include the annual saved cooling and heating energy in different climates and orientations by using the selected windows. Moreover, a complete dataset is provided that includes the specifications of 26 windows, climate data, month, and orientation of the window. This dataset can be used to make predictive models for energy efficiency assessment of double-glazed windows.

  2. Influence of Finite Element Size in Residual Strength Prediction of Composite Structures

    NASA Technical Reports Server (NTRS)

    Satyanarayana, Arunkumar; Bogert, Philip B.; Karayev, Kazbek Z.; Nordman, Paul S.; Razi, Hamid

    2012-01-01

    The sensitivity of failure load to the element size used in a progressive failure analysis (PFA) of carbon composite center-notched laminates is evaluated. The sensitivity study employs a PFA methodology previously developed by the authors consisting of Hashin-Rotem intra-laminar fiber and matrix failure criteria and a complete stress degradation scheme for damage simulation. The approach is implemented with a user-defined subroutine in the ABAQUS/Explicit finite element package. The effect of element size near the notch tips on residual strength predictions was assessed for a brittle failure mode with a parametric study that included three laminates of varying material system, thickness and stacking sequence. The study resulted in the selection of an element size of 0.09 in. x 0.09 in., which was later used for predicting crack paths and failure loads in sandwich panels and monolithic laminated panels. Predicted crack paths and failure loads for these panels agreed well with experimental observations. Additionally, the element size vs. normalized failure load relationship, determined in the parametric study, was used to evaluate strength-scaling factors for three different element sizes. The failure loads predicted with all three element sizes, using these scaling factors, converged to the value obtained with the 0.09 in. x 0.09 in. element size. Though preliminary in nature, the strength-scaling concept has the potential to greatly reduce the computational time required for PFA and can enable the analysis of large-scale structural components where failure is dominated by fiber failure in tension.

  3. Prediction of skull fracture risk for children 0-9 months old through validated parametric finite element model and cadaver test reconstruction.

    PubMed

    Li, Zhigang; Liu, Weiguo; Zhang, Jinhuan; Hu, Jingwen

    2015-09-01

    Skull fracture is one of the most common pediatric traumas. However, injury assessment tools for predicting pediatric skull fracture risk are not well established, mainly due to the lack of cadaver tests. Weber conducted 50 pediatric cadaver drop tests for forensic research on child abuse in the mid-1980s (Experimental studies of skull fractures in infants, Z Rechtsmed. 92: 87-94, 1984; Biomechanical fragility of the infant skull, Z Rechtsmed. 94: 93-101, 1985). To our knowledge, these studies contain the largest sample size among pediatric cadaver tests in the literature. However, the lack of injury measurements limited their direct application to investigating pediatric skull fracture risks. In this study, 50 pediatric cadaver tests from Weber's studies were reconstructed using a parametric pediatric head finite element (FE) model, which was morphed into subjects with the ages, head sizes/shapes, and skull thickness values reported in the tests. Skull fracture risk curves for infants from 0 to 9 months old were developed from the model-predicted head injury measures through logistic regression analysis. It was found that the model-predicted stress responses in the skull (maximal von Mises stress, maximal shear stress, and maximal first principal stress) were better predictors of pediatric skull fracture than global kinematic-based injury measures (peak head acceleration and the head injury criterion (HIC)). This study demonstrated the feasibility of using age- and size/shape-appropriate head FE models to predict pediatric head injuries. Such models can account for the morphological variations among subjects, which cannot be captured by a single FE human model.
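    The logistic risk-curve form referred to in the abstract can be sketched as below; the coefficients and stress values are hypothetical placeholders, not the study's fitted parameters:

    ```python
    import math

    def fracture_risk(stress_mpa, b0=-6.0, b1=0.15):
        """Logistic injury risk curve: p = 1 / (1 + exp(-(b0 + b1 * stress))).
        b0 and b1 are illustrative values, not the study's regression output."""
        return 1.0 / (1.0 + math.exp(-(b0 + b1 * stress_mpa)))

    low_risk = fracture_risk(10.0)   # low predicted skull stress (hypothetical)
    high_risk = fracture_risk(80.0)  # high predicted skull stress (hypothetical)
    ```

    In the study's workflow, the predictor would be a model-predicted stress measure (e.g., maximal von Mises stress) and the coefficients would come from logistic regression on the reconstructed tests.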

  4. Uncertainties in Predicting Rice Yield by Current Crop Models Under a Wide Range of Climatic Conditions

    NASA Technical Reports Server (NTRS)

    Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Adam, Myriam; Bregaglio, Simone; Buis, Samuel; Confalonieri, Roberto; Fumoto, Tamon; hide

    2014-01-01

    Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified. We evaluated 13 rice models against multi-year experimental yield data at four sites with diverse climatic conditions in Asia and examined whether different modeling approaches for major physiological processes contribute to the uncertainty of predictions relative to field-measured yields and to the uncertainty of sensitivity to changes in temperature and CO2 concentration [CO2]. We also examined whether use of an ensemble of crop models can reduce the uncertainties. Individual models did not consistently reproduce both experimental and regional yields well, and uncertainty was larger at the warmest and coolest sites. The variation in yield projections was larger among crop models than the variation resulting from 16 global climate model-based scenarios. However, the mean of the predictions of all crop models reproduced experimental data, with an uncertainty of less than 10 percent of measured yields. Using an ensemble of eight models calibrated only for phenology, or five models calibrated in detail, resulted in an uncertainty equivalent to that of the measured yield in well-controlled agronomic field experiments. Sensitivity analysis indicates the need to improve the accuracy of predictions of both biomass and harvest index in response to increasing [CO2] and temperature.
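    The ensemble idea in the abstract, taking the mean of several crop models' yield predictions, can be sketched as follows; the model names and yield values are hypothetical:

    ```python
    import statistics

    # Yield predictions from several hypothetical crop models, t/ha (synthetic)
    model_yields = {
        "model_a": 7.1, "model_b": 8.4, "model_c": 6.8,
        "model_d": 7.9, "model_e": 7.6,
    }
    measured = 7.5  # field-measured yield, t/ha (synthetic)

    # Ensemble mean and its error relative to the measurement
    ensemble_mean = statistics.mean(model_yields.values())
    relative_error = abs(ensemble_mean - measured) / measured
    ```

    The spread of the individual values around `measured` is much larger than the error of `ensemble_mean`, which is the behavior the study reports (ensemble uncertainty below 10% of measured yields).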

  5. Developmental models for estimating ecological responses to environmental variability: structural, parametric, and experimental issues.

    PubMed

    Moore, Julia L; Remais, Justin V

    2014-03-01

    Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though simple and easy to use, structural and parametric issues can influence the outputs of such models, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
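    The daily average method mentioned in the abstract can be sketched in a few lines; the base temperature and the daily minima/maxima below are hypothetical example values:

    ```python
    def daily_average_degree_days(tmin, tmax, tbase):
        """Accumulate degree-days with the daily average method:
        DD_day = max(0, (Tmin + Tmax) / 2 - Tbase)."""
        return sum(max(0.0, (lo + hi) / 2.0 - tbase) for lo, hi in zip(tmin, tmax))

    # Four hypothetical days of minima/maxima (deg C) and a 10 deg C base
    tmin = [8.0, 10.0, 12.0, 9.0]
    tmax = [18.0, 22.0, 26.0, 15.0]
    dd = daily_average_degree_days(tmin, tmax, tbase=10.0)  # 3 + 6 + 9 + 2
    ```

    More elaborate methods (e.g., sine-wave interpolation of the daily temperature curve, or an upper developmental threshold) change the per-day calculation but keep the same accumulation structure.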

  6. Parametric Raman anti-Stokes laser at 503 nm with phase-matched collinear beam interaction of orthogonally polarized Raman components in calcite under 532 nm 20 ps laser pumping

    NASA Astrophysics Data System (ADS)

    Smetanin, Sergei; Jelínek, Michal; Kubeček, Václav

    2017-05-01

    Lasers based on the stimulated Raman scattering process can be used for frequency conversion to wavelengths that are not readily available from solid-state lasers. Parametric Raman lasers allow generation not only of Stokes but also of anti-Stokes components. However, practically all known crystalline parametric Raman anti-Stokes lasers have very low conversion efficiencies of about 1% against theoretically predicted values of up to 40%, because the angular tolerance of phase matching is narrow in comparison with the angular divergence of the interacting beams. In our investigation, to widen the angular tolerance of four-wave mixing and obtain high conversion efficiency into the anti-Stokes wave, we propose and study a new scheme of a parametric Raman anti-Stokes laser at 503 nm with phase-matched collinear beam interaction of orthogonally polarized Raman components in calcite under 532 nm, 20 ps laser pumping. We use only one 532-nm laser source to pump the Raman-active calcite crystal, oriented at the phase-matching angle for four-wave mixing of orthogonally polarized Raman components. Additionally, we split the 532-nm laser radiation into orthogonally polarized components entering the crystal at specific incidence angles to fulfill tangential phase matching, which compensates the walk-off of extraordinary waves for collinear beam interaction in the crystal with the widest angular tolerance of four-wave mixing. Owing to this tangential phase matching, which is insensitive to angular mismatch, we obtain for the first time a 503-nm anti-Stokes conversion efficiency of 30%, close to the theoretical limit of about 40%, at an overall optical efficiency of the parametric Raman anti-Stokes generation of up to 3.5% in calcite.
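    As a quick plausibility check of the 503 nm figure (assuming calcite's strong Raman mode near 1086 cm⁻¹, a standard literature value not stated in the abstract), the anti-Stokes wavelength follows from adding the Raman shift to the pump wavenumber:

```python
def anti_stokes_wavelength(pump_nm, shift_cm1):
    """Anti-Stokes line: pump wavenumber plus the Raman shift."""
    pump_cm1 = 1e7 / pump_nm           # nm -> cm^-1
    return 1e7 / (pump_cm1 + shift_cm1)

print(round(anti_stokes_wavelength(532.0, 1086.0), 1))  # 502.9 nm
```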

  7. Is there more valuable information in PWI datasets for a voxel-wise acute ischemic stroke tissue outcome prediction than what is represented by typical perfusion maps?

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Siemonsen, Susanne; Dalski, Michael; Verleger, Tobias; Kemmling, Andre; Fiehler, Jens

    2014-03-01

    Acute ischemic stroke is a leading cause of death and disability in industrialized nations. For a patient presenting with acute ischemic stroke, prediction of the future tissue outcome is of high clinical interest, as it can support therapy decision making. Within this context, it has already been shown that voxel-wise multi-parametric tissue outcome prediction leads to more promising results than single-channel perfusion map thresholding. Most previously published multi-parametric predictions employ information from perfusion maps derived from perfusion-weighted MRI (PWI) together with other image sequences such as diffusion-weighted MRI. However, it remains unclear whether the typically calculated perfusion maps really include all valuable information in the PWI dataset for an optimal tissue outcome prediction. To investigate this problem in more detail, two different methods to predict tissue outcome using a k-nearest-neighbor approach were developed in this work and evaluated on 18 datasets of acute stroke patients with known tissue outcome. The first method integrates apparent diffusion coefficient and perfusion parameter (Tmax, MTT, CBV, CBF) information for the voxel-wise prediction, while the second method also employs apparent diffusion coefficient information but uses the complete perfusion information, in the form of voxel-wise residue functions, instead of the perfusion parameter maps. Overall, a comparison of the two prediction methods for the 18 patients using leave-one-out cross-validation revealed no considerable differences. Quantitatively, the parameter-based prediction of tissue outcome led to a mean Dice coefficient of 0.474, while the prediction using the residue functions led to a mean Dice coefficient of 0.461.
    Thus, it may be concluded that the perfusion parameter maps typically derived from PWI datasets include all valuable perfusion information required for a voxel-based tissue outcome prediction, while complete analysis of the residue functions adds no further benefit and is computationally more expensive.
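    The reported mean Dice coefficients (0.474 vs. 0.461) measure overlap between predicted and observed final lesion masks. A minimal sketch of the metric on hypothetical binary voxel masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary lesion masks (flattened voxel arrays)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred  = np.array([1, 1, 0, 0, 1])   # predicted lesion voxels
truth = np.array([1, 0, 0, 1, 1])   # observed final lesion voxels
print(round(dice_coefficient(pred, truth), 3))  # 0.667
```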

  8. Parameterization of DFTB3/3OB for Sulfur and Phosphorus for Chemical and Biological Applications

    PubMed Central

    2015-01-01

    We report the parametrization of the approximate density functional tight binding method, DFTB3, for sulfur and phosphorus. The parametrization is done in a framework consistent with our previous 3OB set established for O, N, C, and H; thus, the resulting parameters can be used to describe a broad set of organic and biologically relevant molecules. The 3d orbitals are included in the parametrization, and the electronic parameters are chosen to minimize errors in the atomization energies. The parameters are tested using a fairly diverse set of molecules of biological relevance, focusing on the geometries, reaction energies, proton affinities, and hydrogen bonding interactions of these molecules; vibrational frequencies are also examined, although less systematically. The results of DFTB3/3OB are compared to those from DFT (B3LYP and PBE), ab initio (MP2, G3B3), and several popular semiempirical methods (PM6 and PDDG), as well as predictions of DFTB3 with the older parametrization (the MIO set). In general, DFTB3/3OB is a major improvement over the previous parametrization (DFTB3/MIO), and for the majority of cases tested here, it also outperforms PM6 and PDDG, especially for structural properties, vibrational frequencies, hydrogen bonding interactions, and proton affinities. For reaction energies, DFTB3/3OB exhibits major improvement over DFTB3/MIO, due mainly to a significant reduction of errors in atomization energies; compared to PM6 and PDDG, DFTB3/3OB also generally performs better, although the magnitude of improvement is more modest. Compared to high-level calculations, DFTB3/3OB is most successful at predicting geometries; larger errors are found in the energies, although the results can be greatly improved by computing single point energies at a high level with DFTB3 geometries.
There are several remaining issues with the DFTB3/3OB approach, most notably its difficulty in describing phosphate hydrolysis reactions involving a change in the coordination number of the phosphorus, for which a specific parametrization (3OB/OPhyd) is developed as a temporary solution; this suggests that the current DFTB3 methodology has limited transferability for complex phosphorus chemistry at the level of accuracy required for detailed mechanistic investigations. Therefore, fundamental improvements in the DFTB3 methodology are needed for a reliable method that describes phosphorus chemistry without ad hoc parameters. Nevertheless, DFTB3/3OB is expected to be a competitive QM method in QM/MM calculations for studying phosphorus/sulfur chemistry in condensed phase systems, especially as a low-level method that drives the sampling in a dual-level QM/MM framework. PMID:24803865

  9. TU-AB-BRC-03: Accurate Tissue Characterization for Monte Carlo Dose Calculation Using Dual- and Multi-Energy CT Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, A; Bouchard, H

    Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PC) are used to perform two-material decomposition using simulated DECT data. The elemental mass fraction and the electron density in each tissue are retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PC is enough to achieve submillimetric precision on proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method agrees with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT: using five PC and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation, and generally shows better accuracy and unbiased results compared to reference methods. It is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
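    A minimal sketch of the core idea, under assumed data: extract principal components ("virtual materials") from a reference set of tissue compositions, then represent a new tissue by its fractions of each PC. The matrix values below are illustrative placeholders, not the paper's reference dataset:

```python
import numpy as np

# Hypothetical reference dataset: rows = tissues, columns = elemental mass
# fractions (e.g. H, C, N, O); values are illustrative only.
reference = np.array([
    [0.10, 0.20, 0.03, 0.65],
    [0.11, 0.15, 0.04, 0.68],
    [0.09, 0.30, 0.02, 0.57],
    [0.10, 0.25, 0.03, 0.60],
])

# Principal components via SVD of the mean-centred data.
mean = reference.mean(axis=0)
_, _, vt = np.linalg.svd(reference - mean, full_matrices=False)
pc = vt[:2]                       # keep two "virtual materials"

# Decompose an unseen tissue into fractions of the two PCs, then
# reconstruct its elemental composition from those fractions.
tissue = np.array([0.105, 0.22, 0.03, 0.63])
fractions = pc @ (tissue - mean)
reconstructed = mean + fractions @ pc
print(np.round(reconstructed, 3))
```

    In the paper the PC fractions are obtained from the measured DECT data rather than from a known composition; this sketch only shows the basis extraction and reconstruction step.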

  10. Active-Optical Sensors Using Red NDVI Compared to Red Edge NDVI for Prediction of Corn Grain Yield in North Dakota, U.S.A.

    PubMed Central

    Sharma, Lakesh K.; Bu, Honggang; Denton, Anne; Franzen, David W.

    2015-01-01

    Active-optical sensor readings from an N non-limiting area standard established within a farm field are used to predict yield in the standard. Lower yield predictions from sensor readings obtained from other parts of the field outside of the N non-limiting standard area indicate a need for supplemental N. Active-optical sensor algorithms for predicting corn (Zea mays, L.) yield to direct in-season nitrogen (N) fertilization in corn utilize red NDVI (normalized differential vegetative index). Use of red edge NDVI might improve corn yield prediction at later growth stages, when corn leaves cover the inter-row space and “saturate” red NDVI readings. The purpose of this study was to determine whether the use of red edge NDVI in two active-optical sensors (GreenSeeker™ and Holland Scientific Crop Circle™) improved corn yield prediction. Nitrogen rate experiments were established at 15 sites in North Dakota (ND). Sensor readings were conducted at the V6 and V12 corn growth stages. At V6, red NDVI and red edge NDVI readings related similarly to yield. At V12, red edge NDVI was superior to red NDVI in most comparisons, indicating that it would be most useful in developing late-season N application algorithms. PMID:26540057

  11. Active-Optical Sensors Using Red NDVI Compared to Red Edge NDVI for Prediction of Corn Grain Yield in North Dakota, U.S.A.

    PubMed

    Sharma, Lakesh K; Bu, Honggang; Denton, Anne; Franzen, David W

    2015-11-02

    Active-optical sensor readings from an N non-limiting area standard established within a farm field are used to predict yield in the standard. Lower yield predictions from sensor readings obtained from other parts of the field outside of the N non-limiting standard area indicate a need for supplemental N. Active-optical sensor algorithms for predicting corn (Zea mays, L.) yield to direct in-season nitrogen (N) fertilization in corn utilize red NDVI (normalized differential vegetative index). Use of red edge NDVI might improve corn yield prediction at later growth stages, when corn leaves cover the inter-row space and "saturate" red NDVI readings. The purpose of this study was to determine whether the use of red edge NDVI in two active-optical sensors (GreenSeeker™ and Holland Scientific Crop Circle™) improved corn yield prediction. Nitrogen rate experiments were established at 15 sites in North Dakota (ND). Sensor readings were conducted at the V6 and V12 corn growth stages. At V6, red NDVI and red edge NDVI readings related similarly to yield. At V12, red edge NDVI was superior to red NDVI in most comparisons, indicating that it would be most useful in developing late-season N application algorithms.
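    Both indices are normalized differences against the near-infrared (NIR) band. The sketch below, with made-up reflectance values, shows why red NDVI saturates for a closed canopy while red edge NDVI retains dynamic range:

```python
def ndvi(nir, band):
    """Generic normalized difference index: (NIR - band) / (NIR + band)."""
    return (nir - band) / (nir + band)

# Illustrative reflectances for a closed corn canopy at V12 (made-up values):
nir, red, red_edge = 0.50, 0.04, 0.30

red_ndvi = ndvi(nir, red)            # near 1: little room left to vary
red_edge_ndvi = ndvi(nir, red_edge)  # mid-range: still sensitive to biomass
print(round(red_ndvi, 3), round(red_edge_ndvi, 3))  # 0.852 0.25
```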

  12. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
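    The local analysis the authors contrast with rests on the Fisher information matrix. A sketch with a toy model whose output depends only on the product of two parameters, so the FIM is singular along the unidentifiable direction (model, noise level, and values are illustrative):

```python
import numpy as np

def fisher_information(model, theta, sigma=1.0, eps=1e-6):
    """Local FIM for y = f(theta) with iid Gaussian noise:
    F = J^T J / sigma^2, with the Jacobian J from central differences."""
    theta = np.asarray(theta, dtype=float)
    y0 = np.asarray(model(theta))
    jac = np.empty((y0.size, theta.size))
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        jac[:, i] = (np.asarray(model(theta + step))
                     - np.asarray(model(theta - step))) / (2 * eps)
    return jac.T @ jac / sigma**2

# Toy structural unidentifiability: only the product k1*k2 enters the
# output, so scaling k1 up and k2 down leaves predictions unchanged.
model = lambda th: np.array([th[0] * th[1], 2.0 * th[0] * th[1]])
fim = fisher_information(model, [2.0, 3.0])
print(np.round(np.linalg.eigvalsh(fim), 6))  # one eigenvalue is ~0
```

    The near-zero eigenvalue flags the locally unidentifiable direction; the abstract's point is that such a local test can miss unidentifiabilities when measurement uncertainty is non-negligible.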

  13. Predicting failure to return to work.

    PubMed

    Mills, R

    2012-08-01

    The research question is: is it possible to predict, at the time of workers' compensation claim lodgement, which workers will have a prolonged return to work (RTW) outcome? This paper illustrates how a traditional analytic approach to the analysis of an existing large database can be insufficient to answer the research question, and suggests an alternative data management and analysis approach. This paper retrospectively analyses 9018 workers' compensation claims from two different workers' compensation jurisdictions in Australia (two data sets) over a 4-month period in 2007. De-identified data, submitted at the time of claim lodgement, were compared with RTW outcomes for up to 3 months. Analysis consisted of descriptive, parametric (analysis of variance and multiple regression), survival (proportional hazards) and data mining (partitioning) analysis. No significant associations were found on parametric analysis. Multiple associations were found between the predictor variables and RTW outcome on survival analysis, with marked differences found between some sub-groups on partitioning, where diagnosis was the strongest discriminator (particularly neck and shoulder injuries). There was a consistent trend for female gender to be associated with a prolonged RTW outcome. The supplied data were not sufficient to enable the development of a predictive model. If we want to predict early who will have a prolonged RTW in Australia, workers' compensation claim forms should be redesigned, data management improved and specialised analytic techniques used. © 2011 The Author. Internal Medicine Journal © 2011 Royal Australasian College of Physicians.

  14. Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.

    2013-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread or uncertainty in CME arrival time predictions due to uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real-time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real-time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits; using the actual arrival times, an average absolute error of 8.20 hours was found across the twelve ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly its mean and spread.
    When an observed arrival falls outside the predicted range, the tested CME input parameters can still be ruled out as the source of the prediction error. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and from other limitations. Additionally, the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME.
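    The 8.20-hour figure is a mean absolute error over the per-event ensemble-average arrival predictions. A sketch of that metric with hypothetical numbers (not the study's events):

```python
import statistics

def arrival_mae(predicted_hours, observed_hours):
    """Mean absolute error (hours) of ensemble-average arrival predictions."""
    errors = [abs(p - o) for p, o in zip(predicted_hours, observed_hours)]
    return statistics.mean(errors)

# Hypothetical ensemble-average predictions vs. observed arrivals,
# in hours after some common epoch (illustrative values only):
predicted = [40.0, 55.5, 61.0]
observed  = [47.5, 50.0, 69.0]
print(arrival_mae(predicted, observed))  # 7.0
```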

  15. Genomic assisted selection for enhancing line breeding: merging genomic and phenotypic selection in winter wheat breeding programs with preliminary yield trials.

    PubMed

    Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Akgöl, Batuhan; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann

    2017-02-01

    Early generation genomic selection is superior to conventional phenotypic selection in line breeding and can be strongly improved by including additional information from preliminary yield trials. The selection of lines that enter resource-demanding multi-environment trials is a crucial decision in every line breeding program, as a large amount of resources is allocated to thoroughly testing these potential varietal candidates. We compared conventional phenotypic selection with various genomic selection approaches across multiple years, as well as the merit of integrating phenotypic information from preliminary yield trials into the genomic selection framework. The prediction accuracy using only phenotypic data was rather low (r = 0.21) for grain yield but could be improved by modeling genetic relationships in unreplicated preliminary yield trials (r = 0.33). Genomic selection models were nevertheless found to be superior to conventional phenotypic selection for predicting grain yield performance of lines across years (r = 0.39). We subsequently simplified the problem of predicting untested lines in untested years to predicting tested lines in untested years by combining breeding values from preliminary yield trials and predictions from genomic selection models through a heritability index. This genomic assisted selection led to a 20% increase in prediction accuracy, which could be further enhanced by an appropriate marker selection for both grain yield (r = 0.48) and protein content (r = 0.63). The easy-to-implement and robust genomic assisted selection thus gave a higher prediction accuracy than either conventional phenotypic or genomic selection alone. The proposed method took the complex inheritance of both low and high heritable traits into account and appears capable of supporting breeders in their selection decisions to develop enhanced varieties more efficiently.
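    A highly simplified sketch of blending the two information sources with a heritability-type weight; the paper's actual index is more elaborate, and the function name and numbers here are illustrative only:

```python
def combined_index(pheno_bv, genomic_ebv, h2):
    """Blend a phenotypic breeding value from preliminary yield trials
    with a genomic prediction, weighting the phenotype by a
    heritability-type coefficient h2 in [0, 1]. Simplified illustration,
    not the paper's exact formula."""
    return h2 * pheno_bv + (1.0 - h2) * genomic_ebv

# Low-heritability trait (e.g. yield): lean on the genomic prediction.
print(round(combined_index(1.0, 0.4, 0.25), 2))  # 0.55
# High-heritability trait (e.g. protein): lean on the phenotype.
print(round(combined_index(1.0, 0.4, 0.80), 2))  # 0.88
```

    The design intent is that as trial heritability rises, the index shifts weight from the genomic prediction toward the observed phenotype.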

  16. Exoplanet Yield Estimation for Decadal Study Concepts using EXOSIMS

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Lowrance, Patrick; Savransky, Dmitry; Garrett, Daniel

    2016-01-01

    The anticipated upcoming large mission study concepts for the direct imaging of exo-earths present an exciting opportunity for exoplanet discovery and characterization. While these telescope concepts would also be capable of conducting a broad range of astrophysical investigations, the most difficult technology challenges are driven by the requirements for imaging exo-earths. The exoplanet science yield of these mission concepts will drive design trades and mission concept comparisons. To assist in these trade studies, the Exoplanet Exploration Program Office (ExEP) is developing a yield estimation tool that emphasizes transparency and consistent comparison of various design concepts. The tool will provide a parametric estimate of the science yield of various mission concepts using contrast curves from physics-based model codes and Monte Carlo simulations of design reference missions under realistic constraints, such as solar avoidance angles, the observatory orbit, propulsion limitations of star shades, the accessibility of candidate targets, local and background zodiacal light levels, and background confusion by stars and galaxies. The Python tool utilizes Dmitry Savransky's EXOSIMS (Exoplanet Open-Source Imaging Mission Simulator) design reference mission simulator, which is being developed for the WFIRST Preliminary Science program. ExEP is extending and validating the tool for future mission concepts under consideration for the upcoming 2020 decadal review. We present a validation plan and preliminary yield results for a point design.
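    A toy Monte Carlo yield estimate, far cruder than EXOSIMS' design-reference-mission simulations, but showing the basic idea of simulating many mission realizations and averaging the detection count (all parameters hypothetical):

```python
import random

def monte_carlo_yield(n_realizations, n_targets, p_detect):
    """Crude sketch: expected exo-earth detections per mission, treating
    each target as an independent Bernoulli trial with a single fixed
    detection probability. Real simulators draw planets, orbits, and
    observing constraints per realization."""
    random.seed(0)  # reproducible toy run
    totals = [sum(random.random() < p_detect for _ in range(n_targets))
              for _ in range(n_realizations)]
    return sum(totals) / n_realizations

print(monte_carlo_yield(2000, 50, 0.1))  # close to 50 * 0.1 = 5 detections
```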

  17. Photoactive roadways: Determination of CO, NO and VOC uptake coefficients and photolabile side product yields on TiO2 treated asphalt and concrete

    NASA Astrophysics Data System (ADS)

    Toro, C.; Jobson, B. T.; Haselbach, L.; Shen, S.; Chung, S. H.

    2016-08-01

    This work reports uptake coefficients and by-product yields of ozone precursors on two photocatalytic paving materials (asphalt and concrete) treated with a commercial TiO2 surface application product. The experimental approach used a continuously stirred tank reactor (CSTR) and allowed for testing large samples with the same surface morphology encountered on real urban surfaces. The measured uptake coefficients (γgeo) and surface resistances are useful for parametrizing dry deposition velocities in air quality model evaluation of the impact of photoactive surfaces on urban air chemistry. At 46% relative humidity, the surface resistance to NO uptake was ∼1 s cm⁻¹ for concrete and ∼2 s cm⁻¹ for a freshly coated older roadway asphalt sample. HONO and NO2 were detected as by-products of NO uptake on asphalt, with NO2 molar yields on the order of 20% and HONO molar yields ranging between 14 and 33%. For concrete samples, the NO2 molar yields increased with increasing water vapor, ranging from 1% to 35%, and HONO was not detected as a by-product. Uptake of monoaromatic VOCs on the asphalt sample set displayed a dependence on compound vapor pressure and was influenced by competitive adsorption of less volatile VOCs. Formaldehyde and acetaldehyde were detected as by-products, with molar yields ranging from 5 to 32%.
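    Surface resistances like the ∼1-2 s cm⁻¹ values above enter air quality models through a resistance-in-series dry deposition velocity. A sketch with assumed aerodynamic and quasi-laminar resistances (those two values are placeholders, not from the study):

```python
def deposition_velocity(r_a, r_b, r_s):
    """Resistance-in-series dry deposition velocity (cm/s):
    v_d = 1 / (r_a + r_b + r_s), with the aerodynamic (r_a),
    quasi-laminar (r_b) and surface (r_s) resistances in s/cm."""
    return 1.0 / (r_a + r_b + r_s)

# r_s ~ 1 s/cm for NO on treated concrete (from the study);
# r_a and r_b below are assumed for illustration.
print(deposition_velocity(0.5, 0.5, 1.0))  # 0.5 cm/s
```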

  18. Experimental Characterization and Material Modelling of an AZ31 Magnesium Sheet Alloy at Elevated Temperatures under Consideration of the Tension-Compression Asymmetry

    NASA Astrophysics Data System (ADS)

    Behrens, B.-A.; Bouguecha, A.; Bonk, C.; Dykiert, M.

    2017-09-01

    Magnesium sheet alloys have great potential as a construction material in the aerospace and automotive industries. However, literature on temperature-dependent material parameters describing the plastic behaviour of magnesium sheet alloys is scarce, and accurate statements concerning yield criteria and appropriate characterization tests for describing the plastic behaviour of a magnesium sheet alloy at elevated temperatures in deep drawing processes remain to be established. Hence, in this paper the plastic behaviour of the well-established magnesium sheet alloy AZ31 has been characterized by means of suitable mechanical tests (e.g. tension, compression and biaxial tests) at temperatures between 180 and 230 °C. In this manner, anisotropic and hardening behaviour as well as the tension-compression asymmetry of the yield locus have been estimated. Furthermore, using the data evaluated from the above-mentioned tests, two different yield criteria have been parametrized: the commonly used Hill’48 and an orthotropic yield criterion, CPB2006, which was developed especially for materials with a hexagonal close-packed lattice structure and is able to describe asymmetric yielding under tensile and compressive stress states. Numerical simulations have finally been carried out with both yield functions in order to assess the accuracy of the material models.
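    A minimal sketch of the Hill'48 criterion under plane stress, the simpler of the two yield functions parametrized in the paper. The default coefficients below reduce to the isotropic von Mises case; real sheet parameters are fitted from r-values or directional yield stresses:

```python
import math

def hill48_plane_stress(s11, s22, s12, F=0.5, G=0.5, H=0.5, N=1.5):
    """Hill'48 equivalent stress under plane stress (s33 = 0):
    sigma_eq = sqrt(F*s22^2 + G*s11^2 + H*(s11 - s22)^2 + 2*N*s12^2).
    Yielding occurs when sigma_eq reaches the reference yield stress."""
    return math.sqrt(F * s22**2 + G * s11**2
                     + H * (s11 - s22)**2 + 2.0 * N * s12**2)

print(hill48_plane_stress(100.0, 0.0, 0.0))    # 100.0 (uniaxial tension)
print(round(hill48_plane_stress(0.0, 0.0, 100.0), 1))  # 173.2 (pure shear)
```

    Note that Hill'48 is symmetric in tension and compression; capturing the tension-compression asymmetry reported above is precisely why the paper also parametrizes CPB2006.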

  19. A complete genetic linkage map and QTL analyses for bast fibre quality traits, yield and yield components in jute (Corchorus olitorius L.).

    PubMed

    Topdar, N; Kundu, A; Sinha, M K; Sarkar, D; Das, M; Banerjee, S; Kar, C S; Satya, P; Balyan, H S; Mahapatra, B S; Gupta, P K

    2013-01-01

    We report the first complete microsatellite genetic map of jute (Corchorus olitorius L.; 2n = 2x = 14) using an F6 recombinant inbred population. Of the 403 microsatellite markers screened, 82 were mapped on the seven linkage groups (LGs) that covered a total genetic distance of 799.9 cM, with an average marker interval of 10.7 cM. LG5 had the longest and LG7 the shortest genetic lengths, whereas LG1 had the maximum and LG7 the minimum number of markers. Segregation distortion of microsatellite loci was high (61%), with the majority of them (76%) skewed towards the female parent. Genomewide non-parametric single-marker analysis in combination with multiple quantitative trait loci (QTL)-models (MQM) mapping detected 26 definitive QTLs for bast fibre quality, yield and yield-related traits. These were unevenly distributed on six LGs, as colocalized clusters, at genomic sectors marked by 15 microsatellite loci. LG1 was the QTL-richest map sector, with the densest colocalized clusters of QTLs governing fibre yield, yield-related traits and tensile strength. Expectedly, favorable QTLs were derived from the desirable parents, except for nearly all of those of fibre fineness, which might be due to the creation of new gene combinations. Our results will be a good starting point for further genome analyses in jute.

  20. Global Agriculture Yields and Conflict under Future Climate

    NASA Astrophysics Data System (ADS)

    Rising, J.; Cane, M. A.

    2013-12-01

    Aspects of climate have been shown to correlate significantly with conflict. We investigate a possible pathway for these effects through changes in agriculture yields, as predicted by field crop models (FAO's AquaCrop and DSSAT). Using satellite and station weather data, and surveyed data for soil and management, we simulate major crop yields across all countries between 1961 and 2008, and compare these to FAO and USDA reported yields. Correlations vary by country and by crop, from approximately .8 to -.5. Some of this range in crop model performance is explained by crop varieties, data quality, and other natural, economic, and political features. We also quantify the ability of AquaCrop and DSSAT to simulate yields under past cycles of ENSO as a proxy for their performance under changes in climate. We then describe two statistical models which relate crop yields to conflict events from the UCDP/PRIO Armed Conflict dataset. The first relates several preceding years of predicted yields of the major grain in each country to any conflict involving that country. The second uses the GREG ethnic group maps to identify differences in predicted yields between neighboring regions. By using variation in predicted yields to explain conflict, rather than actual yields, we can identify the exogenous effects of weather on conflict. Finally, we apply precipitation and temperature time-series under IPCC's A1B scenario to the statistical models. This allows us to estimate the scale of the impact of future yields on future conflict. [Figure captions: centroids of the major growing regions for each country's primary crop, based on USDA FAS consumption; correlations between simulated and reported yields for AquaCrop and DSSAT, assuming no irrigation, fertilization, or pest control, where reported yields are the average of FAO and USDA FAS yields where both are available.]

  1. SU-D-BRB-01: A Comparison of Learning Methods for Knowledge Based Dose Prediction for Coplanar and Non-Coplanar Liver Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tran, A; Ruan, D; Woods, K

    Purpose: The predictive power of knowledge based planning (KBP) has considerable potential in the development of automated treatment planning. Here, we examine the predictive capabilities and accuracy of previously reported KBP methods, as well as an artificial neural networks (ANN) method. Furthermore, we compare the predictive accuracy of these methods on coplanar volumetric-modulated arc therapy (VMAT) and non-coplanar 4π radiotherapy. Methods: 30 liver SBRT patients previously treated using coplanar VMAT were selected for this study. The patients were re-planned using 4π radiotherapy, which involves 20 optimally selected non-coplanar IMRT fields. ANNs were used to incorporate enhanced geometric information including livermore » and PTV size, prescription dose, patient girth, and proximity to beams. The performance of ANN was compared to three methods from statistical voxel dose learning (SVDL), wherein the doses of voxels sharing the same distance to the PTV are approximated by either taking the median of the distribution, non-parametric fitting, or skew-normal fitting. These three methods were shown to be capable of predicting DVH, but only median approximation can predict 3D dose. Prediction methods were tested using leave-one-out cross-validation tests and evaluated using residual sum of squares (RSS) for DVH and 3D dose predictions. Results: DVH prediction using non-parametric fitting had the lowest average RSS with 0.1176(4π) and 0.1633(VMAT), compared to 0.4879(4π) and 1.8744(VMAT) RSS for ANN. 3D dose prediction with median approximation had lower RSS with 12.02(4π) and 29.22(VMAT), compared to 27.95(4π) and 130.9(VMAT) for ANN. Conclusion: Paradoxically, although the ANNs included geometric features in addition to the distances to the PTV, it did not perform better in predicting DVH or 3D dose compared to simpler, faster methods based on the distances alone. 
The study further confirms that predictions for 4π non-coplanar plans were more accurate than those for VMAT plans. NIH R43CA183390 and R01CA188300.

  2. Comparison of statistical models for analyzing wheat yield time series.

    PubMed

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.
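    The Holt-Winters family of models named above can be illustrated with a minimal sketch. The following is not the authors' implementation: it is Holt's linear-trend exponential smoothing (the non-seasonal core of Holt-Winters) applied to a yield series, with smoothing constants chosen purely for illustration.

```python
def holt_forecast(y, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear-trend exponential smoothing.

    y: yield time series (oldest first); alpha/beta: illustrative
    smoothing constants for level and trend; returns the
    horizon-step-ahead forecast from the end of the series.
    """
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend
```

    On a series with a perfectly steady trend the recursion tracks the trend exactly, which is why such models follow gradual yield gains well; it is the flattening of the estimated trend term that signals the yield stagnation discussed above.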

  3. Polyelectrolyte scaling laws for microgel yielding near jamming.

    PubMed

    Bhattacharjee, Tapomoy; Kabb, Christopher P; O'Bryan, Christopher S; Urueña, Juan M; Sumerlin, Brent S; Sawyer, W Gregory; Angelini, Thomas E

    2018-02-28

    Micro-scale hydrogel particles, known as microgels, are used in industry to control the rheology of numerous different products, and are also used in experimental research to study the origins of jamming and glassy behavior in soft-sphere model systems. At the macro-scale, the rheological behaviour of densely packed microgels has been thoroughly characterized; at the particle-scale, careful investigations of jamming, yielding, and glassy-dynamics have been performed through experiment, theory, and simulation. However, at low packing fractions near jamming, the connection between microgel yielding phenomena and the physics of their constituent polymer chains has not been made. Here we investigate whether basic polymer physics scaling laws predict macroscopic yielding behaviours in packed microgels. We measure the yield stress and cross-over shear-rate in several different anionic microgel systems prepared at packing fractions just above the jamming transition, and show that our data can be predicted from classic polyelectrolyte physics scaling laws. We find that diffusive relaxations of microgel deformation during particle re-arrangements can predict the shear-rate at which microgels yield, and the elastic stress associated with these particle deformations predict the yield stress.

  4. Predicting multi-wall structural response to hypervelocity impact using the hull code

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.

    1993-01-01

    Previously, multi-wall structures have been analyzed extensively, primarily through experiment, as a means of increasing the meteoroid/space debris impact protection of spacecraft. As structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative to experimental testing, numerical modeling of high-speed impact phenomena is increasingly used to predict the response of a variety of structural systems under different impact loading conditions. The results of comparing experimental tests to Hull Hydrodynamic Computer Code predictions are reported. Also, the results of a numerical parametric study of multi-wall structural response to hypervelocity cylindrical projectile impact are presented.

  5. Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Walberg, Gerald D.

    1993-01-01

    Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.

  6. Comment on 'Parametrization of Stillinger-Weber potential based on a valence force field model: application to single-layer MoS2 and black phosphorus'.

    PubMed

    Midtvedt, Daniel; Croy, Alexander

    2016-06-10

    We compare the simplified valence-force model for single-layer black phosphorus with the original model and recent ab initio results. Using an analytic approach and numerical calculations we find that the simplified model yields Young's moduli that are smaller compared to the original model and are almost a factor of two smaller than ab initio results. Moreover, the Poisson ratios are an order of magnitude smaller than values found in the literature.

  7. High-power picosecond fiber source for coherent Raman microscopy

    PubMed Central

    Kieu, Khanh; Saar, Brian G.; Holtom, Gary R.; Xie, X. Sunney; Wise, Frank W.

    2011-01-01

    We report a high-power picosecond fiber pump laser system for coherent Raman microscopy (CRM). The fiber laser system generates 3.5 ps pulses with 6 W average power at 1030 nm. Frequency doubling yields more than 2 W of green light, which can be used to pump an optical parametric oscillator to produce the pump and the Stokes beams for CRM. Detailed performance data on the laser and the various wavelength conversion steps are discussed, together with representative CRM images of fresh animal tissue obtained with the new source. PMID:19571996

  8. Modularity, quaternion-Kähler spaces, and mirror symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, Sergei; Banerjee, Sibasish

    2013-10-15

    We provide an explicit twistorial construction of quaternion-Kähler manifolds obtained by deformation of c-map spaces and carrying an isometric action of the modular group SL(2,Z). The deformation is not assumed to preserve any continuous isometry and therefore this construction presents a general framework for describing NS5-brane instanton effects in string compactifications with N = 2 supersymmetry. In this context the modular invariant parametrization of twistor lines found in this work yields the complete non-perturbative mirror map between type IIA and type IIB physical fields.

  9. A comparison of two adaptive multivariate analysis methods (PLSR and ANN) for winter wheat yield forecasting using Landsat-8 OLI images

    NASA Astrophysics Data System (ADS)

    Chen, Pengfei; Jing, Qi

    2017-02-01

    An assumption that a non-linear method is more reasonable than a linear method when canopy reflectance is used to establish a yield prediction model was proposed and tested in this study. For this purpose, partial least squares regression (PLSR) and artificial neural networks (ANN), representing linear and non-linear analysis methods respectively, were applied and compared for wheat yield prediction. Multi-period Landsat-8 OLI images were collected at two different wheat growth stages, and a field campaign was conducted to obtain grain yields at selected sampling sites in 2014. The field data were divided into a calibration database and a testing database. Using the calibration data, a cross-validation concept was introduced for the PLSR and ANN model construction to prevent over-fitting. All models were tested using the test data. The ANN yield-prediction model produced R², RMSE and RMSE% values of 0.61, 979 kg ha⁻¹, and 10.38%, respectively, in the testing phase, performing better than the PLSR yield-prediction model, which produced R², RMSE, and RMSE% values of 0.39, 1211 kg ha⁻¹, and 12.84%, respectively. The non-linear method was therefore suggested as the better method for yield prediction.
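    The evaluation statistics quoted above (R², RMSE and RMSE%) can be computed with a short generic helper; this is not code from the study, and the sample numbers in the test below are invented for illustration.

```python
import math

def yield_metrics(observed, predicted):
    """R², RMSE, and RMSE% (RMSE relative to the mean observed yield)."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    rmse = math.sqrt(ss_res / n)
    return {"r2": 1.0 - ss_res / ss_tot,
            "rmse": rmse,
            "rmse_pct": 100.0 * rmse / mean_obs}
```

    Expressing RMSE as a percentage of the mean observed yield is what allows the 10.38% (ANN) and 12.84% (PLSR) figures to be compared independently of the absolute yield scale.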

  10. Syngas production by chemical-looping gasification of wheat straw with Fe-based oxygen carrier.

    PubMed

    Hu, Jianjun; Li, Chong; Guo, Qianhui; Dang, Jiatao; Zhang, Quanguo; Lee, Duu-Jong; Yang, Yunlong

    2018-05-03

    The iron-based oxygen carriers (OCs), Fe2O3/support (Al2O3, TiO2, SiO2 and ZrO2), for chemical looping gasification of wheat straw were prepared using the impregnation method. The surface morphology, crystal structure, carbon deposition potential, lattice oxygen activity and selectivity of the resulting OCs were examined. The Fe2O3/Al2O3 OC at 60% loading had the highest H2 yield, H2/CO ratio, gas yield, and carbon conversion amongst the tested OCs. Parametric studies revealed that an optimal Fe2O3 loading of 60%, a steam-to-biomass ratio of 0.8 and an oxygen carrier-to-biomass ratio of 1.0 led to the maximum H2/CO ratio, gas yield, H2 + CO ratio, and carbon conversion from the gasified wheat straw. High temperature, up to 950 °C, enhanced the gasification performance. A kinetic network was proposed to interpret the experimental results. The lattice oxygen provided by the prepared Fe2O3/Al2O3 oxygen carriers promotes chemical looping gasification efficiency for wheat straw.

  11. Rapid analysis of composition and reactivity in cellulosic biomass feedstocks with near-infrared spectroscopy.

    PubMed

    Payne, Courtney E; Wolfrum, Edward J

    2015-01-01

    Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. It is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
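    The PLS calibration approach described above can be sketched with a one-latent-variable PLS1 regression (a single NIPALS step on mean-centered data). This is a simplified illustration, not the study's multi-component PLS-2 models, and the data in the test are synthetic.

```python
def pls1_coefficients(X, y):
    """One-component PLS1 regression on mean-centered data.

    X: list of samples, each a list of predictor values (e.g. NIR
    absorbances); y: list of responses (e.g. glucan wt%).
    Returns (intercept, coefficients) for predicting y from a raw sample.
    """
    n, m = len(X), len(X[0])
    xbar = [sum(row[j] for row in X) / n for j in range(m)]
    ybar = sum(y) / n
    Xc = [[row[j] - xbar[j] for j in range(m)] for row in X]
    yc = [v - ybar for v in y]
    # weight vector w proportional to X'y, normalized
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(m)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # scores t = Xw, loadings p = X't/t't, inner regression q = t'y/t't
    t = [sum(Xc[i][j] * w[j] for j in range(m)) for i in range(n)]
    tt = sum(v * v for v in t)
    p = [sum(Xc[i][j] * t[i] for i in range(n)) / tt for j in range(m)]
    q = sum(t[i] * yc[i] for i in range(n)) / tt
    # one-component regression vector b = w q / (p'w), then recenter
    pw = sum(p[j] * w[j] for j in range(m))
    b = [w[j] * q / pw for j in range(m)]
    return ybar - sum(b[j] * xbar[j] for j in range(m)), b
```

    Real calibrations stack many such components and validate them on held-out samples, which is where the reported model uncertainties come from.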

  12. Weather-based forecasts of California crop yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobell, D B; Cahill, K N; Field, C B

    2005-09-26

    Crop yield forecasts provide useful information to a range of users. Yields for several crops in California are currently forecast based on field surveys and farmer interviews, while for many crops official forecasts do not exist. As broad-scale crop yields are largely dependent on weather, measurements from existing meteorological stations have the potential to provide a reliable, timely, and cost-effective means to anticipate crop yields. We developed weather-based models of state-wide yields for 12 major California crops (wine grapes, lettuce, almonds, strawberries, table grapes, hay, oranges, cotton, tomatoes, walnuts, avocados, and pistachios), and tested their accuracy using cross-validation over the 1980-2003 period. Many crops were forecast with high accuracy, as judged by the percent of yield variation explained by the forecast, the number of years with correctly predicted direction of yield change, or the number of years with correctly predicted extreme yields. The most successfully modeled crop was almonds, with 81% of yield variance captured by the forecast. Predictions for most crops relied on weather measurements well before harvest time, allowing for lead times that were longer than existing procedures in many cases.
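    The cross-validated skill assessment described above can be sketched as follows; the regression is a deliberately simple one-predictor linear model (the study's actual weather predictors and model forms are not reproduced here), and the hypothetical data in the test lie on an exact line.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b

def loo_forecasts(xs, ys):
    """Leave-one-out cross-validation: forecast each year's yield from a
    model fit on all other years, mimicking an out-of-sample forecast."""
    preds = []
    for i in range(len(xs)):
        xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_line(xt, yt)
        preds.append(a + b * xs[i])
    return preds
```

    The percent of yield variation explained by the forecast (e.g. the 81% quoted for almonds) is then computed from these held-out predictions rather than from the in-sample fit, which guards against overstating skill.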

  13. Multi-parametric studies of electrically-driven flyer plates

    NASA Astrophysics Data System (ADS)

    Neal, William; Bowden, Michael; Explosive Trains; Devices Collaboration

    2015-06-01

    Exploding foil initiator (EFI) detonators function by the acceleration of a flyer plate, driven by the electrical explosion of a metallic bridge, into an explosive pellet. The length, and therefore time, scales of this shock initiation process are dominated by the magnitude and duration of the imparted shock pulse. To predict the dynamics of this initiation, it is critical to further understand the velocity, shape and thickness of the flyer plate. This study uses multi-parametric diagnostics to investigate the geometry and velocity of the flyer plate at impact, as well as the imparted electrical energy: photon Doppler velocimetry (PDV), dual-axis imaging, time-resolved impact imaging, and voltage and current measurements. The investigation challenges the validity of traditional assumptions about the state of the flyer plate at impact and discusses the improved understanding of the process.

  14. Broadband and tunable optical parametric generator for remote detection of gas molecules in the short and mid-infrared.

    PubMed

    Lambert-Girard, Simon; Allard, Martin; Piché, Michel; Babin, François

    2015-04-01

    The development of a novel broadband and tunable optical parametric generator (OPG) is presented. The OPG properties are studied numerically and experimentally in order to optimize the generator's use in a broadband spectroscopic LIDAR operating in the short and mid-infrared. This paper discusses trade-offs to be made on the properties of the pump, crystal, and seeding signal in order to optimize the pulse spectral density and divergence while enabling energy scaling. A seed with a large spectral bandwidth is shown to enhance the pulse-to-pulse stability and optimize the pulse spectral density. A numerical model shows excellent agreement with output power measurements; the model predicts that a pump having a large number of longitudinal modes improves conversion efficiency and pulse stability.

  15. Nuclear ``pasta'' phase within density dependent hadronic models

    NASA Astrophysics Data System (ADS)

    Avancini, S. S.; Brito, L.; Marinelli, J. R.; Menezes, D. P.; de Moraes, M. M. W.; Providência, C.; Santos, A. M.

    2009-03-01

    In the present paper, we investigate the onset of the “pasta” phase with different parametrizations of the density dependent hadronic model and compare the results with one of the usual parametrizations of the nonlinear Walecka model. The influence of the scalar-isovector virtual δ meson is shown. At zero temperature, two different methods are used, one based on coexisting phases and the other on the Thomas-Fermi approximation. At finite temperature, only the coexisting-phases method is used. npe matter with fixed proton fractions and in β equilibrium is studied. We compare our results with restrictions imposed on the values of the density and pressure at the inner edge of the crust, obtained from observations of the Vela pulsar and recent isospin diffusion data from heavy-ion reactions, and with predictions from spinodal calculations.

  16. Linear and nonlinear analysis of fluid slosh dampers

    NASA Astrophysics Data System (ADS)

    Sayar, B. A.; Baumgarten, J. R.

    1982-11-01

    A vibrating structure and a container partially filled with fluid are considered coupled in a free vibration mode. To simplify the mathematical analysis, a pendulum model to duplicate the fluid motion and a mass-spring dashpot representing the vibrating structure are used. The equations of motion are derived by Lagrange's energy approach and expressed in parametric form. For a wide range of parametric values the logarithmic decrements of the main system are calculated from theoretical and experimental response curves in the linear analysis. However, for the nonlinear analysis the theoretical and experimental response curves of the main system are compared. Theoretical predictions are justified by experimental observations with excellent agreement. It is concluded finally that for a proper selection of design parameters, containers partially filled with viscous fluids serve as good vibration dampers.
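    The logarithmic decrements mentioned above come directly from successive peaks of a free-decay response curve; a minimal generic sketch (not tied to the paper's pendulum model), with the peak values in the test invented for illustration:

```python
import math

def log_decrement(peaks):
    """Average logarithmic decrement from successive free-decay peaks,
    delta = (1/n) * ln(x_0 / x_n) over n cycles."""
    cycles = len(peaks) - 1
    return math.log(peaks[0] / peaks[-1]) / cycles

def damping_ratio(delta):
    """Equivalent viscous damping ratio implied by decrement delta."""
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
```

    Comparing the decrement of the structure with and without the fluid-filled container quantifies how much damping the slosh absorber adds.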

  17. Witnessing entanglement without entanglement witness operators.

    PubMed

    Pezzè, Luca; Li, Yan; Li, Weidong; Smerzi, Augusto

    2016-10-11

    Quantum mechanics predicts the existence of correlations between composite systems that, although puzzling to our physical intuition, enable technologies not accessible in a classical world. Notwithstanding, there is still no efficient general method to theoretically quantify and experimentally detect entanglement of many qubits. Here we propose to detect entanglement by measuring the statistical response of a quantum system to an arbitrary nonlocal parametric evolution. We witness entanglement without relying on the tomographic reconstruction of the quantum state, or the realization of witness operators. The protocol requires two collective settings for any number of parties and is robust against noise and decoherence occurring after the implementation of the parametric transformation. To illustrate its user friendliness we demonstrate multipartite entanglement in different experiments with ions and photons by analyzing published data on fidelity visibilities and variances of collective observables.

  18. Non-linear wave interaction in a magnetoplasma column. I - Theory. II - Experiment

    NASA Technical Reports Server (NTRS)

    Larsen, J.-M.; Crawford, F. W.

    1979-01-01

    The paper presents an analysis of non-linear three-wave interaction for propagation along a cylindrical plasma column surrounded either by a metallic boundary, or by an infinite dielectric, and immersed in an infinite, static, axial magnetic field. An averaged Lagrangian method is used and the results are specialized to parametric amplification and mode conversion, assuming an undepleted pump wave. Computations are presented for a magneto-plasma column surrounded by free space, indicating that parametric growth rates of the order of a fraction of a decibel per centimeter should be obtainable for plausible laboratory plasma parameters. In addition, experiments on non-linear mode conversion in a cylindrical magnetoplasma column are described. The results are compared with the theoretical predictions and good qualitative agreement is demonstrated.

  19. Saturation of low-threshold two-plasmon parametric decay leading to excitation of one localized upper hybrid wave

    NASA Astrophysics Data System (ADS)

    Gusakov, E. Z.; Popov, A. Yu.; Saveliev, A. N.

    2018-06-01

    We analyze the saturation of the low-threshold absolute parametric decay instability of an extraordinary pump wave leading to the excitation of two upper hybrid (UH) waves, only one of which is trapped in the vicinity of a local maximum of the plasma density profile. The pump depletion and the secondary decay of the localized daughter UH wave are treated as the most likely moderators of a primary two-plasmon decay instability. The reduced equations describing the nonlinear saturation phenomena are derived. The general analytical consideration is accompanied by the numerical analysis performed under the experimental conditions typical of the off-axis X2-mode ECRH experiments at TEXTOR. The possibility of substantial (up to 20%) anomalous absorption of the pump wave is predicted.

  20. Radioactivity Registered With a Small Number of Events

    NASA Astrophysics Data System (ADS)

    Zlokazov, Victor; Utyonkov, Vladimir

    2018-02-01

    The synthesis of superheavy elements requires the analysis of low-statistics experimental data, presumed to obey an unknown exponential distribution, and a decision on whether the data originate from one source or contain admixtures. Here we analyze predictions following from non-parametric methods, employing only such fundamental sample properties as the sample mean, the median and the mode.

  1. Program For Optimization Of Nuclear Rocket Engines

    NASA Technical Reports Server (NTRS)

    Plebuch, R. K.; Mcdougall, J. K.; Ridolphi, F.; Walton, James T.

    1994-01-01

    NOP is a versatile digital-computer program developed for parametric analysis of beryllium-reflected, graphite-moderated nuclear rocket engines. Facilitates analysis of engine performance with respect to such considerations as specific impulse, engine power, type of engine cycle, and engine-design constraints arising from complications of fuel loading and internal gradients of temperature. Predicts minimum weight for specified performance.

  2. Modeling of the blood rheology in steady-state shear flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apostolidis, Alex J.; Beris, Antony N., E-mail: beris@udel.edu

    We undertake here a systematic study of the rheology of blood in steady-state shear flows. As blood is a complex fluid, the first question that we try to answer is whether, even in steady-state shear flows, we can model it as a rheologically simple fluid, i.e., whether we can describe its behavior through a constitutive model that involves only local kinematic quantities. Having answered that question positively, we then probe which non-Newtonian model best fits available shear stress vs shear-rate literature data. We show that under physiological conditions blood is typically viscoplastic, i.e., it exhibits a yield stress that acts as a minimum threshold for flow. We further show that the Casson model emerges naturally as the best approximation, at least for low and moderate shear-rates. We then develop systematically a parametric dependence of the rheological parameters entering the Casson model on key physiological quantities, such as the red blood cell volume fraction (hematocrit). For the yield stress, we base our description on its critical, percolation-originated nature. Thus, we first determine onset conditions, i.e., the critical threshold value that the hematocrit has to reach in order for a yield stress to appear. It is shown that this is a function of the concentration of a key red blood cell binding protein, fibrinogen. Then, we establish a parametric dependence as a function of the fibrinogen concentration and the square of the difference of the hematocrit from its critical onset value. Similarly, we provide an expression for the Casson viscosity, in terms of the hematocrit and the temperature. A successful validation of the proposed formula is performed against additional experimental literature data. The proposed expression is anticipated to be useful not only for steady-state blood flow modeling but also as the starting point for transient shear, or more general, flow modeling.
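    The Casson model identified above is linear in square-root variables, sqrt(tau) = sqrt(tau_y) + sqrt(mu_c)*sqrt(gamma_dot), so its two parameters can be recovered by ordinary least squares in that transformed space. A generic fitting sketch (the paper's hematocrit- and fibrinogen-dependent parametrization is not reproduced; the test data are synthetic):

```python
import math

def fit_casson(shear_rates, shear_stresses):
    """Fit the Casson model by least squares in sqrt space.

    Returns (tau_y, mu_c): the yield stress and Casson viscosity.
    """
    xs = [math.sqrt(g) for g in shear_rates]
    ys = [math.sqrt(t) for t in shear_stresses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # intercept = sqrt(tau_y), slope = sqrt(mu_c)
    return intercept ** 2, slope ** 2
```

    The squared intercept is the yield stress: the minimum stress below which the model predicts no flow, which is the viscoplastic threshold discussed above.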

  3. Prediction of thermal cycling induced matrix cracking

    NASA Technical Reports Server (NTRS)

    Mcmanus, Hugh L.

    1992-01-01

    Thermal fatigue has been observed to cause matrix cracking in laminated composite materials. A method is presented to predict transverse matrix cracks in composite laminates subjected to cyclic thermal load. Shear lag stress approximations and a simple energy-based fracture criteria are used to predict crack densities as a function of temperature. Prediction of crack densities as a function of thermal cycling is accomplished by assuming that fatigue degrades the material's inherent resistance to cracking. The method is implemented as a computer program. A simple experiment provides data on progressive cracking of a laminate with decreasing temperature. Existing data on thermal fatigue is also used. Correlations of the analytical predictions to the data are very good. A parametric study using the analytical method is presented which provides insight into material behavior under cyclical thermal loads.

  4. Application of GCM Ensemble Seasonal Climate Forecasts to Crop Yield Prediction in East Africa

    NASA Astrophysics Data System (ADS)

    Ogutu, G.; Franssen, W.; Supit, I.; Hutjes, R. W. A.

    2016-12-01

    We evaluated the potential use of ECMWF System-4 seasonal climate forecasts (S4) for impact analysis over East Africa. Using the 15-member, 7-month ensemble forecasts initiated every month for 1981-2010, we tested precipitation (tp), air temperature (tas) and surface shortwave radiation (rsds) forecast skill against the WATCH Forcing Data ERA-Interim (WFDEI) re-analysis and other data. We used these forecasts as input to the WOFOST crop model to predict maize yields. Forecast skill is assessed using the anomaly correlation coefficient (ACC), the Ranked Probability Skill Score (RPSS) and the Relative Operating Characteristic Skill Score (ROCSS) for the MAM, JJA and OND growing seasons. Predicted maize yields (S4-yields) are verified against historical observed FAO and nationally reported (NAT) yield statistics, and against yields from the same crop model forced by WFDEI (WFDEI-yields). Predictability of the climate forecasts varies with season, location and lead time. The OND tp forecasts show skill over a larger area up to three months lead time compared to MAM and JJA. Upper- and lower-tercile tp forecasts are 20-80% better than climatology. Good tas forecast skill is apparent with three months lead time. The rsds is less skillful than tp and tas in all seasons when verified against WFDEI, but more skillful against the other datasets. S4 forecasts capture ENSO-related anomalous years with region-dependent skill. The anomalous ENSO influence is also seen in simulated yields. Focusing on the main sowing dates in the northern (July), equatorial (March-April) and southern (December) regions, WFDEI-yields are lower than FAO and NAT yields, but the anomalies are comparable. Yield anomalies are predictable 3 months before sowing in most of the regions. Differences in interannual variability in the range of ±40% may be related to the sensitivity of WOFOST to drought stress, while the ACCs are largely positive, ranging from 0.3 to 0.6. Above- and below-normal yields are predictable with 2 months lead time.
We demonstrated the potential of using seasonal climate forecasts with a crop simulation model to predict anomalous maize yields over East Africa. The findings open a window to better use of climate forecasts in food security early warning systems, and in pre-season policy and farm management decisions.

  5. Combined valence bond-molecular mechanics potential-energy surface and direct dynamics study of rate constants and kinetic isotope effects for the H + C2H6 reaction.

    PubMed

    Chakraborty, Arindam; Zhao, Yan; Lin, Hai; Truhlar, Donald G

    2006-01-28

    This article presents a multifaceted study of the reaction H + C2H6 → H2 + C2H5 and three of its deuterium-substituted isotopologs. First we present high-level electronic structure calculations by the W1, G3SX, MCG3-MPWB, CBS-APNO, and MC-QCISD/3 methods that lead to a best estimate of the barrier height of 11.8 ± 0.5 kcal/mol. Then we obtain a specific reaction parameter for the MPW density functional in order that it reproduces the best estimate of the barrier height; this yields the MPW54 functional. The MPW54 functional, as well as the MPW60 functional that was previously parametrized for the H + CH4 reaction, is used with canonical variational theory with small-curvature tunneling to calculate the rate constants for all four ethane reactions from 200 to 2000 K. The final MPW54 calculations are based on curvilinear-coordinate generalized-normal-mode analysis along the reaction path, and they include scaled frequencies and an anharmonic C-C bond torsion. They agree with experiment within 31% for 467-826 K, except for a 38% deviation at 748 K; the results for the isotopologs are predictions, since these rate constants have never been measured. The kinetic isotope effects (KIEs) are analyzed to reveal the contributions from subsets of vibrational partition functions and from tunneling, which conspire to yield a nonmonotonic temperature dependence for one of the KIEs.
The stationary points and reaction-path potential of the MPW54 potential-energy surface are then used to parametrize a new kind of analytical potential-energy surface that combines a semiempirical valence bond formalism for the reactive part of the molecule with a standard molecular mechanics force field for the rest; this may be considered to be either an extension of molecular mechanics to treat a reactive potential-energy surface or a new kind of combined quantum-mechanical/molecular mechanical (QM/MM) method in which the QM part is semiempirical valence bond theory; that is, the new potential-energy surface is a combined valence bond molecular mechanics (CVBMM) surface. Rate constants calculated with the CVBMM surface agree with the MPW54 rate constants within 12% for 534-2000 K and within 23% for 200-491 K. The full CVBMM potential-energy surface is now available for use in a variety of dynamics calculations, and it provides a prototype for developing CVBMM potential-energy surfaces for other reactions.

  6. Monitoring Crop Yield in USA Using a Satellite-Based Climate-Variability Impact Index

    NASA Technical Reports Server (NTRS)

    Zhang, Ping; Anderson, Bruce; Tan, Bin; Barlow, Mathew; Myneni, Ranga

    2011-01-01

    A quantitative index is applied to monitor crop growth and predict agricultural yield in the continental USA. The Climate-Variability Impact Index (CVII), defined as the monthly contribution to overall anomalies in growth during a given year, is derived from 1-km MODIS Leaf Area Index. The growing-season integrated CVII can provide an estimate of the fractional change in overall growth during a given year. In turn these estimates can provide fine-scale and aggregated information on yield for various crops. Trained on historical records of crop production, a statistical model is used to predict crop yield during the growing season based upon the strong positive relationship between crop yield and the CVII. By examining the model prediction as a function of time, it is possible to determine when the in-season predictive capability plateaus and which months provide the greatest predictive capacity.

  7. Impact of signal scattering and parametric uncertainties on receiver operating characteristics

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.

    2017-05-01

    The receiver operating characteristic (ROC) curve, a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
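
    The tail-fattening effect can be illustrated with a small Monte Carlo sketch. The exponential energy statistics and lognormal spread of the mean power below are illustrative assumptions, not the paper's scattering model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def tail_prob(samples, threshold):
    """Empirical probability that the detector statistic exceeds a threshold."""
    return float((samples > threshold).mean())

# Known-parameter case: exponentially distributed energy with fixed mean power.
noise = rng.exponential(1.0, n)    # noise-only trials (mean power 1)
signal = rng.exponential(3.0, n)   # signal-plus-noise trials (mean power 3)

# Uncertain-parameter case: the mean power itself varies from trial to trial
# (lognormal spread), as for a scattered signal in a fluctuating environment.
noise_u = rng.exponential(1.0 * rng.lognormal(0.0, 0.5, n))
signal_u = rng.exponential(3.0 * rng.lognomal(0.0, 0.5, n)) if False else \
           rng.exponential(3.0 * rng.lognormal(0.0, 0.5, n))

thr = 10.0
pfa_fixed = tail_prob(noise, thr)      # very small for a fixed mean power
pfa_uncertain = tail_prob(noise_u, thr)  # substantially larger: fattened tail
```

    Because the uncertain-parameter tail is heavier, restoring a target false-alarm rate requires a higher threshold, which in turn sacrifices detection probability.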

  8. Pressure dependence of axisymmetric vortices in superfluid 3He-B

    NASA Astrophysics Data System (ADS)

    Fetter, Alexander L.

    1985-06-01

    The pressure dependence of the vortex core in rotating 3He-B is studied in the Ginzburg-Landau formalism with two distinct models of the strong-coupling corrections. The parametrization of Sauls and Serene [Phys. Rev. B 24, 183 (1981)] predicts a transition from a core with large magnetic moment below ~10 bars to one with small magnetic moment for higher pressures, in qualitative agreement with experiments. The earlier one-parameter model of Brinkman, Serene, and Anderson predicts no such transition, with the core having a large moment for all values of the parameter δ.

  9. Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.

    PubMed

    Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei

    2015-08-01

    In this brief, the use of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed that incorporates the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.

  10. Design and Optimization of an Austenitic TRIP Steel for Blast and Fragment Protection

    NASA Astrophysics Data System (ADS)

    Feinberg, Zechariah Daniel

    In light of the pervasive nature of terrorist attacks, there is a pressing need for the design and optimization of next generation materials for blast and fragment protection applications. Sadhukhan used computational tools and a systems-based approach to design TRIP-120---a fully austenitic transformation-induced plasticity (TRIP) steel. Current work more completely evaluates the mechanical properties of the prototype, optimizes the processing for high performance in tension and shear, and builds models with greater predictive power for the mechanical behavior and austenite stability. Under quasi-static and dynamic tension and shear, the design exhibits high strength and high uniform ductility as a result of a strain hardening effect that arises with martensitic transformation. Significantly more martensitic transformation occurred under quasi-static loading conditions (69% in tension and 52% in shear) compared to dynamic loading conditions (13% in tension and 5% in shear). Nonetheless, significant transformation occurs at high strain rates, which increases strain hardening, delays the onset of necking instability, and increases total energy absorption under adiabatic conditions. Although TRIP-120 effectively utilizes a TRIP effect to delay necking instability, a common trend of abrupt failure with limited fracture ductility was observed in tension and shear at all strain rates. Further characterization of the structure of TRIP-120 showed that an undesired grain boundary cellular reaction (η phase formation) consumed the fine dispersion of the metastable gamma' phase and limited the fracture ductility. A warm working procedure was added to the processing of TRIP-120 in order to eliminate the grain boundary cellular reaction from the structure. By eliminating η formation at the grain boundaries, warm-worked TRIP-120 exhibits a drastic improvement in the mechanical properties in tension and shear.
In quasi-static tension, the optimized warm-worked TRIP-120 with an Ms^σ(u.t.) of -13°C has a yield strength of 180 ksi (1241 MPa), uniform ductility of 0.303, and fracture ductility of 0.95, which corresponds to a 48% increase in yield strength, a 43% increase in uniform ductility, and a 254% increase in fracture ductility relative to the designed processing of TRIP-120. The highest performing condition of warm-worked TRIP-120 in quasi-static shear with an Ms^σ(sh) of 58°C exhibits a shear yield strength of 95.1 ksi (656 MPa), shear fracture strain of 144%, and energy dissipation density of 1099 MJ/m^3, which corresponds to a shear yield strength increase of 61%, a shear fracture strain increase of 55%, and an energy dissipation density increase of 76%. A wide range of austenite stabilities can be achieved by altering the heat treatment times and temperatures, which significantly alters the mechanical properties. Although performance cannot be optimized for tension and shear simultaneously, different heat treatments can be applied to warm-worked TRIP-120 to achieve high performance in tension or shear. Parametric models calibrated with three-dimensional atom probe data played a crucial role in guiding the predictive process optimization of TRIP-120. Such models have been built to provide the predictive capability of inputting warm working and aging conditions and outputting the resulting structure, austenite stability, and mechanical properties. The predictive power of computational models has helped identify processing conditions that have improved the performance of TRIP-120 in tension and shear and can be applied to future designs that optimize for adiabatic conditions.

  11. Robust simulation of buckled structures using reduced order modeling

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Perez, R. A.; Spottswood, S. M.

    2016-09-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever-present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that draw on a (computationally expensive) truth model but require no time stepping of it are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.

  12. Evaluating effects of developmental education for college students using a regression discontinuity design.

    PubMed

    Moss, Brian G; Yeaton, William H

    2013-10-01

    Annually, American colleges and universities provide developmental education (DE) to millions of underprepared students; however, evaluation estimates of DE benefits have been mixed. Using a prototypic exemplar of DE, our primary objective was to investigate the utility of a replicative evaluative framework for assessing program effectiveness. Within the context of the regression discontinuity (RD) design, this research examined the effectiveness of a DE program for five, sequential cohorts of first-time college students. Discontinuity estimates were generated for individual terms and cumulatively, across terms. Participants were 3,589 first-time community college students. DE program effects were measured by contrasting both college-level English grades and a dichotomous measure of pass/fail, for DE and non-DE students. Parametric and nonparametric estimates of overall effect were positive for continuous and dichotomous measures of achievement (grade and pass/fail). The variability of program effects over time was determined by tracking results within individual terms and cumulatively, across terms. Applying this replication strategy, DE's overall impact was modest (an effect size of approximately .20) but quite consistent, based on parametric and nonparametric estimation approaches. A meta-analysis of five RD results yielded virtually the same estimate as the overall, parametric findings. Subset analysis, though tentative, suggested that males benefited more than females, while academic gains were comparable for different ethnicities. The cumulative, within-study comparison, replication approach offers considerable potential for the evaluation of new and existing policies, particularly when effects are relatively small, as is often the case in applied settings.
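
    The sharp RD estimator at the heart of such a study can be sketched in a few lines: fit a separate line to each side of the assignment cutoff and read off the jump in fitted outcome at the cutoff. This is a hedged toy, not the study's analysis; the cutoff direction, effect size, and noise level below are invented for illustration:

```python
import numpy as np

def rd_effect(score, outcome, cutoff=0.0):
    """Sharp regression-discontinuity estimate: fit a line on each side of the
    cutoff and take the jump in predicted outcome at the cutoff itself."""
    left = score < cutoff
    slope_l, intercept_l = np.polyfit(score[left] - cutoff, outcome[left], 1)
    slope_r, intercept_r = np.polyfit(score[~left] - cutoff, outcome[~left], 1)
    return intercept_r - intercept_l  # difference of intercepts at the cutoff

# Synthetic placement-test data with a built-in program effect of 0.2 grade points.
rng = np.random.default_rng(1)
score = rng.uniform(-1, 1, 4000)
grade = 0.5 + 0.3 * score + 0.2 * (score >= 0) + rng.normal(0, 0.05, 4000)
effect = rd_effect(score, grade)  # recovers roughly 0.2
```

    Global linear fits correspond to the parametric estimates above; the nonparametric versions restrict the fit to a bandwidth around the cutoff.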

  13. Parametric Bayesian priors and better choice of negative examples improve protein function prediction.

    PubMed

    Youngs, Noah; Penfold-Brown, Duncan; Drew, Kevin; Shasha, Dennis; Bonneau, Richard

    2013-05-01

    Computational biologists have demonstrated the utility of using machine learning methods to predict protein function from an integration of multiple genome-wide data types. Yet, even the best performing function prediction algorithms rely on heuristics for important components of the algorithm, such as choosing negative examples (proteins without a given function) or determining key parameters. The improper choice of negative examples, in particular, can hamper the accuracy of protein function prediction. We present a novel approach for choosing negative examples, using a parameterizable Bayesian prior computed from all observed annotation data, which also generates priors used during function prediction. We incorporate this new method into the GeneMANIA function prediction algorithm and demonstrate improved accuracy of our algorithm over current top-performing function prediction methods on the yeast and mouse proteomes across all metrics tested. Code and Data are available at: http://bonneaulab.bio.nyu.edu/funcprop.html

  14. Loblolly Pine Growth and Yield Prediction for Managed West Gulf Plantations

    Treesearch

    V. Clark Baldwin; D.P. Feduccia

    1987-01-01

    A complete description, including tables, graphs, and computer output, of a growth and yield prediction system providing volume and weight yields in stand and stock table format. An example of system use is given, along with information about the computer program, COMPUTE P-LOB, that operates the system.

  15. The Spatial Structure of Planform Migration - Curvature Relation of Meandering Rivers

    NASA Astrophysics Data System (ADS)

    Guneralp, I.; Rhoads, B. L.

    2005-12-01

    Planform dynamics of meandering rivers have been of fundamental interest to fluvial geomorphologists and engineers because of the intriguing complexity of these dynamics, the role of planform change in floodplain development and landscape evolution, and the economic and social consequences of bank erosion and channel migration. Improved understanding of the complex spatial structure of planform change and capacity to predict these changes are important for effective stream management, engineering and restoration. The planform characteristics of a meandering river channel are integral to its planform dynamics. Active meandering rivers continually change their positions and shapes as a consequence of hydraulic forces exerted on the channel banks and bed, but as the banks and bed change through sediment transport, so do the hydraulic forces. Thus far, this complex feedback between form and process is incompletely understood, despite the fact that the characteristics and the dynamics of meandering rivers have been studied extensively. Current theoretical models aimed at predicting planform dynamics relate rates of meander migration to local and upstream planform curvature where weighting of the influence of curvature on migration rate decays exponentially over distance. This theoretical relation, however, has not been rigorously evaluated empirically. Furthermore, although models based on exponential-weighting of curvature effects yield fairly realistic predictions of meander migration, such models are incapable of reproducing complex forms of bend development, such as double heading or compound looping. This study presents the development of a new methodology based on parametric cubic spline interpolation for the characterization of channel planform and the planform curvature of meandering rivers. 
The use of continuous mathematical functions overcomes the reliance on bend-averaged values or piece-wise discrete approximations of planform curvature - a major limitation of previous studies. Continuous curvature series can be related to measured rates of lateral migration to explore empirically the relationship between spatially extended curvature and local bend migration. The methodology is applied to a study reach along a highly sinuous section of the Embarras River in Illinois, USA, which contains double-headed asymmetrical loops. To identify patterns of channel planform and rates of lateral migration for this reach, a geographical information systems analysis of historical aerial photography from 1936 to 1998 was conducted. Results indicate that parametric cubic spline interpolation provides excellent characterization of the complex planforms and planform curvatures of meandering rivers. The findings also indicate that the spatial structure of the migration rate-curvature relation may be more complex than a simple exponential distance-decay function. The study represents a first step toward unraveling the spatial structure of planform evolution of meandering rivers and for developing models of planform dynamics that accurately relate spatially extended patterns of channel curvature to local rates of lateral migration. Such knowledge is vital for improving the capacity to accurately predict planform change of meandering rivers.
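
    The parametric-spline characterization of planform curvature can be sketched as follows. This is a minimal illustration using SciPy, not the authors' implementation; signed curvature follows from the standard formula for a parametric curve, which is independent of the parameterization used:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def planform_curvature(x, y):
    """Signed curvature along a channel centerline via parametric cubic splines:
    kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
    # Chord-length parameterization approximates arc length along the centerline.
    t = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))))
    sx, sy = CubicSpline(t, x), CubicSpline(t, y)
    dx, dy = sx(t, 1), sy(t, 1)      # first derivatives
    ddx, ddy = sx(t, 2), sy(t, 2)    # second derivatives
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check: a circular bend of radius 50 m has curvature 1/50 everywhere.
theta = np.linspace(0, np.pi, 200)
kappa = planform_curvature(50 * np.cos(theta), 50 * np.sin(theta))
```

    Because the spline is continuous, curvature can then be sampled at the same locations as measured migration rates, rather than bend-averaged.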

  16. Kinetically accessible yield (KAY) for redirection of metabolism to produce exo-metabolites

    DOE PAGES

    Lafontaine Rivera, Jimmy G.; Theisen, Matthew K.; Chen, Po-Wei; ...

    2017-04-05

    The product formation yield (product formed per unit substrate consumed) is often the most important performance indicator in metabolic engineering. Until now, the actual yield could not be predicted, but it could be bounded by its maximum theoretical value. The maximum theoretical yield is calculated by considering the stoichiometry of the pathways and cofactor regeneration involved. Here we found that in many cases, dynamic stability becomes an issue when excessive pathway flux is drawn to a product. This constraint reduces the yield and renders the maximum theoretical yield too loose to be predictive. We propose a more realistic quantity, defined as the kinetically accessible yield (KAY), to predict the maximum accessible yield for a given flux alteration. KAY is either determined by the point of instability, beyond which steady states become unstable and disappear, or by a local maximum before becoming unstable. Thus, KAY is the maximum flux that can be redirected for a given metabolic engineering strategy without losing stability. Strictly speaking, calculation of KAY requires complete kinetic information. With limited or no kinetic information, an Ensemble Modeling strategy can be used to determine a range of likely values for KAY, including an average prediction. We first apply the KAY concept to a toy model to demonstrate the principle of kinetic limitations on yield. We then used a full-scale E. coli model (193 reactions, 153 metabolites), and this approach was successful in predicting production of isobutanol: the calculated KAY values are consistent with experimental data for three genotypes previously published.
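
    The idea that kinetics cap the yield below its stoichiometric maximum can be demonstrated with a toy model in the spirit of the paper's first example (all kinetic parameters below are invented, not taken from the E. coli model): a fixed uptake flux splits between a saturable product branch and a competing branch, and knocking down the competing branch raises yield only until the steady state disappears.

```python
import numpy as np
from scipy.optimize import brentq

# Toy single-metabolite pathway: uptake v0 splits between a product branch
# (Vp, Kp) and a competing growth branch (Vg, Kg). Illustrative values only.
v0, Vp, Kp, Vg, Kg = 1.0, 0.8, 0.5, 0.6, 0.5

def steady_state(knock):
    """Metabolite pool X solving v0 = vp(X) + knock*vg(X), or None when the
    remaining capacity cannot absorb the uptake (steady state disappears)."""
    bal = lambda X: Vp * X / (Kp + X) + knock * Vg * X / (Kg + X) - v0
    if bal(1e9) <= 0:  # saturated capacity below uptake: no steady state
        return None
    return brentq(bal, 1e-12, 1e9)

yields = []
for knock in np.linspace(1.0, 0.0, 101):  # progressively knock down the branch
    X = steady_state(knock)
    if X is None:                          # past the point where states vanish
        break
    yields.append(Vp * X / (Kp + X) / v0)  # product formed per substrate used

kay = max(yields)  # kinetically accessible yield over surviving steady states
# kay stays below 1.0, the stoichiometric maximum: kinetics cap the usable yield.
```

    Here the product branch saturates at Vp = 0.8, so no stable redirection can reach the stoichiometric yield of 1.0, and the best accessible yield sits just below the point where the steady state is lost.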

  17. Comparison of Statistical Models for Analyzing Wheat Yield Time Series

    PubMed Central

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale. PMID:24205280
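
    As a minimal illustration of the smoothing family compared here, Holt's linear-trend recursion (the non-seasonal core of the Holt-Winters method) tracks a level and a trend and extrapolates them forward; the smoothing parameters and the synthetic yield series below are arbitrary, not fitted to the study's data:

```python
import numpy as np

def holt_forecast(y, alpha=0.4, beta=0.1, horizon=1):
    """Holt's linear-trend exponential smoothing (no seasonal component).
    Returns the point forecast `horizon` steps beyond the last observation."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)  # update level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # update trend
    return level + horizon * trend

# A perfectly linear yield trend (t/ha) should be extrapolated almost exactly.
years = np.arange(30)
yields_tha = 3.0 + 0.06 * years
```

    Dynamic linear models express the same level-plus-trend idea in state-space form, which is what makes retrospective trend reconstruction and uncertainty analysis possible.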

  18. Multitrait, Random Regression, or Simple Repeatability Model in High-Throughput Phenotyping Data Improve Genomic Prediction for Wheat Grain Yield.

    PubMed

    Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E

    2017-07-01

    High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved by 70%, on average, from multivariate pedigree and genomic models when including secondary traits in both training and test populations. Additionally, (i) predictive abilities slightly varied for MT, RR, or SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was the best in severe drought, and (iii) the RR model was slightly better than SR and MT models in drought environments. Copyright © 2017 Crop Science Society of America.
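
    The genomic prediction step can be sketched with ridge-regression BLUP of marker effects. This is a simplified stand-in: the study's two-stage models also carry pedigree information and secondary-trait BLUPs, and the shrinkage parameter would normally be estimated from variance components rather than fixed:

```python
import numpy as np

def rrblup(M, y, lam=1.0):
    """Ridge-regression BLUP of marker effects: shrinks all marker effects
    toward zero via the penalized normal equations (M'M + lam*I) u = M'(y - mu)."""
    p = M.shape[1]
    u = np.linalg.solve(M.T @ M + lam * np.eye(p), M.T @ (y - y.mean()))
    return y.mean() + M @ u, u  # fitted genomic values, marker effects

# Synthetic lines-by-markers data: genotypes coded {-1, +1}, a polygenic trait.
rng = np.random.default_rng(2)
M = rng.choice([-1.0, 1.0], size=(300, 80))
beta = rng.normal(0, 0.3, 80)              # true (unobserved) marker effects
y = M @ beta + rng.normal(0, 0.5, 300)     # phenotypes with residual noise
fitted, effects = rrblup(M, y)
```

    In the multivariate extension, the response vector is augmented with secondary-trait BLUPs so that CT and NDVI inform the prediction of yield for untested lines.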

  19. Climate change and maize yield in Iowa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Hong; Twine, Tracy E.; Girvetz, Evan

    Climate is changing across the world, including the major maize-growing state of Iowa in the USA. To maintain crop yields, farmers will need a suite of adaptation strategies, and choice of strategy will depend on how the local to regional climate is expected to change. Here we predict how maize yield might change through the 21st century as compared with late 20th century yields across Iowa, USA, a region representing ideal climate and soils for maize production that contributes substantially to the global maize economy. To account for climate model uncertainty, we drive a dynamic ecosystem model with output from six climate models and two future climate forcing scenarios. Despite a wide range in the predicted amount of warming and change to summer precipitation, all simulations predict a decrease in maize yields from the late 20th century to the middle and late 21st century ranging from 15% to 50%. Linear regression of all models predicts a 6% state-averaged yield decrease for every 1°C increase in warm season average air temperature. When the influence of moisture stress on crop growth is removed from the model, yield decreases either remain the same or are reduced, depending on predicted changes in warm season precipitation. Lastly, our results suggest that even if maize were to receive all the water it needed, under the strongest climate forcing scenario yields will decline by 10-20% by the end of the 21st century.

  2. Power counting to better jet observables

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2014-12-01

    Optimized jet substructure observables for identifying boosted topologies will play an essential role in maximizing the physics reach of the Large Hadron Collider. Ideally, the design of discriminating variables would be informed by analytic calculations in perturbative QCD. Unfortunately, explicit calculations are often not feasible due to the complexity of the observables used for discrimination, and so many validation studies rely heavily, and solely, on Monte Carlo. In this paper we show how methods based on the parametric power counting of the dynamics of QCD, familiar from effective theory analyses, can be used to design, understand, and make robust predictions for the behavior of jet substructure variables. As a concrete example, we apply power counting for discriminating boosted Z bosons from massive QCD jets using observables formed from the n-point energy correlation functions. We show that power counting alone gives a definite prediction for the observable that optimally separates the background-rich from the signal-rich regions of phase space. Power counting can also be used to understand effects of phase space cuts and the effect of contamination from pile-up, which we discuss. As these arguments rely only on the parametric scaling of QCD, the predictions from power counting must be reproduced by any Monte Carlo, which we verify using Pythia 8 and Herwig++. We also use the example of quark versus gluon discrimination to demonstrate the limits of the power counting technique.
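
    The n-point energy correlation functions can be computed directly from particle kinematics; the brute-force sketch below is for illustration only (production analyses use optimized implementations such as the EnergyCorrelator FastJet contrib). The power-counting analysis of the paper motivates ratio observables built from these, such as D2 = e3/(e2)^3:

```python
import numpy as np
from itertools import combinations

def ecf(pts, etas, phis, n=2, beta=2.0):
    """n-point energy correlation function e_n^(beta): sum over all n-particle
    subsets of the product of momentum fractions z_i times all pairwise
    angular separations raised to the power beta."""
    z = np.asarray(pts, float) / np.sum(pts)

    def delta_r(i, j):
        dphi = abs(phis[i] - phis[j])
        dphi = min(dphi, 2 * np.pi - dphi)      # wrap azimuthal difference
        return np.hypot(etas[i] - etas[j], dphi)

    total = 0.0
    for subset in combinations(range(len(z)), n):
        term = np.prod([z[i] for i in subset])
        for i, j in combinations(subset, 2):
            term *= delta_r(i, j) ** beta
        total += term
    return total

# Two equal-pT particles separated by Delta R = 1: e2 = (1/2)(1/2)(1)^beta.
pts, etas, phis = [100.0, 100.0], [0.0, 1.0], [0.0, 0.0]
e2 = ecf(pts, etas, phis, n=2)
```

    With only two particles, e3 vanishes identically, which is the soft/collinear limit that makes ratios of correlation functions sensitive to two-prong substructure.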

  3. Nonlinear Brillouin amplification of finite-duration seeds in the strong coupling regime

    NASA Astrophysics Data System (ADS)

    Lehmann, G.; Spatschek, K. H.

    2013-07-01

    Parametric plasma processes have received renewed interest in the context of generating ultra-intense and ultra-short laser pulses up to the exawatt-zettawatt regime. Both Raman and Brillouin amplification of seed pulses have been proposed. Here, we investigate Brillouin processes in the one-dimensional (1D) backscattering geometry with the help of numerical simulations. For optimal seed amplification, Brillouin scattering is considered in the so-called strong-coupling (sc) regime. Special emphasis lies on the dependence of the amplification process on the finite duration of the initial seed pulses. First, the standard plane-wave instability predictions are generalized to pulse models, and the changes of initial seed pulse forms due to parametric instabilities are investigated. Three-wave-interaction results are compared to predictions by a new (kinetic) Vlasov code. The calculations are then extended to the nonlinear region with pump depletion. Generation of different seed layers is interpreted by self-similar solutions of the three-wave interaction model. Similar to Raman amplification, shadowing of the rear layers by the leading layers of the seed occurs. The shadowing is more pronounced for initially broad seed pulses. The effect is quantified for Brillouin amplification. Kinetic Vlasov simulations agree with the three-wave interaction predictions and thereby affirm the universal validity of self-similar layer formation during Brillouin seed amplification in the strong coupling regime.

  4. Parametric decadal climate forecast recalibration (DeFoReSt 1.0)

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Bhend, Jonas; Liniger, Mark A.; Rust, Henning W.; Müller, Wolfgang A.; Ulbrich, Uwe

    2018-01-01

    Near-term climate predictions such as decadal climate forecasts are increasingly being used to guide adaptation measures. For near-term probabilistic predictions to be useful, systematic errors of the forecasting systems have to be corrected. While methods for the calibration of probabilistic forecasts are readily available, these have to be adapted to the specifics of decadal climate forecasts, including the long time horizon, lead-time-dependent systematic errors (drift) and errors in the representation of long-term changes and variability. These features are compounded by small ensemble sizes to describe forecast uncertainty and a relatively short period for which pairs of reforecasts and observations are typically available to estimate calibration parameters. We introduce the Decadal Climate Forecast Recalibration Strategy (DeFoReSt), a parametric approach to recalibrate decadal ensemble forecasts that takes the above specifics into account. DeFoReSt optimizes forecast quality as measured by the continuous ranked probability score (CRPS). Using a toy model to generate synthetic forecast-observation pairs, we demonstrate the positive effect on forecast quality in situations with pronounced and limited predictability. Finally, we apply DeFoReSt to decadal surface temperature forecasts from the MiKlip prototype system and find consistent, and sometimes considerable, improvements in forecast quality compared with a simple calibration of the lead-time-dependent systematic errors.
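
    Since DeFoReSt minimizes the CRPS, the objective can be made concrete with the closed-form CRPS of a Gaussian forecast against a single observation (a sketch of the scoring rule only; the paper's parametric drift and dispersion adjustments as functions of lead time are not reproduced here):

```python
import math

def crps_gaussian(mu, sigma, obs):
    """CRPS of a Gaussian forecast N(mu, sigma^2) for one observation, using
    the closed form: sigma * [ z(2*Phi(z)-1) + 2*phi(z) - 1/sqrt(pi) ]."""
    z = (obs - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)           # phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))                  # Phi(z)
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))
```

    A recalibration scheme of this kind adjusts the forecast mean (drift correction) and spread at each lead time so that the average CRPS over the reforecast period is as small as possible; lower CRPS rewards both accuracy and well-calibrated uncertainty.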

  5. Development, Evaluation, and Sensitivity Analysis of Parametric Finite Element Whole-Body Human Models in Side Impacts.

    PubMed

    Hwang, Eunjoo; Hu, Jingwen; Chen, Cong; Klein, Katelyn F; Miller, Carl S; Reed, Matthew P; Rupp, Jonathan D; Hallman, Jason J

    2016-11-01

    Occupant stature and body shape may have significant effects on injury risks in motor vehicle crashes, but current finite element (FE) human body models (HBMs) represent occupants of only a few sizes and shapes. Our recent studies have demonstrated that, by using a mesh morphing method, parametric FE HBMs can be rapidly developed to represent a diverse population. However, the biofidelity of those models across a wide range of human attributes has not been established. Therefore, the objectives of this study were 1) to evaluate the accuracy of HBMs incorporating subject-specific geometry information, and 2) to apply the parametric HBMs in a sensitivity analysis to identify the specific parameters affecting body responses in side impact conditions. Four side-impact tests with two male post-mortem human subjects (PMHSs) were selected to evaluate the accuracy of the geometry and impact responses of the morphed HBMs. For each PMHS test, three HBMs were simulated for comparison with the test results: the original Total Human Model for Safety (THUMS) v4.01 (O-THUMS), a parametric THUMS (P-THUMS), and a subject-specific THUMS (S-THUMS). The P-THUMS geometry was predicted from only age, sex, stature, and BMI using our statistical geometry models of skeleton and body shape, while the S-THUMS geometry was based on each PMHS's CT data. The simulation results showed a preliminary trend that the correlations between the P-THUMS-predicted impact responses and the four PMHS tests (mean CORA: 0.84, 0.78, 0.69, 0.70) were better than those between the O-THUMS and the normalized PMHS responses (mean CORA: 0.74, 0.72, 0.55, 0.63), and similar to the correlations between the S-THUMS and the PMHS tests (mean CORA: 0.85, 0.85, 0.67, 0.72).
The sensitivity analysis using the P-THUMS showed that, in side impact conditions, the HBM skeleton and body shape geometries, as well as the body posture, were more important for modeling the occupant impact responses than the bone and soft tissue material properties and the padding stiffness over the given parameter ranges. More investigations are needed to further support these findings.

  6. Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds

    USGS Publications Warehouse

    Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark

    2009-01-01

    Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. 
The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
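    The models above are multiple linear regressions; the fitting step can be illustrated with a closed-form least-squares fit of yield against a single predictor (say, peak 1-hour rainfall). This is a reduced, one-predictor sketch with invented numbers, not the published multi-variable model or its coefficients:

```python
def fit_simple_ols(x, y):
    """Closed-form least squares for y = a + b*x:
    b = cov(x, y) / var(x), a = mean(y) - b * mean(x)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b
```

The full study extends this to several predictors (burned area, time since fire, gradient, relief ratio) via standard multiple regression.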

  7. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. The lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning leads to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws or make the calibrated model ineffective for extrapolative scenarios. This work overviews a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms are embedded in select model components rather than applied as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  8. Application of grey-fuzzy approach in parametric optimization of EDM process in machining of MDN 300 steel

    NASA Astrophysics Data System (ADS)

    Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.

    2018-01-01

    Maraging steel (MDN 300) finds application in many industries, but its high hardness makes it a very difficult material to machine. Electro-discharge machining (EDM) is an extensively popular machining process that can be used to machine such materials, and optimization of the response parameters is essential for machining them effectively. Past researchers have used the Taguchi method to obtain the optimal responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR), considering discharge current, pulse-on time, pulse-off time, arc gap, and duty cycle as process parameters. In this paper, grey relational analysis (GRA) combined with fuzzy logic is applied to this multi-objective optimization problem, and the responses are checked by implementing the derived parametric setting. It was found that the parametric setting derived by the proposed method results in better responses than those reported by past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted results likewise show a significant improvement over the results of past researchers.
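    The grey relational step of such an approach can be sketched as follows: each response is normalized per criterion (larger-the-better for MRR, smaller-the-better for TWR/RWR/SR), deviations from the ideal are converted to grey relational coefficients with distinguishing coefficient zeta = 0.5, and the coefficients are averaged into a grade used to rank parametric settings. The two settings below are hypothetical, not the paper's experimental data:

```python
def grey_relational_grades(settings, larger_better, zeta=0.5):
    """settings: one response vector per parametric setting.
    larger_better: per-response flag (True = maximize, e.g. MRR).
    Returns one grey relational grade per setting (higher = closer to ideal)."""
    n_resp = len(settings[0])
    norm = []  # norm[j][i]: normalized response j for setting i, ideal = 1
    for j in range(n_resp):
        col = [s[j] for s in settings]
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0  # constant column -> avoid divide-by-zero
        if larger_better[j]:
            norm.append([(v - lo) / span for v in col])
        else:
            norm.append([(hi - v) / span for v in col])
    grades = []
    for i in range(len(settings)):
        # coefficient = zeta / (deviation + zeta), deviation in [0, 1]
        coeffs = [zeta / (abs(1.0 - norm[j][i]) + zeta) for j in range(n_resp)]
        grades.append(sum(coeffs) / n_resp)
    return grades
```

A setting that dominates on every criterion gets grade 1.0; the fuzzy-logic layer in the paper then smooths these grades before ranking.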

  9. Modeling and Simulation of a Parametrically Resonant Micromirror With Duty-Cycled Excitation

    PubMed Central

    Shahid, Wajiha; Qiu, Zhen; Duan, Xiyu; Li, Haijun; Wang, Thomas D.; Oldham, Kenn R.

    2014-01-01

    High-frequency, large-scanning-angle, electrostatically actuated microelectromechanical systems (MEMS) mirrors are used in a variety of applications involving fast optical scanning. A 1-D parametrically resonant torsional micromirror for use in biomedical imaging is analyzed here with respect to operation by duty-cycled square waves. Duty-cycled square-wave excitation can have significant advantages for practical mirror regulation and/or control. The mirror's nonlinear dynamics under such excitation are analyzed in a Hill's equation form. This form is used to predict stability regions (the voltage-frequency relationship) of parametric resonance behavior over large scanning angles, using iterative approximations for the nonlinear capacitance behavior of the mirror. Numerical simulations are also performed to obtain the mirror's frequency response over several voltages for various duty cycles. Frequency sweeps, stability results, and duty cycle trends from both analytical and simulation methods are compared with experimental results. Both the analytical models and the simulations show good agreement with experiment over the range of duty-cycled excitations tested. This paper discusses the implications of changing amplitude and phase with duty cycle for robust open-loop operation and future closed-loop operating strategies. PMID:25506188
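    Hill's-equation stability analysis of this kind can be illustrated with the canonical Mathieu special case, x'' + (a - 2q cos 2t) x = 0, whose principal instability tongue sits near a = 1. A simple RK4 integration (a sketch only: the cosine drive stands in for the paper's duty-cycled square wave, and the parameters are illustrative) shows exponential growth inside the tongue and bounded motion outside:

```python
import math

def mathieu_peak(a, q, t_end=60.0, dt=0.005):
    """Integrate x'' + (a - 2q*cos(2t)) x = 0 with classical RK4 from
    x(0)=1e-3, x'(0)=0; return the peak |x| reached (grows inside an
    instability tongue, stays near the initial amplitude outside)."""
    def accel(t, x):
        return -(a - 2.0 * q * math.cos(2.0 * t)) * x
    x, v, t = 1e-3, 0.0, 0.0
    peak = abs(x)
    while t < t_end:
        k1x, k1v = v, accel(t, x)
        k2x, k2v = v + dt / 2 * k1v, accel(t + dt / 2, x + dt / 2 * k1x)
        k3x, k3v = v + dt / 2 * k2v, accel(t + dt / 2, x + dt / 2 * k2x)
        k4x, k4v = v + dt * k3v, accel(t + dt, x + dt * k3x)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        peak = max(peak, abs(x))
    return peak
```

Sweeping (a, q) and thresholding the growth reproduces the familiar tongue-shaped stability chart that the paper's voltage-frequency regions generalize.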

  10. Observation of Geometric Parametric Instability Induced by the Periodic Spatial Self-Imaging of Multimode Waves

    NASA Astrophysics Data System (ADS)

    Krupa, Katarzyna; Tonello, Alessandro; Barthélémy, Alain; Couderc, Vincent; Shalaby, Badr Mohamed; Bendahmane, Abdelkrim; Millot, Guy; Wabnitz, Stefan

    2016-05-01

    Spatiotemporal mode coupling in highly multimode physical systems permits new routes for exploring complex instabilities and forming coherent wave structures. We present here the first experimental demonstration of multiple geometric parametric instability sidebands, generated in the frequency domain through resonant space-time coupling, owing to the natural periodic spatial self-imaging of a multimode quasi-continuous-wave beam in a standard graded-index multimode fiber. The input beam was launched in the fiber by means of an amplified microchip laser emitting sub-ns pulses at 1064 nm. The experimentally observed frequency spacing among sidebands agrees well with analytical predictions and numerical simulations. The first-order peaks are located at the considerably large detuning of 123.5 THz from the pump. These results open the remarkable possibility to convert a near-infrared laser directly into a broad spectral range spanning visible and infrared wavelengths, by means of a single resonant parametric nonlinear effect occurring in the normal dispersion regime. As further evidence of our strong space-time coupling regime, we observed the striking effect that all of the different sideband peaks were carried by a well-defined and stable bell-shaped spatial profile.

  11. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies have been devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to exploit the information in the data, which may characterize the data similarity better. Kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension to out-of-sample data points and thus cannot be applied to inductive learning. In this paper, we show how to make nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper-reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over state-of-the-art parametric kernel methods.
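    The out-of-sample principle (learn coefficients against the training kernel matrix, then evaluate f(x) = sum_i alpha_i k(x, x_i) at new points) can be sketched with ordinary kernel ridge regression. This is a generic illustration of the extension idea, not the paper's hyper-RKHS formulation:

```python
import math

def rbf(u, v, gamma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (u - v) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge_fit(xs, ys, lam=1e-6, gamma=1.0):
    """Solve (K + lam*I) alpha = y on the training set, then return the
    kernel function extension f(x) = sum_i alpha_i k(x, x_i)."""
    K = [[rbf(a, b, gamma) for b in xs] for a in xs]
    for i in range(len(xs)):
        K[i][i] += lam
    alpha = solve(K, ys)
    return lambda x: sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, xs))
```

The returned closure is defined for any x, which is exactly what transductive methods (tied to a fixed kernel matrix) lack.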

  12. The binned bispectrum estimator: template-based and non-parametric CMB non-Gaussianity searches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucher, Martin; Racine, Benjamin; Tent, Bartjan van, E-mail: bucher@apc.univ-paris7.fr, E-mail: benjar@uio.no, E-mail: vantent@th.u-psud.fr

    2016-05-01

    We describe the details of the binned bispectrum estimator as used for the official 2013 and 2015 analyses of the temperature and polarization CMB maps from the ESA Planck satellite. The defining aspect of this estimator is the determination of a map bispectrum (3-point correlation function) that has been binned in harmonic space. For a parametric determination of the non-Gaussianity in the map (the so-called f_NL parameters), one takes the inner product of this binned bispectrum with theoretically motivated templates. However, as a complementary approach one can also smooth the binned bispectrum using a variable smoothing scale in order to suppress noise and make coherent features stand out above the noise. This allows one to look in a model-independent way for any statistically significant bispectral signal. This approach is useful for characterizing the bispectral shape of the galactic foreground emission, for which a theoretical prediction of the bispectral anisotropy is lacking, and for detecting a serendipitous primordial signal, for which a theoretical template has not yet been put forth. Both the template-based and the non-parametric approaches are described in this paper.

  13. Predictive capacity of a non-radioisotopic local lymph node assay using flow cytometry, LLNA:BrdU-FCM: Comparison of a cutoff approach and inferential statistics.

    PubMed

    Kim, Da-Eun; Yang, Hyeri; Jang, Won-Hee; Jung, Kyoung-Mi; Park, Miyoung; Choi, Jin Kyu; Jung, Mi-Sook; Jeon, Eun-Young; Heo, Yong; Yeo, Kyung-Wook; Jo, Ji-Hoon; Park, Jung Eun; Sohn, Soo Jung; Kim, Tae Sung; Ahn, Il Young; Jeong, Tae-Cheon; Lim, Kyung-Min; Bae, SeungJin

    2016-01-01

    In order for a novel test method to be applied for regulatory purposes, its reliability and relevance, i.e., reproducibility and predictive capacity, must be demonstrated. Here, we examine the predictive capacity of a novel non-radioisotopic local lymph node assay, LLNA:BrdU-FCM (5-bromo-2'-deoxyuridine-flow cytometry), with a cutoff approach and inferential statistics as prediction models. Twenty-two reference substances in OECD TG429 were tested with a concurrent positive control, 25% hexylcinnamaldehyde (PC), and the stimulation index (SI), representing the fold increase in lymph node cells over the vehicle control, was obtained. The optimal cutoff SI (2.7 ≤ cutoff < 3.5) with respect to predictive capacity was obtained from a receiver operating characteristic (ROC) curve, which produced 90.9% accuracy for the 22 substances. To address the inter-test variability in responsiveness, SI values standardized against the PC were employed to obtain the optimal percentage cutoff (42.6 ≤ cutoff < 57.3% of PC), which produced 86.4% accuracy. A test substance may be diagnosed as a sensitizer if a statistically significant increase in SI is elicited. The parametric one-sided t-test and the non-parametric Wilcoxon rank-sum test produced 77.3% accuracy. Similarly, a test substance could be defined as a sensitizer if the SI means of the vehicle control and of the low, middle, and high concentrations were statistically significantly different, tested using ANOVA or Kruskal-Wallis with post hoc analysis (Dunnett or DSCF (Dwass-Steel-Critchlow-Fligner), respectively, depending on the equal variance test), producing 81.8% accuracy. The absolute SI-based cutoff approach produced the best predictive capacity; however, the discordant decisions between prediction models need to be examined further. Copyright © 2015 Elsevier Inc. All rights reserved.
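    The cutoff approach can be sketched as a ROC-style scan over candidate SI thresholds, keeping the one that maximizes classification accuracy against known sensitizer labels. The SI values and labels below are invented for illustration, not the study's 22-substance data:

```python
def best_cutoff(si_values, is_sensitizer):
    """Scan candidate cutoffs (midpoints between sorted SI values) and
    return (cutoff, accuracy) for the accuracy-maximizing threshold;
    a substance is predicted 'sensitizer' when SI >= cutoff."""
    pts = sorted(set(si_values))
    candidates = [(a + b) / 2 for a, b in zip(pts, pts[1:])]
    best_c, best_acc = None, -1.0
    for c in candidates:
        acc = sum((si >= c) == lab
                  for si, lab in zip(si_values, is_sensitizer)) / len(si_values)
        if acc > best_acc:
            best_c, best_acc = c, acc
    return best_c, best_acc
```

A full ROC analysis would also track sensitivity/specificity per threshold; accuracy alone suffices to show the mechanics of picking an optimal SI cutoff band.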

  14. A Cluster Analytic Approach to Identifying Predictors and Moderators of Psychosocial Treatment for Bipolar Depression: Results from STEP-BD

    PubMed Central

    Deckersbach, Thilo; Peters, Amy T.; Sylvia, Louisa G.; Gold, Alexandra K.; da Silva Magalhaes, Pedro Vieira; Henry, David B.; Frank, Ellen; Otto, Michael W.; Berk, Michael; Dougherty, Darin D.; Nierenberg, Andrew A.; Miklowitz, David J.

    2016-01-01

    Background: We sought to address how predictors and moderators of psychotherapy for bipolar depression – identified individually in prior analyses – can inform the development of a metric for prospectively classifying treatment outcome in intensive psychotherapy (IP) versus collaborative care (CC) adjunctive to pharmacotherapy in the Systematic Treatment Enhancement Program (STEP-BD) study. Methods: We conducted post-hoc analyses on 135 STEP-BD participants using cluster analysis to identify subsets of participants with similar clinical profiles and investigated this combined metric as a moderator and predictor of response to IP. We used agglomerative hierarchical cluster analyses and k-means clustering to determine the content of the clinical profiles. Logistic regression and Cox proportional hazard models were used to evaluate whether the resulting clusters predicted or moderated likelihood of recovery or time until recovery. Results: The cluster analysis yielded a two-cluster solution: 1) “less-recurrent/severe” and 2) “chronic/recurrent.” Rates of recovery in IP were similar for less-recurrent/severe and chronic/recurrent participants. Less-recurrent/severe patients were more likely than chronic/recurrent patients to achieve recovery in CC (p = .040, OR = 4.56). IP yielded a faster recovery for chronic/recurrent participants, whereas CC led to recovery sooner in the less-recurrent/severe cluster (p = .034, OR = 2.62). Limitations: Cluster analyses require list-wise deletion of cases with missing data, so we were unable to conduct analyses on all STEP-BD participants. Conclusions: A well-powered, parametric approach can distinguish patients based on illness history and provide clinicians with symptom profiles of patients that confer differential prognosis in CC vs. IP. PMID:27289316
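    The k-means step of such a clustering pipeline can be sketched with plain Lloyd iterations on toy two-dimensional clinical-profile features (the points and starting centers are hypothetical, not STEP-BD data):

```python
def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm: assign each point to its nearest center,
    then move each center to its group mean. points/centers are tuples."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # empty clusters keep their previous center
        centers = [tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

In practice one would standardize the features and choose k (here fixed by the initial centers) via a criterion such as silhouette width; the study's hierarchical step serves a similar role.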

  15. Rapid analysis of composition and reactivity in cellulosic biomass feedstocks with near-infrared spectroscopy

    DOE PAGES

    Payne, Courtney E.; Wolfrum, Edward J.

    2015-03-12

    Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure the predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop them. In conclusion, it is possible to build effective multispecies feedstock models for composition as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
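    The PLS calibration idea can be sketched with a single-latent-variable PLS1 fit (NIPALS-style). This is a deliberately minimal sketch: mean-centering, multiple components, and cross-validation, which real NIR calibrations require, are omitted:

```python
def pls1_one_component(X, y):
    """One-component PLS1: weight w proportional to X^T y, score t = X w,
    then regress y on t. Returns a predictor for new feature vectors."""
    n, p = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    tt = sum(v * v for v in t)
    b = sum(y[i] * t[i] for i in range(n)) / tt
    return lambda x: b * sum(x[j] * w[j] for j in range(p))
```

When the spectra vary along a single direction correlated with the property of interest (as in the rank-one toy data below), one component already fits exactly; NIR composition models typically need many more.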

  16. Rapid analysis of composition and reactivity in cellulosic biomass feedstocks with near-infrared spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Payne, Courtney E.; Wolfrum, Edward J.

    Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure the predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop them. In conclusion, it is possible to build effective multispecies feedstock models for composition as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.

  17. MULTI-KEV X-RAY YIELDS FROM HIGH-Z GAS TARGETS FIELDED AT OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, J O; Fournier, K B; May, M J

    2010-11-04

    The authors report on modeling of x-ray yield from gas-filled targets shot at the OMEGA laser facility. The OMEGA targets were 1.8 mm long, 1.95 mm diameter Be cans filled with either a 50:50 Ar:Xe mixture, pure Ar, pure Kr, or pure Xe at ~1 atm. The OMEGA experiments heated the gas with 20 kJ of 3ω (~350 nm) laser energy delivered in a 1 ns square pulse. The emitted x-ray flux was monitored with the x-ray-diode-based DANTE instruments in the sub-keV range. Two-dimensional x-ray images (for energies of 3-5 keV) of the targets were recorded with gated x-ray detectors, and the x-ray spectra were recorded with the HENWAY crystal spectrometer at OMEGA. Predictions are 2D r-z cylindrical with DCA NLTE atomic physics. The models generally: (1) underpredict the Xe L-shell yields; (2) overpredict the Ar K-shell yields; (3) correctly predict the Xe thermal yields; and (4) greatly underpredict the Ar thermal yields. However, there are spreads within the data; e.g., the DMX Ar K-shell yields are correctly predicted. The predicted thermal yields show strong angular dependence.

  18. Novel loci interacting epistatically with bone morphogenetic protein receptor 2 cause familial pulmonary arterial hypertension.

    PubMed

    Rodriguez-Murillo, Laura; Subaran, Ryan; Stewart, William C L; Pramanik, Sreemanta; Marathe, Sudhir; Barst, Robyn J; Chung, Wendy K; Greenberg, David A

    2010-02-01

    Familial pulmonary arterial hypertension (FPAH) is a rare, autosomal-dominant, inherited disease with low penetrance. Mutations in the bone morphogenetic protein receptor 2 (BMPR2) have been identified in at least 70% of FPAH patients. However, the lifetime penetrance of these BMPR2 mutations is 10% to 20%, suggesting that genetic and/or environmental modifiers are required for disease expression. Our goal in this study was to identify genetic loci that may influence FPAH expression in BMPR2 mutation carriers. We performed a genome-wide linkage scan in 15 FPAH families segregating for BMPR2 mutations. We used a dense single-nucleotide polymorphism (SNP) array and a novel multi-scan linkage procedure that provides increased power and precision for the localization of linked loci. We observed linkage evidence in four regions: 3q22 (median log of the odds (LOD) = 3.43), 3p12 (median LOD = 2.35), 2p22 (median LOD = 2.21), and 13q21 (median LOD = 2.09). When used in conjunction with the non-parametric bootstrap, our approach yields high resolution to identify candidate gene regions containing putative BMPR2-interacting genes. Imputation of the disease model by LOD-score maximization indicates that the 3q22 locus alone predicts most FPAH cases in BMPR2 mutation carriers, providing strong evidence that BMPR2 and the 3q22 locus interact epistatically. Our findings suggest that genotypes at loci in the newly identified regions, especially at 3q22, could improve FPAH risk prediction in FPAH families. We also suggest other targets for therapeutic intervention.

  19. It's all relative: ranking the diversity of aquatic bacterial communities.

    PubMed

    Shaw, Allison K; Halpern, Aaron L; Beeson, Karen; Tran, Bao; Venter, J Craig; Martiny, Jennifer B H

    2008-09-01

    The study of microbial diversity patterns is hampered by the enormous diversity of microbial communities and the lack of resources to sample them exhaustively. For many questions about richness and evenness, however, one only needs to know the relative order of diversity among samples rather than total diversity. We used 16S libraries from the Global Ocean Survey to investigate the ability of 10 diversity statistics (including rarefaction, non-parametric, parametric, curve extrapolation and diversity indices) to assess the relative diversity of six aquatic bacterial communities. Overall, we found that the statistics yielded remarkably similar rankings of the samples for a given sequence similarity cut-off. This correspondence, despite the different underlying assumptions of the statistics, suggests that diversity statistics are a useful tool for ranking samples of microbial diversity. In addition, sequence similarity cut-off influenced the diversity ranking of the samples, demonstrating that diversity statistics can also be used to detect differences in phylogenetic structure among microbial communities. Finally, a subsampling analysis suggests that further sequencing from these particular clone libraries would not have substantially changed the richness rankings of the samples.
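    Two of the statistic families compared (a diversity index and a non-parametric richness estimator) can be computed directly from per-taxon counts. Shannon's H' and Chao1 are standard forms; the paper evaluates ten statistics, so this is an illustrative subset with toy counts:

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum p_i ln p_i over taxa with nonzero counts."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2*F2), where F1/F2 are the numbers
    of singleton and doubleton taxa; bias-corrected form when F2 = 0."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0
```

Ranking samples then amounts to sorting them by the chosen statistic; the paper's finding is that such rankings largely agree across statistics at a fixed sequence similarity cut-off.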

  20. Parametric study of potential early commercial MHD power plants

    NASA Technical Reports Server (NTRS)

    Hals, F. A.

    1979-01-01

    Three different reference power plant configurations were considered, with parametric variations of the various design parameters for each plant. Two of the reference plant designs were based on the use of high temperature regenerative air preheaters separately fired by a low-Btu gas produced from a coal gasifier integrated with the power plant. The third reference plant design was based on the use of oxygen-enriched combustion air preheated to a more moderate temperature in a tubular-type metallic recuperative heat exchanger which is part of the bottoming plant heat recovery system. Comparative information was developed on plant performance and economics. The highest net plant efficiency, about 45 percent, was attained by the reference plant design using a high temperature air preheater separately fired with the advanced entrained-bed gasifier. The use of oxygen enrichment of the combustion air yielded the lowest cost of generating electricity at a slightly lower plant efficiency. Both of these reference plant designs are identified as potentially attractive for early MHD power plant applications.
